I've been looking at Materialize for a while (https://materialize.com/). It handles automatically refreshed materialized views. Last time I checked, it didn't support some Postgres SQL constructs that I use often, but I'm really looking forward to it.
I think the problem is when you have a materialized view that takes hours to refresh. We're lucky that 99% of our traffic falls between 07:00 and 19:00 on weekdays, so we can just refresh at night, but that won't work for everyone.
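For what it's worth, the nightly refresh can be just a scheduled statement; with pg_cron (assuming that extension is installed, and with a made-up job and view name) it would look roughly like this:

    -- schedule a nightly refresh at 02:00 (sketch; names are hypothetical)
    SELECT cron.schedule(
      'nightly-refresh-order-stats',   -- job name
      '0 2 * * *',                     -- every day at 02:00
      'REFRESH MATERIALIZED VIEW order_stats'
    );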
I don't know much about how PostgreSQL works internally, so I probably just don't understand the constraints. Anyway, as I understand it, there are two ways to refresh: either concurrently or not.
If not, Postgres rebuilds the view from its definition on the side and, at the end, switches some internal structures from the old to the new query result. Seems reasonable, but for some reason, which I don't understand due to my limited knowledge, an ACCESS EXCLUSIVE lock is held for the entire duration of the refresh and all read queries are blocked, which doesn't work for us.
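Roughly what I mean, as a sketch with a made-up view name:

    -- plain refresh: holds an ACCESS EXCLUSIVE lock on the matview for the
    -- whole rebuild, so even SELECTs against it block until it finishes
    REFRESH MATERIALIZED VIEW order_stats;

    -- meanwhile, in another session, this waits until the refresh is done:
    SELECT * FROM order_stats LIMIT 10;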
If you refresh concurrently, Postgres rebuilds the view from its definition and compares the old and the new query results with a full outer join to compute a diff. The diff is then applied to the old data (like regular table INSERT/UPDATE/DELETE, I assume), so you get away with just an EXCLUSIVE lock and read access still works. There are two downsides: first, it requires a unique index for the join; second, the full outer join is a lot of additional work.
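The concurrent variant, again just a sketch with made-up names:

    -- CONCURRENTLY requires at least one unique index on plain columns
    -- (no expression index, no WHERE clause) so old and new rows can be joined
    CREATE UNIQUE INDEX order_stats_uq ON order_stats (order_id);

    -- rebuilds into a temporary table, diffs old vs. new with a FULL OUTER JOIN,
    -- and applies the diff under an EXCLUSIVE lock, so concurrent reads keep working
    REFRESH MATERIALIZED VIEW CONCURRENTLY order_stats;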
I never had the time to test Materialize, but it seems to do what I want with its continuous refresh.
I also thought about splitting the materialized view into two: one for rarely changing data and another for the smaller part of the data which changes daily. Then I would only have to refresh the smaller view and UNION ALL both materialized views in a regular view. Not sure how well that would work with the Postgres query planner.
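Something like this, with made-up table/column names and a hand-waved fixed cutoff:

    -- big, rarely refreshed slice
    CREATE MATERIALIZED VIEW orders_archive AS
      SELECT order_id, customer_id, total, order_date
      FROM orders
      WHERE order_date < DATE '2021-01-01';

    -- small, daily-changing slice (the only one refreshed regularly)
    CREATE MATERIALIZED VIEW orders_recent AS
      SELECT order_id, customer_id, total, order_date
      FROM orders
      WHERE order_date >= DATE '2021-01-01';

    -- regular view gluing the two back together for queries
    CREATE VIEW orders_all AS
      SELECT * FROM orders_archive
      UNION ALL
      SELECT * FROM orders_recent;

    REFRESH MATERIALIZED VIEW orders_recent;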
Not sure how that would work with the PG query planner either, but a batch layer for rarely changing data plus a separate path for rapidly changing data is basically the Lambda architecture, so probably a good call!
There's one gotcha with this approach: if another DDL operation runs simultaneously with REFRESH MATERIALIZED VIEW, you can get an internal Postgres error. You can't be sure the refresh won't coincide with, for example, a GRANT on all tables in the schema.
MSSQL has "indexed views", which are updated automatically and instantly... but they destroy your insert/update performance, and their requirements are so draconian as to be nearly impossible to actually use (no left joins, no subqueries, no self joins, etc.).
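Roughly what that looks like in T-SQL, from memory (names are made up, and the real list of restrictions is much longer than the comments here):

    -- the view must be schema-bound, use two-part table names, and include
    -- COUNT_BIG(*) when it has a GROUP BY; outer joins, subqueries and
    -- self joins are not allowed
    CREATE VIEW dbo.CustomerTotals
    WITH SCHEMABINDING
    AS
    SELECT customer_id,
           COUNT_BIG(*) AS order_count
    FROM dbo.Orders
    GROUP BY customer_id;
    GO

    -- the unique clustered index is what materializes the view and makes
    -- SQL Server maintain it synchronously on every write to dbo.Orders
    CREATE UNIQUE CLUSTERED INDEX IX_CustomerTotals
        ON dbo.CustomerTotals (customer_id);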
Yes, views are nice, but there's also a fair argument for not needlessly bogging down a table. Sure, they were making up the data, but a flat table with stats, profile data, and other data that could easily live externally is just bloat. Once you have an id, the static fields can be retrieved from other services/data stores.
I think their point is more ‘don’t store all that junk in your primary database and then do all your work on it there too if you can just stuff it somewhere else’. Which has pros and cons and depends a lot on various scaling factors.
I'm pretty sure most engines use the term "materialized views" for eventually consistent tables. The only DB I've seen with that kind of ACID materialized view is MS SQL Server, which calls them "indexed views".
It’s a good thing to use 2021 hardware. In 2001 you had to be more creative, as keeping summary tables as materialized views would have blown up the production database, lol.
On an unrelated note: still hoping for those automatically refreshed materialized views in PostgreSQL, à la what VoltDB has.