I've avoided looking too deeply into cursive in the past because I naively assumed it would be difficult to make it look like anything other than a late-90s BIOS, but this is exciting.
This looks like a "draw the rest of the fucking owl" thing. Yes, it's not a lot of manual work, but I wouldn't be able to do it at all because I don't have a sense for design.
I didn't downvote you, but as an outside observer I can see that folks might take issue with the fact that you start with a tl;dr, suggesting that you are summarizing the whole article, follow it with an in-depth analysis, and then end by stating that you didn't read the whole thing.
Thank you very much. I was really excited to read the article and came away very disappointed, but my take was too hot and too thin. I definitely should have done better. Thank you for expanding my perspective.
I'm not sure exactly how fast FAST REFRESH ON COMMIT is, but we at materialize are very fast, and getting faster. Once data is in materialized we can process streams to multiple downstream views with millisecond-level latencies. I'm working on improving our benchmarking capabilities so that we can provide less qualified answers to the question of "how fast is materialized?"
Much more interesting than our speed, though, in my opinion, is the fact that you can use materialized as the place where you do joins _across_ databases and file formats. It's particularly interesting in a microservices environment, where you may have a postgres db set up by one team and a mysql db set up by another team, and the only thing you care about is doing some join or aggregate across the two. With materialized (and debezium) you can just stream the data in and have continuously up-to-date views across services. Combine this with a graphql or rest postgres api layer (like hasura) and a large amount of CRUD code -- entire CRUD services that I've worked on in the past -- just disappears.
This is exactly what we do! This is a walkthrough of connecting a db (these docs are for mysql, but postgres works and is almost identical) via debezium and defining views in materialize: https://materialize.io/docs/demos/business-intelligence/
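As a rough sketch of what that walkthrough boils down to (broker address, topic names, and the final view are all hypothetical here; the linked docs have the exact syntax and a full schema):

```sql
-- One Debezium-fed Kafka source per upstream database.
CREATE SOURCE pg_users
FROM KAFKA BROKER 'kafka:9092' TOPIC 'pg.public.users'
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY 'http://schema-registry:8081'
ENVELOPE DEBEZIUM;

CREATE SOURCE mysql_orders
FROM KAFKA BROKER 'kafka:9092' TOPIC 'mysql.shop.orders'
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY 'http://schema-registry:8081'
ENVELOPE DEBEZIUM;

-- A continuously maintained view joining across the two databases.
CREATE MATERIALIZED VIEW orders_per_user AS
SELECT u.id, u.email, count(o.id) AS order_count
FROM pg_users u
JOIN mysql_orders o ON o.user_id = u.id
GROUP BY u.id, u.email;
```

After that, `SELECT * FROM orders_per_user` always reflects the latest committed state of both upstream databases.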
> if you give it a query that only requires certain result rows from one of its mat views, then Materialize is only going to compute the intermediate rows
This is absolutely correct!
> You can just have a bunch of “the same” Materialize node (i.e. every node just freestanding clone of a template node, with exactly the same sources and matviews) and then hit them with the parts of a map-reduce query
This should work, but we have been thinking about and testing it differently internally. In general you should be able to create materialized views on different "shards" that have different `where` conditions, allowing you to control memory usage that way. This technique does require data that is actually partitionable in this way, just as it must be in MapReduce.
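For example, a hypothetical two-shard split over a made-up `orders` source might look like this, with each view living on its own node:

```sql
-- On node 0: only keep state for even-numbered users.
CREATE MATERIALIZED VIEW totals_shard0 AS
SELECT user_id, sum(amount) AS total
FROM orders
WHERE user_id % 2 = 0
GROUP BY user_id;

-- On node 1: only keep state for odd-numbered users.
CREATE MATERIALIZED VIEW totals_shard1 AS
SELECT user_id, sum(amount) AS total
FROM orders
WHERE user_id % 2 = 1
GROUP BY user_id;
```

Each node then holds roughly half the intermediate state, at the cost of having to route lookups to the right shard.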
> this is all irrelevant the moment you write a query that needs a pure reduce
Of course, with materialize's sinks you can spin up a bunch of `materialized`s and connect them for a final reduce after data has gone through e.g. kafka or shared files. Being able to write joins and aggregates across heterogeneous sources makes this kind of workload actually pretty pleasant.
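A hedged sketch with made-up names: each shard writes its partial aggregate out through a sink, and a final instance consumes the shard topics as ordinary sources to finish the reduce.

```sql
-- On each shard: emit the partial aggregate back to Kafka.
CREATE SINK totals_out
FROM totals_shard0
INTO KAFKA BROKER 'kafka:9092' TOPIC 'totals-shard0'
FORMAT AVRO USING CONFLUENT SCHEMA REGISTRY 'http://schema-registry:8081';
```

The final `materialized` then creates a source per `totals-shard*` topic, unions them, and aggregates once more for the global answer.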
> materialize has to be able to keep all of its state in memory
For now. We have a pretty good idea of what needs to be done to shed state to disk, and have designed to be able to implement it. We expect it to "just" be a matter of putting in the engineering effort.
> I've come across a few cases, when pushing python's type annotations to their limits, that force you to put the type names in string quotes
With pep-0563, python 3.7, and `from __future__ import annotations`, this should no longer be necessary. Those are some fairly specific prerequisites, but I have been using it where I can and it is so nice.
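A minimal sketch of what the future import buys you (the `Node` class is a made-up example): with it, annotations are never evaluated at definition time, so a forward reference to a class inside its own body needs no quotes.

```python
from __future__ import annotations

from typing import Optional


class Node:
    # Pre-PEP 563, the annotation below would have to be the string
    # "Node", because the class isn't defined yet when the annotation
    # would otherwise be evaluated.
    def __init__(self, value: int, next: Optional[Node] = None) -> None:
        self.value = value
        self.next = next


# Under PEP 563 every annotation is stored as its source text...
print(Node.__init__.__annotations__["next"])  # → Optional[Node]
```

When you actually need the real types (e.g. for runtime validation), `typing.get_type_hints(Node.__init__)` resolves the stored strings back to type objects.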
I'm not experienced in this space, but you might be interested in the Real-Time For the Masses[1] project. The author of that is the lead of the embedded devices working group.
I agree, that is exactly the problem that I, in particular, think we are solving.