Yes. Project Mentat fits into a conceptual lineage that includes Freebase's graphd and 2005-onward Semantic Web stores. We have aimed for compatibility with Datomic and DataScript for least surprise, but if you squint there's a little AllegroGraph in the mix, too.
Datomic's model (both architectural and conceptual) draws from Clojure's concepts of persistence. That model isn't free, so Mentat deviates from it where it makes sense to do so: at present we don't implement querying of history or past states, for example, and when we do it won't be free.
We'll get closer to full Datomic-style datom store capabilities over time, but we'll make different performance tradeoffs.
Thanks for the explanation. For me the key takeaway is:
> at present we don't implement querying of history or past states, for example, and when we do it won't be free.
That's the bit of Datomic that intrigues me and offers the most compelling use case (I don't have to add data versioning and history at my app layer), and it's what I'm looking for in other systems.
Note that we do store the full transaction log, just like Datomic, and Mentat will allow querying of it (and replication, and replay, and…). We haven't implemented history querying yet because we haven't needed it for application code.
The trick with Datomic is that every time you grab a `db` instance, it's a snapshot, and the system's index chunking and storage replication are necessarily built around the ability to continue using those older index chunks, potentially for a very long time.
Most consumers, most of the time, just want to query the store as it stands at that moment, but Datomic peers pay the space and time penalty of keeping and retrieving historical index chunks in order to answer those historical queries.
My current thoughts are:
1. To allow for short-term snapshot querying through something like `db.keep()`, implemented via a SQLite read transaction (see the first sketch after this list). That's not free: the database WAL will continue to grow until the read transaction ends, so it isn't ideal for all workloads, but it'll do.
For some queries it's enough to track a last-seen tx value and filter on it everywhere, but that becomes difficult once cardinality-one and unique-identity attributes are involved.
2. The obvious equivalent to Datomic's 'with' is an uncommitted write transaction (see the second sketch below). Naturally this blocks other writers while it exists, so alternative implementations (e.g., writing to a complete disk copy of the database, or writing to a 'delta' table) might make sense.
At some very hazy point in the future we might try to get SQLite support for this: after all, if we can guarantee that a write transaction won't be committed, we could use a separate WAL file for the `with` and avoid blocking other writers.
3. A longer-term approach to snapshots/DB-as-value is to materialize the datoms at the specified instant in time, either in a temporary table or in a real persisted table (see the third sketch below). That is: `db.keep_forever()` will give you a new structure to query, and calling code will be responsible for cleaning up that space.
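To make point 1 concrete, here's a minimal sketch of the read-transaction approach at the SQLite level, written against rusqlite. The `datoms` table and the surrounding setup are illustrative assumptions, not Mentat's actual API or storage layout.

```rust
// Sketch only: a long-lived SQLite read transaction used as a cheap snapshot.
// Assumes rusqlite and a `datoms` table; neither is a statement about Mentat's
// real API or schema.
use rusqlite::{Connection, Result, TransactionBehavior};

fn main() -> Result<()> {
    let mut reader = Connection::open("mentat.db")?;

    // WAL mode is what lets a reader hold a stable snapshot while writers proceed.
    // The PRAGMA returns the resulting journal mode as a single row.
    let _mode: String = reader.query_row("PRAGMA journal_mode=WAL", [], |row| row.get(0))?;

    // A deferred transaction pins its snapshot at the first read and holds it until
    // the transaction ends. While it is held, the WAL cannot be checkpointed past
    // that point, which is the "WAL will continue to grow" cost noted above.
    let snapshot = reader.transaction_with_behavior(TransactionBehavior::Deferred)?;

    // Every query inside the transaction sees the database as of that first read,
    // regardless of what other connections commit in the meantime.
    let visible: i64 =
        snapshot.query_row("SELECT COUNT(*) FROM datoms", [], |row| row.get(0))?;
    println!("datoms visible in this snapshot: {}", visible);

    // Finishing (or dropping) the transaction releases the snapshot and allows the
    // WAL to be checkpointed again.
    snapshot.finish()?;
    Ok(())
}
```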
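Point 2 can be sketched the same way: take a write transaction, apply the speculative assertions, query the result, and roll back instead of committing. The `datoms (e, a, v, tx)` shape below is a placeholder, not Mentat's real schema.

```rust
// Sketch only: a Datomic-style `with` modelled as an uncommitted SQLite write
// transaction. The column layout is a placeholder, and holding the transaction
// open blocks other writers, exactly as noted above.
use rusqlite::{params, Connection, Result, TransactionBehavior};

fn main() -> Result<()> {
    let mut conn = Connection::open("mentat.db")?;

    // IMMEDIATE takes the write lock up front, so the speculative writes cannot
    // fail halfway through because another writer got there first.
    let speculative = conn.transaction_with_behavior(TransactionBehavior::Immediate)?;

    // Apply the assertions we want to "try out".
    speculative.execute(
        "INSERT INTO datoms (e, a, v, tx) VALUES (?1, ?2, ?3, ?4)",
        params![65536, 100, "speculative value", -1],
    )?;

    // Queries inside the transaction see the store as it would look if the
    // assertions were committed.
    let count: i64 =
        speculative.query_row("SELECT COUNT(*) FROM datoms", [], |row| row.get(0))?;
    println!("datoms including speculative assertions: {}", count);

    // Roll back rather than commit: the real store is left untouched.
    speculative.rollback()?;
    Ok(())
}
```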
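And point 3, sketched under the same caveats: because the full transaction log is stored, a point-in-time view can be materialized into a real table by replaying the log. The `transactions (e, a, v, tx, added)` layout, the `as_of` value, and the generated table name are all assumptions standing in for whatever `db.keep_forever()` would actually produce.

```rust
// Sketch only: materializing the datoms visible as of a given tx into a persisted
// table by replaying the transaction log. Table and column names are placeholders.
use rusqlite::{Connection, Result};

fn main() -> Result<()> {
    let conn = Connection::open("mentat.db")?;
    let as_of: i64 = 1000; // hypothetical tx id to freeze at

    // A datom (e, a, v) is visible at `as_of` if the most recent log entry for that
    // triple at or before `as_of` asserted it rather than retracting it.
    // `as_of` is an integer, so formatting it into the SQL is safe here.
    let create = format!(
        "CREATE TABLE snapshot_as_of_{tx} AS
         SELECT t.e, t.a, t.v
         FROM transactions t
         WHERE t.tx <= {tx}
           AND t.added = 1
           AND t.tx = (SELECT MAX(t2.tx)
                       FROM transactions t2
                       WHERE t2.e = t.e AND t2.a = t.a AND t2.v = t.v
                         AND t2.tx <= {tx})",
        tx = as_of
    );
    conn.execute(&create, [])?;

    // Calling code now owns the table and must eventually drop it: that's the
    // cleanup obligation mentioned in point 3.
    let frozen: i64 = conn.query_row(
        &format!("SELECT COUNT(*) FROM snapshot_as_of_{}", as_of),
        [],
        |row| row.get(0),
    )?;
    println!("datoms frozen at tx {}: {}", as_of, frozen);
    Ok(())
}
```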
The reason I say "won't be free" is that each of these operations imposes a cost when the feature is used: either SQLite or Mentat will have to do some work to allow an extended period of isolation, to reconstruct some state, or to persist some state.
That's in contrast to Datomic, which imposes some overhead every time index chunks are built or retrieved. It's also an interesting parallel to Clojure vs Rust: Clojure's data structures are persistent by default, giving you snapshots and safety at a cost everyone pays; Rust believes that you shouldn't pay for abstractions you don't use.
Sure - I'm highlighting the differences from DataScript/Datomic. My take is that the 'inspiration' drawn from each (especially DataScript) is quite loose.