Jepsen: NATS 2.12.1 (jepsen.io)
283 points by aphyr 8 hours ago | 105 comments




Every time someone builds one of these things and skips over "overcomplicated theory", aphyr destroys them. At this point, I wonder if we could train an AI to look over a project's documentation and predict whether it's likely to lose committed writes just based on the marketing / technical claims. We probably can.

/me strokes my long grey beard and nods

People always think "theory is overrated" or "hacking is better than a formal education"

And then proceed to shoot themselves in the foot with "workarounds" that break in well-known, well-documented, well-traversed problem spaces


certainly a narrative that is popular among the grey beard crowd, yes. in pretty much every field i've worked in, the opposite problem has been much, much more common.

What fields? Cargo culting is annoying and definitely leads to suboptimal solutions and sometimes total misses, but I’ve rarely found that simply reading literature on a thorny topic prevents you from thinking outside the box. Most people I’ve seen work who were actually innovating (as in novel solutions and/or execution) understood the current SOTA of what they were working on inside and out.

what's the opposite problem statement?

People overly beholden to the tried-and-true 'known' way of addressing a problem space, not considering (or belittling) alternatives. Many of the things that have been most aggressively 'bitter lesson'ed in the last decade fall into this category.

Like this bug report?

The things that have been "disrupted" haven't delivered: blockchains are still a scam, food delivery services are worse than before (restaurants are worse off, the people making the deliveries are worse off), and taxis still needed to go back and vet drivers to ensure that they weren't fiends.


> Blockchains are still a scam

Did you actually look at blockchain node implementations as of 2025 and what's on the roadmap? Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.

(not talking about "coins" and stuff obviously, another debate)


> Ethereum nodes/L2s with optimistic or zk-proofs are probably the most advanced distributed databases that actually work.

What are you comparing against? Aren't they slower, less convenient, and less available than, say, DynamoDB or Spanner, both of which have been in full-service, reliable operation since 2012?


the big difference is the trust assumption: anyone can join or leave the network of nodes at any time

I think you are being downvoted because Ethereum requires you to stake 32 Eth (about $100k), and the entry queue right now is about 9 days and the exit queue is about 20 days. So only people with enough capital can join the network and it takes quite some time to join or leave as opposed to being able to do it at any time you want.


The traditional way is paper trails and/or WORM (write-once-read-many) devices, with local checksums.

You can have multiple replicas without extra computation for hashes and such.


The ivory tower standing in the way of delivering value, I think.

To be more specific: goals of perfection where perfection does not matter at all.

I've asked LLMs to do similar tasks and the results were very useful.

I can’t wait until it’s good enough to vibecode the next MongoDB.

NATS be trippin, no CAP.

Underrated

Wow. I’ve used NATS for best-effort in-memory pub/sub, which it has been great for, including getting subtle scaling details right. I never touched their persistence and would have investigated more before I did, but I wouldn’t have expected it to be this bad. Vulnerability to simple single-bit file corruption is embarrassing.

Sort of related. Jepsen and Antithesis recently released a glossary of common terms which is a fantastic reference.

https://jepsen.io/blog/2025-10-20-distsys-glossary


Curious about the differences between content on aphyr.com/tags/jepsen and jepsen.io/analyses. I recently discovered aphyr.com and was excited about the potential insights!

Jepsen started as a personal blog series done on nights and weekends; jepsen.io is from when I started doing it professionally, about ten years ago.

> 3.4 Lazy fsync by Default

Why? Why do some databases do that? To have better performance in benchmarks? It would only be OK to do that if the safe behaviour were the default, or if it were at least documented loudly. But especially when you run stuff in a small cluster, you get bitten by things like that.


It's not just better performance on latency benchmarks; it likely improves throughput as well, because the writes will be batched together.

Many applications do not require true durability, and it is likely that many applications benefit from lazy fsync. Whether it should be the default is a lot more questionable, though.


It's like using a non-cryptographically secure RNG: if you don't know enough to look for the fsync flag being off yourself, it's unlikely you know enough to evaluate the impact of durability on your application.

> if you don't know enough to look for the fsync flag being off yourself,

Yeah, it should use safe defaults.

Then you can always go read the corners of the docs for the "go faster" mode.

Just like Postgres's infamous "non-durable settings" page... https://www.postgresql.org/docs/18/non-durability.html


You can batch writes while at the same time not acknowledging them to clients until they are flushed; it just takes more bookkeeping.

For transactional durability, the writes will definitely be batched ("group commit"), because otherwise throughput would collapse.

I always wondered why the fsync has to be lazy. It seems like the fsyncs can be bundled together, and the notification messages held for a few millis while the write completes. Similar to TCP corking. There doesn't need to be one fsync per consensus round.

Yes, good call! You can batch up multiple operations into a single call to fsync. You can also tune the number of milliseconds or bytes you're willing to buffer before calling `fsync` to balance latency and throughput. This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html

> This is how databases like Postgres work by default--see the `commit_delay` option here: https://www.postgresql.org/docs/8.1/runtime-config-wal.html

I must note that the default for Postgres is that there is NO delay, which is a sane default.

> You can batch up multiple operations into a single call to fsync.

I've done this in various messaging implementations for throughput, and it's actually fairly easy to do in most languages:

Basically, set up 1-N writers (depending on how you are storing data) that take items containing the data to be written alongside a TaskCompletionSource (a Promise, in Java terms). When your code wants to write, it sends the item to that local queue; the worker(s) on the queue write out messages in batches (tuned for write size, number of records, etc., for both throughput and guaranteeing forward progress), and when the write completes you either complete or fail the TCS/Promise.

If you've got the right 'glue' with your language/libraries it's not that hard; this example [0] from Akka.NET's SQL persistence layer shows how simple the actual write processor's logic can be. Yes, you have to think about queueing a little, but I've found this basic pattern very adaptable (e.g. the queueing op can just send a bunch of ready-to-go bytes and you work off that for the threshold instead, add framing if needed, etc.)

[0] https://github.com/akkadotnet/Akka.Persistence.Sql/blob/7bab...
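If it helps to see the shape of it outside .NET, here's a minimal Go sketch of the same idea (hypothetical names, not a port of the Akka.NET code): callers hand the writer a payload plus a completion channel standing in for the TCS/Promise, and a single writer goroutine flushes batches and then completes or fails every request in the batch.

    package groupcommit

    import "os"

    type writeReq struct {
        data []byte
        done chan error // the TaskCompletionSource / Promise analogue
    }

    // Append enqueues a payload and blocks until the batch containing it is durable.
    func Append(reqs chan<- writeReq, data []byte) error {
        r := writeReq{data: data, done: make(chan error, 1)}
        reqs <- r
        return <-r.done
    }

    // writer drains the queue in batches (bounded by maxBatch), writes them out,
    // fsyncs once, then completes every request in the batch with the same result.
    func writer(f *os.File, reqs <-chan writeReq, maxBatch int) {
        for first := range reqs {
            batch := []writeReq{first}
        drain:
            for len(batch) < maxBatch {
                select {
                case r, ok := <-reqs:
                    if !ok {
                        break drain
                    }
                    batch = append(batch, r)
                default:
                    break drain
                }
            }
            var err error
            for _, r := range batch {
                if _, werr := f.Write(r.data); werr != nil && err == nil {
                    err = werr
                }
            }
            if serr := f.Sync(); serr != nil && err == nil {
                err = serr
            }
            for _, r := range batch {
                r.done <- err // complete or fail the "promise" only after the fsync
            }
        }
    }

The only tuning knobs here are maxBatch and, if you want one, a deadline on the drain loop; everything else falls out of the queue.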


Ah, pardon me, spoke too quickly! I remembered that it fsynced by default, and offered batching, and forgot that the batch size is 0 by default. My bad!

Well, the write is still tunable, so you are still correct.

Just wanted to clarify that the default is still at least safe, in case people perusing this for things to worry about were, well, thinking about worrying.

Love all of your work and writings, thank you for all you do!


In some contexts (interrupts) we would call this "coalescing." (I don't work in databases, can't comment about terminology there.)

That was my immediate thought as well, under the assumption the lazy fsync is for performance. I imagine in some situations delaying the confirmation until the write actually happens is okay (depending on the delay), but it also occurred to me that if you delay enough, the system is busy enough, and your time to send the message is small enough, the number of connections you need to keep open can be some small or large multiple of what you would need without delaying the confirmation message until the actual write.

In practice, there must be a delay (from batching) if you fsync every transaction before acknowledging commit. The database would be unusably slow otherwise.

One of the perks of being distributed, I guess.

The kind of failure that a system can tolerate with strict fsync but can't tolerate with lazy fsync (i.e. the software 'confirms' a write to its caller but then crashes) is probably not the kind of failure you'd expect to encounter on a majority of your nodes all at the same time.


It is if they’re in the same physical datacenter. Usually the way this is done is to wait for at least M replicas to fsync, but only require the data to be in memory for the rest. It smooths out the tail latencies, which are quite high for SSDs.
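Roughly like this toy Go sketch (made-up types, not any particular system's protocol): count each replica's in-memory ack and its later fsync ack separately, and acknowledge the client once you have a write quorum of the former and at least M of the latter.

    package mixedquorum

    type replicaAck struct {
        replica int
        fsynced bool // false = applied in memory only, fsync still pending
    }

    // waitForAck returns true once writeQuorum replicas have the data at least in
    // memory and fsyncQuorum (M) of them have also made it durable on disk.
    func waitForAck(acks <-chan replicaAck, writeQuorum, fsyncQuorum int) bool {
        applied := make(map[int]bool)
        durable := make(map[int]bool)
        for ack := range acks {
            applied[ack.replica] = true
            if ack.fsynced {
                durable[ack.replica] = true
            }
            if len(applied) >= writeQuorum && len(durable) >= fsyncQuorum {
                return true
            }
        }
        return false // ack channel closed before the quorum was reached
    }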

> It smooths out the tail latencies, which are quite high for SSDs.

I'm sorry, tail latencies are high for SSDs? In my experience, the tail latencies are much higher for traditional rotating media (tens of seconds, vs 10s of milliseconds for SSDs).


They're higher relative to median latencies for each. A high-end SSD's P99/median ratio is higher than a high-end HDD's. That's the relevant metric for request hedging.

It's approximately a factor of 1000x for both.

You can push the safety envelope a bit further and wait for your data to only be in memory in N separate fault domains. Yes, your favorite ultra-reliable cloud service may be doing this.

> To have better performance in benchmarks

Yes, exactly.


durability comes through replication and distribution, and the better throughput comes from building up more within the window of a lazy fsync

Massively improves benchmark performance. Like 5-10x

/dev/null is even faster.

/dev/null tends to lose a lot more data.

Just wait until the jepsen report on /dev/null. It's going to be brutal.

/dev/null works according to spec, can't accuse it of not doing something it has never promised

If you are looking for a serverless alternative to JetStream, check out https://s2.dev

Pros: unlimited streams with the durability of object storage – JetStream can only do a few K topics

Cons: no consumer groups yet, it's on the agenda


Have you tried running Jepsen against it?

We do deterministic simulation testing

https://s2.dev/blog/dst https://s2.dev/blog/linearizability

We have also adopted Antithesis for a more thorough DST environment, and plan to do more with it.

One day we will engage Kyle to Jepsen, too. I'm not sure when though.


> By default, NATS only flushes data to disk every two minutes, but acknowledges operations immediately. This approach can lead to the loss of committed writes when several nodes experience a power failure, kernel crash, or hardware fault concurrently—or in rapid succession (#7564).

I am getting strong early MongoDB vibes. "Look how fast it is, it's web-scale!". Well, if you don't fsync, you'll go fast, but you'll go even faster piping customer data to /dev/null, too.

Coordinated failures shouldn't be a novelty or a surprise any longer these days.

I wouldn't trust a product that doesn't default to safest options. It's fine to provide relaxed modes of consistency and durability but just don't make them default. Let the user configure those themselves.


NATS is very upfront in that the only thing that is guaranteed is the cluster being up.

I like that, and it allows me to build things around it.

For us when we used it back in 2018, it performed well and was easy to administer. The multi-language APIs were also good.


> NATS is very upfront in that the only thing that is guaranteed is the cluster being up.

Not so fast.

Their docs make some pretty bold claims about JetStream...

They talk about JetStream addressing the "fragility" of other streaming technology.

And "This functionality enables a different quality of service for your NATS messages, and enables fault-tolerant and high-availability configurations."

And one of their big selling points for JetStream is the whole "store and replay" thing. Which implies the storage bit should be trustworthy, no?


oh sorry, I was talking about NATS core, not JetStream. I'd be pretty sceptical about persistence

the OP was specifically about jetstream so i guess you just didn't read it?

just imagine I'm claude,

smoke bomb


I don't think there is a modern database that has the safest options all turned on by default. For instance, the default transaction isolation level for PG is read committed, not serializable.

One of the most used DBs in the world is Redis, and by default it fsyncs every second, not on every operation.


Pretty sure SQL Server won't acknowledge a write until it's in the WAL (you can go the opposite way and turn on delayed durability, though).

I don't know about JetStream, but Redis Cluster would only ack writes after replicating to a majority of nodes. I think there is some config on standalone Redis too where you can ack after fsync (which apparently still doesn't guarantee anything because of buffering in the OS). In any case, understanding what the ack implies is important, and I'd be frustrated if the JetStream docs were not clear on that.

To the best of my knowledge, Redis has never blocked for replication, although you can configure healthy replication state as a prerequisite to accept writes.

Not flushing on every write is a very common tradeoff of speed over durability. Filesystems, databases, all kinds of systems do this. They have some hacks to prevent it from corrupting the entire dataset, but lost writes are accepted. You can often prevent this by enabling an option or tuning a parameter.

> I wouldn't trust a product that doesn't default to safest options

This would make most products suck, and require a crap-ton of manual fixes and tuning that most people would hate, if they even got the tuning right. You have to actually do some work yourself to make a system behave the way you require.

For example, Postgres' isolation level is weak by default, leading to race conditions. You have to explicitly ask for serializable isolation to avoid them, which carries a performance penalty. (https://martin.kleppmann.com/2014/11/25/hermitage-testing-th...)
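For illustration, this is roughly what opting in per transaction looks like from Go's database/sql (a sketch; the driver choice and the accounts table are placeholders, not anything from the linked post):

    package example

    import (
        "context"
        "database/sql"

        _ "github.com/lib/pq" // any Postgres driver works here
    )

    func transfer(ctx context.Context, db *sql.DB, from, to string, amount int) error {
        // Ask for SERIALIZABLE explicitly; the Postgres default is READ COMMITTED.
        tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
        if err != nil {
            return err
        }
        defer tx.Rollback()

        if _, err := tx.ExecContext(ctx,
            "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx,
            "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to); err != nil {
            return err
        }
        // Under SERIALIZABLE, Postgres may abort with a serialization failure
        // (SQLSTATE 40001); the caller is expected to retry the whole transaction.
        return tx.Commit()
    }

The need to retry on serialization failures is a big part of why it isn't free.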


> Filesystems, databases, all kinds of systems do this. They have some hacks to prevent it from corrupting the entire dataset, but lost writes are accepted.

Woah, those are _really_ strong claims. "Lost writes are accepted"? Assuming we are talking about "acknowledged writes", which the article is discussing, I don't think it's true that this is a common default for databases and filesystems. Perhaps databases or K/V stores that are marketed as in-memory caches might have defaults like this, but I'm not familiar with other systems that do.

I'm also getting MongoDB vibes from deciding not to flush except once every two minutes. Even deciding to wait a second would be pretty long, but two minutes? A lot happens in a busy system in 120 seconds...


No filesystem I'm aware of syncs to disk on every write by default, and you absolutely can lose data. You have to intentionally enable sync, and even then the disk can still lose the writes.
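Concretely, in Go terms (a minimal sketch): the Write below returns as soon as the data is in the page cache; it only survives a power cut, modulo drive caches, once Sync (i.e. fsync) returns.

    package main

    import "os"

    func main() {
        f, err := os.OpenFile("journal.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        if _, err := f.Write([]byte("acknowledged?\n")); err != nil { // page cache only
            panic(err)
        }
        if err := f.Sync(); err != nil { // now actually pushed to the device
            panic(err)
        }
    }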

Most (all?) NoSQL solutions are also eventually consistent by default, which means they can lose data. That's how Mongo works: it syncs a journal every 30-100 ms, and it syncs full writes at a configurable delay. Mongo is terrible, but not because it behaves like a filesystem.

Note that this is not "bad", it's just different. Lots of people use these systems specifically because they need performance more than durability. There are other systems you can use if you need those guarantees.


I think "most people will have to turn on the setting to make things fast at the expense of durability" is a dubious assertion (plenty of systems, even high-criticality ones, do not have a very high data rate and thus would not necessarily suffer unduly from e.g. fsync-every-write).

Even if most users do turn out to want “fast_and_dangerous = true”, that’s not a particularly onerous burden to place on users: flip one setting, and hopefully learn from the setting name or the documentation consulted when learning about it that it poses operational risk.


In defense of PG, for better or worse: as far as I know, the 'what is the RDBMS default' question falls into two categories:

- Read Committed default with MVCC (Oracle, Postgres, Firebird versions with MVCC, I -think- SQLite with WAL falls under this)

- Read committed with write locks one way or another (MSSQL default, SQLite default, Firebird pre MVCC, probably Sybase given MSSQL's lineage...)

I'm not aware of any RDBMS that treats 'serializable' as the default transaction level OOTB (I'd love to learn though!)

....

All of that said, 'Inconsistent read because you don't know RDBMS and did not pay attention to the transaction model' has a very different blame direction than 'We YOLO fsync on a timer to improve throughput'.

If anything it scares me that there's no other tuning options involved such as number of bytes or number of events.

If I get a write-ack from a middleware I expect it to be written one way or another. Not 'It is written within X seconds'.

AFAIK there's no RDBMS that will just 'lose a write' unless the disk happens to be corrupted (or, IDK, maybe someone YOLOing with chaos mode on DB2?)


> I -think- SQLite with WAL falls under this

No. SQLite is serializable. There's no configuration where you'd get read committed or repeatable read.

In WAL mode you may read stale data (depending on how you define stale data), but if you try to write in a transaction that has read stale data, you get a conflict, and need to restart your transaction.

There's one obscure configuration no one uses that's read uncommitted. But really, no one uses it.


CockroachDB does Serializable by default

> Well, if you don't fsync, you'll go fast, but you'll go even faster piping customer data to /dev/null, too.

The trouble is that you need to specifically optimize for fsyncs, because usually it is either no brakes or hand-brake.

The middle ground of multi-transaction group-commit fsync seems to have disappeared because of SSDs and the massive IOPS you can pull off in general; now it is about syscall context switches.

Two minutes is a bit too much (also fdatasync vs fsync).


IOPS only solves throughput, not latency. You still need to saturate internal parallelism to get good throughput from SSDs, and that requires batching. Also, even double-digit microsecond write latency per transaction commit would limit you to only 10K TPS. It's just not feasible to issue individual synchronous writes for every transaction commit, even on NVMe.

tl;dr "multi-transaction group-commit fsync" is alive and well


NATS data is ephemeral in many cases anyhow, so it makes a bit more sense here. If you wanted something fully durable with a stronger persistence story you'd probably use Kafka anyhow.

Core NATS is ephemeral. JetStream is meant to be persistent, and is presented as a replacement for Kafka.

> NATS data is ephemeral in many cases anyhow, so it makes a bit more sense here

Dude ... the guy was testing JetStream.

Which, I quote from the first phrase from the first paragraph on the NATS website:

    NATS has a built-in persistence engine called JetStream which enables messages to be stored and replayed at a later time.

So is MQTT, why bother with NATS then?

MQTT doesn't have the same semantics. Request-reply (https://docs.nats.io/nats-concepts/core-nats/reqreply) is really useful if you need low latency but reasonably efficient queuing (making sure to mark your workers as busy when processing, otherwise you get latency spikes).
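For anyone who hasn't seen it, this is roughly what request-reply looks like with the Go client (nats.go); a minimal sketch assuming a local nats-server, with a made-up subject name:

    package main

    import (
        "fmt"
        "time"

        "github.com/nats-io/nats.go"
    )

    func main() {
        nc, err := nats.Connect(nats.DefaultURL)
        if err != nil {
            panic(err)
        }
        defer nc.Drain()

        // Responder: subscribers in the same queue group share the work.
        nc.QueueSubscribe("jobs.echo", "workers", func(m *nats.Msg) {
            m.Respond(append([]byte("echo: "), m.Data...))
        })

        // Requester: blocks until a reply arrives or the timeout fires.
        reply, err := nc.Request("jobs.echo", []byte("hello"), 2*time.Second)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(reply.Data))
    }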

You can do request/reply with MQTT too, you just have to implement more bits yourself, whilst NATS has a nice API that abstracts that away for you.

oh indeed, and clusters nicely.

Half-expected tbh, but didn't expect it to be this bad.

Just use redpanda.


> > You can force an fsync after each messsage [sic] with always, this will slow down the throughput to a few hundred msg/s.

Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?


> Is the performance warning in the NATS docs possible to improve on? Couldn't you still run fsync on an interval and queue up a certain number of writes to be flushed at once? I could imagine latency suffering, but batch throughput could be preserved to some extent?

Yes, and you shouldn't even need a fixed interval. Just queue up any writes while an `fsync` is pending; then do all those in the next batch. This is the same approach you'd use for rounds of Paxos, particularly between availability zones or regions where latency is expected to be high. You wouldn't say "oh, I'll ack and then put it in the next round of Paxos", or "I'll wait until the next round in 2 seconds then ack"; you'd start the next batch as soon as the current one is done.
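To make that concrete, here's a rough Go sketch (not NATS internals; all names are made up): whatever accumulates while the current write-and-fsync is in flight simply becomes the next batch, with no timer, and every waiter is acknowledged only after its batch is durable.

    package pipelinedsync

    import (
        "os"
        "sync"
    )

    type batch struct {
        buf   []byte
        err   error
        ready chan struct{} // closed once the batch is durable (or failed)
    }

    type Log struct {
        mu      sync.Mutex
        cond    *sync.Cond
        pending *batch
        f       *os.File
    }

    func NewLog(f *os.File) *Log {
        l := &Log{f: f}
        l.cond = sync.NewCond(&l.mu)
        go l.flushLoop()
        return l
    }

    // Append adds data to the batch currently accumulating and waits for it to be
    // durable, so the ack always happens after an fsync.
    func (l *Log) Append(data []byte) error {
        l.mu.Lock()
        if l.pending == nil {
            l.pending = &batch{ready: make(chan struct{})}
            l.cond.Signal() // tell the flusher there is work
        }
        b := l.pending
        b.buf = append(b.buf, data...)
        l.mu.Unlock()
        <-b.ready
        return b.err
    }

    func (l *Log) flushLoop() {
        for {
            l.mu.Lock()
            for l.pending == nil {
                l.cond.Wait()
            }
            b := l.pending
            l.pending = nil // writes arriving during the fsync start the next batch
            l.mu.Unlock()

            _, err := l.f.Write(b.buf)
            if err == nil {
                err = l.f.Sync()
            }
            b.err = err
            close(b.ready) // wake every waiter in this batch at once
        }
    }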


NATS is a fantastic piece of software, but the docs are impractical and half-baked. It's a shame to have to reverse-engineer the software from GitHub to understand the auth schemes.

[flagged]


"PostgreSQL used fsync incorrectly for 20 years"

https://archive.fosdem.org/2019/schedule/event/postgresql_fs...

It did not prevent people from using it. You won't find a database that has perfect durability, ease of use, performance, etc. It's all about tradeoffs.


Realistically speaking, PostgreSQL wasn't handling a failed call to fsync, which is wrong, but materially different from a bad design or errors in logic stemming from many areas.

PostgreSQL was able to fix their bug in 3 lines of code; how many for the parent system?

I understand your core thesis (sometimes durability guarantees aren't as needed as we think) but in PostgreSQL's case, the edge was incredibly thin. It would have had to have been: a failed call to fsync and a system-level failure of the host before another call to fsync (which are reasonably common).

It's far too apples-to-oranges to be meaningful to bring up, I'm afraid.


NATS allows you to fsync on every call; lazy fsync is just the default, not the only option.

NATS was originally made for simple, fast, ephemeral messaging.

The persistence stuff is kinda new and it's not a surprise that there are limitations and bugs.

You should see this report as a good thing, as it will add pressure for improvements.


> The persistence stuff is kinda new and it's not a surprise that there are limitations and bugs.

It's not really that new. The precursor to JetStream was NATS Streaming Server [1], which was first tagged almost 10 years ago [2].

[1] https://github.com/nats-io/nats-streaming-server

[2] https://github.com/nats-io/nats-streaming-server/releases/ta...


do you have a better solution?

as they would say, NATS is a terrible message bus system, but all the others are worse


Pulsar can do most of what NATS can, but at a much higher cost in both compute and operations (though I haven’t seen a head-to-head of each with durability turned on), along with some simply different characteristics (like NATS being suitable for sidecar deployment). NATS is fantastic for ephemeral messaging, but some of this report is really concerning when JetStream has been shipping for years.

Are RabbitMQ's durable queues worse?

Interested to know if you found these issues yourself or from a source. Is Kafka any more robust?


This is just a tl;dr of the article with a mean-spirited barb added.

NATS is ephemeral. If you can accept that, then you'll be fine.


nats jetstream vs, say, redis streams - which one have people found easier to work with?

When I worked with bounded Redis streams a couple of years ago we had to implement our own backpressure mechanism which was quite tricky to get right.

To implement backpressure without relying on out-of-band signals (distributed systems beware) you need to have a deep understanding of the entire Redis Streams architecture and how the pending entries list, consumer groups, consumers, etc. work and interact, so you don't lose data by overwriting yourself.

Unbounded would have been fine if we could spill to disk and periodically clean up the data, but this is redis.

Not sure if that has improved.
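For anyone weighing the two, here's a rough go-redis v9 sketch of the moving parts (stream, group, and thresholds are all made up), with a crude out-of-band length check as backpressure (exactly the kind of workaround described above, not a real flow-control protocol):

    package streams

    import (
        "context"
        "time"

        "github.com/redis/go-redis/v9"
    )

    func produce(ctx context.Context, rdb *redis.Client, values map[string]interface{}) error {
        // Crude backpressure: poll the stream length and back off while it is "full".
        for {
            n, err := rdb.XLen(ctx, "events").Result()
            if err != nil {
                return err
            }
            if n < 10_000 {
                break
            }
            time.Sleep(50 * time.Millisecond)
        }
        return rdb.XAdd(ctx, &redis.XAddArgs{
            Stream: "events",
            MaxLen: 10_000, // bounded stream; with Approx, trimming may overshoot
            Approx: true,
            Values: values,
        }).Err()
    }

    // consume assumes the "workers" group was created beforehand
    // (e.g. with XGroupCreateMkStream).
    func consume(ctx context.Context, rdb *redis.Client) error {
        res, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
            Group:    "workers",
            Consumer: "c1",
            Streams:  []string{"events", ">"},
            Count:    100,
            Block:    time.Second,
        }).Result()
        if err == redis.Nil { // nothing arrived within the block window
            return nil
        }
        if err != nil {
            return err
        }
        for _, stream := range res {
            for _, msg := range stream.Messages {
                // ... process msg.Values ...
                if err := rdb.XAck(ctx, "events", "workers", msg.ID).Err(); err != nil {
                    return err
                }
            }
        }
        return nil
    }

The genuinely hard parts (reclaiming pending entries from crashed consumers, and deciding who is allowed to trim) are exactly what is not shown here.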


Thanks, those reports are always a quiet pleasure to read even if one is a bit far from the domain.

Definitely thought this was about aviation for a moment.

Yea! I did a double-take, as in addition to Jeppesen, NATS is something I worked with in the past as a UK NOTAM service.

Likewise. It took me a moment to realise Jepsen !== Jeppesen

And NATS being the North Atlantic tracks.

It's named after Carly Rae Jepsen, of 2012 hit single "Call Me Maybe" fame.

I think Aphyr will insist it isn't actually named after Carly Rae for legal reasons, just a striking coincidence.



