Or think about it in a different way - instead of adding disk I/O on the server itself, you're offloading log processing to another server that does delayed writes (you don't usually need immediate sync for remote logging) and gives you better log-processing capabilities (semi-structured data).
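A minimal sketch of the delayed-write idea, using Python's stdlib logging: records are buffered in memory and flushed in batches, so the hot path pays no I/O per log call. In a real setup the target would be a SocketHandler pointed at the log-processing server; the in-memory stream below is just a stand-in for the demo.

```python
import io
import logging
import logging.handlers

# Stand-in for the remote log-processing server (assumption for the demo;
# in production this would be a SocketHandler or similar).
remote_stand_in = io.StringIO()
target = logging.StreamHandler(remote_stand_in)
target.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

# Buffer up to 100 records before writing; ERROR-level records force an
# immediate flush (MemoryHandler's default flushLevel).
buffered = logging.handlers.MemoryHandler(capacity=100, target=target)

log = logging.getLogger("app")
log.addHandler(buffered)
log.setLevel(logging.INFO)

log.info("request handled")                  # buffered, no write yet
assert remote_stand_in.getvalue() == ""

buffered.flush()                             # batched write happens here
assert "request handled" in remote_stand_in.getvalue()
```

The point is the shape, not the handler choice: the producing process never blocks on sync I/O, and the consumer decides when and how to persist.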
If your workload cannot be handled this way - that's another matter. But how did we get from "mongo is webscale" to "mongo cannot be used for anything at all"? What happened to benchmarking and making serious decisions backed by real data?
For write-only logging from stateless, single-machine-bound processes - yes. For analytics, automated tracking of stateful sessions across many nodes, preserving context, or dumping binary fragments... no, at least for me it did not always work.
You can reconstruct almost any system flow with good logging. While that's not always ideal (especially if you need to query the data), the more structured your data gets, the less it is a simple log. When you increase the specificity of your tools, they become less useful in the general case, Turing tarpit notwithstanding.
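To illustrate "reconstructing a flow from logs": once the log lines carry structure, replaying one session is just a filter over the records. The field names (session, event) here are assumptions for illustration, not any standard schema.

```python
import json

# A JSON-lines log with interleaved sessions, as a remote collector
# might receive it from several nodes.
log_lines = [
    '{"session": "a1", "event": "login"}',
    '{"session": "b2", "event": "login"}',
    '{"session": "a1", "event": "query"}',
    '{"session": "a1", "event": "logout"}',
]

def flow(lines, session):
    """Replay the ordered events for one session from a JSON-lines log."""
    return [json.loads(line)["event"]
            for line in lines
            if json.loads(line)["session"] == session]

print(flow(log_lines, "a1"))  # ['login', 'query', 'logout']
```

This is also where the trade-off above bites: the moment you need fields like "session", you've committed to a schema, and the log is no longer a plain append-only stream.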