Algolia has stunning latency and, I assume, a bucketload of traffic. I suspect they just have very competent infrastructure and fast-as-hell code and queries; perhaps the same is true here.
I appreciate that this is a glass-half-full-or-empty kind of situation, but I tend to see "fast" setups simply not doing a bunch of the mostly unnecessary stuff that other solutions do. This is especially true for code, where of course we're going to get bad results:
We've over-abstracted everything with our own abstractions and/or with libraries that are as bad as our own code, and we're doing it in a language and runtime that is pretty much one big premature pessimisation to begin with, when we could do significantly more with less. We have no idea what the GC is doing or when, and we care less about that than about "using functional programming" or some other equally pointless-in-itself principle that you'd be better off taking very little from.
Yeah, this is definitely true for Marginalia. It does exactly what it needs to do to serve the page and very little else. There are also no superfluous scripts in the frontend, no session cookie, no user tracking. That stuff does add up.
Many applications do so many redundant calculations it's almost absurd. Professionally I've seen applications do enormous ORM lookups that fetch thousands of objects, stick them into a hash table by some key, and pick one value to compare against some parameter, and then do this in four different places while processing a single request. Gee, I wonder why we have 800ms page loads...
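A minimal sketch of that anti-pattern next to the obvious fix, in Java (Product, ProductDao, and their methods are hypothetical names for illustration, not any real codebase's API):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical record standing in for an ORM-mapped entity.
    record Product(long id, String sku, double price) {}

    // Hypothetical data-access interface; method names are made up.
    interface ProductDao {
        List<Product> findAll();        // fetches thousands of rows
        Product findBySku(String sku);  // fetches exactly one row, or null
    }

    class PriceCheck {
        private final ProductDao dao;

        PriceCheck(ProductDao dao) { this.dao = dao; }

        // The anti-pattern: load everything, index it, then read one entry.
        boolean isDiscountedSlow(String sku, double threshold) {
            Map<String, Product> bySku = new HashMap<>();
            for (Product p : dao.findAll()) {   // thousands of objects materialised
                bySku.put(p.sku(), p);
            }
            Product p = bySku.get(sku);         // ...only to use one of them
            return p != null && p.price() < threshold;
        }

        // The fix: ask the database for the one row you actually need.
        boolean isDiscounted(String sku, double threshold) {
            Product p = dao.findBySku(sku);
            return p != null && p.price() < threshold;
        }
    }

Do the slow version four times per request and the latency is all self-inflicted; the database never needed to hand over more than one row.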