That is insane. I'm surprised it can even go that low; I would have expected it to hit an IO bottleneck before then (large object files, linking, source reads, etc.).
Even with a fast SSD on my lowly i7, I often wind up stuck on IO or lock contention rather than an actual CPU bottleneck (although it could be argued faster CPU = faster lock release = faster compilation).
I've done builds in /dev/shm/ on Xeon and Threadripper with only a trivial speed-up. If the tree fits in tmpfs, it fits in the page cache too, so make/cc is effectively working out of RAM either way; all you save is the time of the first cold read. It would also explain why a bare '-j' (which spawns unlimited parallel jobs) on a big codebase tends to trigger my OOM killer.
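For anyone who wants to see the page-cache effect directly, here's a minimal Python sketch (mine, not from anyone's build setup; the file path and size are arbitrary). It times two sequential reads of the same file: the second one comes out of RAM. Note that writing the file warms the cache as well, so for a genuinely cold first read you'd need to drop caches first (as root: `echo 3 > /proc/sys/vm/drop_caches`) or reboot.

```python
import os
import time

PATH = "/tmp/pagecache_demo.bin"   # hypothetical test file
SIZE = 256 * 1024 * 1024           # 256 MiB

# Create the test file once (this also pulls it into the page cache).
if not os.path.exists(PATH):
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

def timed_read(label):
    start = time.perf_counter()
    with open(PATH, "rb") as f:
        while f.read(1 << 20):     # read in 1 MiB chunks until EOF
            pass
    print(f"{label}: {time.perf_counter() - start:.3f}s")

timed_read("first read (cold only if caches were dropped)")
timed_read("second read (served from page cache)")
```

On a warm cache both reads finish at RAM speed, which is exactly why moving the tree into tmpfs buys so little on a rebuild.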
Technically, 100%. I'm all SSD right now, which is a seriously noticeable difference from HDD, but for what I do most days NVMe isn't justifiable. I see others who can take advantage of the speeds do so with huge returns.
I do have two super-SFF HP boxes whose M.2 slot only takes NVMe, so I have a drive on hand, but it isn't installed at the moment.
That's just four SSDs mounted on one riser card with a fan. If you're going to count the aggregate bandwidth of an array, then the question's almost meaningless.