I'd push back on some of this. Specifically, the memory management model that is more or less inherent to how a CGI script works is typically easier to manage than in longer-lived processes. You just tear down the entire process instead of having to carefully tear down each thing created while handling the request.
Sure, it is easy to view this as the process being somewhat sloppy about how it handled memory. But it can also be seen as just less work. If you can toss the entire allocated range of memory, what benefit is there in carefully walking back each allocated structure? (Notably, arenas and such are efforts to get this kind of behavior in longer-lived processes.)
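To make the arena point concrete, here is a minimal sketch in C of the "allocate freely, free everything at once" idea. The Arena type and function names are hypothetical, just for illustration; it is a bump allocator, not any particular library:

    /* Hypothetical bump-allocator arena: hand out slices of one big block,
     * then free the whole block in one call when the request is done. */
    #include <stdlib.h>
    #include <stddef.h>

    typedef struct {
        char  *base;   /* start of the block */
        size_t used;   /* bytes handed out so far */
        size_t cap;    /* total size of the block */
    } Arena;

    static Arena arena_new(size_t cap) {
        Arena a = { malloc(cap), 0, cap };
        return a;
    }

    static void *arena_alloc(Arena *a, size_t n) {
        n = (n + 7) & ~(size_t)7;                      /* keep allocations 8-byte aligned */
        if (!a->base || a->used + n > a->cap) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    /* One call releases everything allocated during the request,
     * which is roughly what process teardown gives a CGI script for free. */
    static void arena_free(Arena *a) {
        free(a->base);
        a->base = NULL;
        a->used = a->cap = 0;
    }

The request handler calls arena_alloc for every temporary it needs and never pairs up individual frees; arena_free (or exit, in the CGI case) is the only cleanup step.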
True, it is simpler to just never free memory and let process teardown take care of it, but I'm only disagreeing with the notion that it's non-trivial to write servers that simply don't leak memory per request. I think with modern tools it's pretty easy for anyone to accomplish. Hell, if you can just slap Boehm GC into your C program, maybe it's trivial to accomplish with old tools, too.
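For what it's worth, the "slap Boehm GC in" approach really is about this small; a rough sketch (the dup_header helper and the loop are made up for illustration, and you'd build with -lgc):

    /* Replace malloc with GC_MALLOC from the Boehm-Demers-Weiser collector
     * and never call free; unreachable memory gets reclaimed for you. */
    #include <gc.h>
    #include <stdio.h>
    #include <string.h>

    static char *dup_header(const char *s) {
        /* GC_MALLOC returns zero-filled, collectable memory; no free needed. */
        char *copy = GC_MALLOC(strlen(s) + 1);
        if (copy) strcpy(copy, s);
        return copy;
    }

    int main(void) {
        GC_INIT();  /* initialize the collector before the first allocation */
        for (int i = 0; i < 100000; i++) {
            char *h = dup_header("Content-Type: text/html");
            (void)h;  /* dropped on the floor; the collector reclaims it */
        }
        printf("heap size: %zu bytes\n", GC_get_heap_size());
        return 0;
    }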
Fair. My pushback was less about not leaking memory at all, and more that process teardown can scale better. Both using a GC and relying on teardown essentially punt the problem from the specific request-handling code onto something else. It was not uncommon to see GC-based systems fall behind under load, specifically because their task was more work than tearing down a process.