With NCSA HTTPd I'm pretty sure CGI was literally the only way to do dynamic things, at least initially. Which makes sense for the time period; it's the same basic idea as inetd, but for HTTP and with some differing implementation details.
Is the selling point of shared hosting and "serverless" PaaS platforms similar? To an extent it definitely is, but I think another major selling point of shared hosting was the price. For a really long time it was the only economically sane option, and even when cheap low-end VPS options (usually OpenVZ-based) emerged, they were usually not as good for a lot of workloads as a similarly priced shared hosting option.
But at that point, we're basically debating whether or not the term "serverless" has merit, and that's not an argument I plan to make. I'm only trying to make the argument that serverless is about the actual abstraction of traditional server machines. Shared hosting is just about having someone else do it for you. These are similar, but different.
I agree that it's very much like inetd. Or the Unix shell, which launches one or more processes for each user command.
But, no, you could very easily edit the httpd source to do the dynamic things and recompile it. For example, stock NCSA httpd supported "server-side includes" very early on, definitely in 01994, maybe in 01993. The big advantage of CGI was that it decoupled the management of the server as a whole from particular gateway programs. It didn't take all that long for people to start writing their gateways in languages that weren't C, of course, and that was a different benefit of CGI. (If you were running Plexus instead, you could hack Perl dynamic things into your server source code.) And running the CGI (or SSI) as the user who owned the file, instead of as the web server, came years later.
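The CGI contract itself is tiny, which is what made those non-C gateways practical: the server passes request metadata to the gateway program in environment variables, and the program writes headers, a blank line, and the body to stdout. A minimal sketch in Python (the function name and example values are mine, for illustration, not from any particular server):

```python
import os
import sys

def cgi_response(environ):
    """Build a CGI-style response (headers, blank line, body) from the
    environment variables a web server would pass to a gateway program."""
    method = environ.get("REQUEST_METHOD", "GET")
    query = environ.get("QUERY_STRING", "")
    body = f"method={method} query={query}\n"
    # CGI headers are terminated by a blank line, then the body follows.
    return "Content-Type: text/plain\r\n\r\n" + body

if __name__ == "__main__":
    # When run as an actual CGI script, the server sets these variables
    # before exec'ing the program and reads our stdout as the response.
    sys.stdout.write(cgi_response(os.environ))
```

Any language that can read environment variables and write to stdout can play, which is why Perl and shell gateways showed up so quickly.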
By "abstraction of traditional server machines" do you mean "load balancing"? Like, so that your web service can scale up to handle larger loads, and doesn't become unavailable when a server fails, and your code has access to the same data no matter which machine it happens to get run on? Because, as I explained above, NCSA (the site, not NCSA httpd at other sites) was doing that in the mid-90s. Is there some other way that AWS Lambda "abstracts" the servers from the point of view of Lambda customers?
With respect to the price, I guess I always sort of assumed that the main reason you'd go with "serverless" offerings rather than an EC2 VPS or equivalent was the price, too. But certainly not having to spend any time configuring and monitoring servers is an upside of CGI and Lambda and Cloud Run and whatever other "serverless" platforms there are out there.