
> Do you mean a... computer? Server is a software term. It is a process that listens for network requests.

Between this and the guy arguing that UNIX doesn't have "folders" I can see that these kinds of threads bring out the most insane possible lines of rhetoric. Are you sincerely telling me right now you've never seen the term "server" used to refer to computers that run servers? Jesus Christ.

Pedantry isn't a contest, and I'm not trying to win it. I'm not sitting here saying that "Serverless is not a marketing term for CGI" to pull some epic "well, actually..." I'm saying it because, God damnit, it's true. Serverless was a term invented specifically by providers of computers-that-aren't-yours to give people options to not have to manage the computers-that-aren't-yours. They actually use the term "serverless" for many things, again including databases, where you don't even write an application or a server in the first place. We're just using "serverless" as a synonym for "serverless function", which I am fine to do, but pointing that out matters for more than pedantry: it helps extinguish the idea that "serverless" was ever meant to have anything to do with application design. It wasn't and doesn't. Serverless is not a marketing term for CGI. Not even in a relaxed way; it's just not. The selling point of serverless functions is "you give us your request handler and we'll handle running it and scaling it up".
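To make that selling point concrete, here's a minimal sketch of a Lambda-style request handler in Python. The event shape, field names, and function name are illustrative rather than any particular provider's exact contract:

```python
# Sketch of a "serverless function": you write only the request handler;
# the provider owns the process lifecycle, the listener socket, and the
# scaling. Event/response shapes here are illustrative, not a real API.
import json

def handler(event, context=None):
    # The provider parses the HTTP request and hands you a plain dict.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }
```

Note there is no socket, no listen loop, no server anywhere in the code; that absence is the entire pitch.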

This has nothing to do with the rise of embedding a server into your application.



> Serverless is not a marketing term for CGI. Not even in a relaxed way, it's just not. The selling point of Serverless functions is "you give us your request handler and we'll handle running it and scaling it up".

That was the selling point of CGI hosting though. Except that the "scaling it up" part was pretty rare. There were server farms that ran CGI scripts (NCSA had a six-server cluster with round-robin DNS when they first published a paper describing how they did it, maybe 01994) but the majority of CGI scripts were almost certainly on single-server hosting platforms.
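The round-robin DNS trick mentioned above is simple to sketch: the nameserver rotates the order of A records on each query, so successive clients land on different machines. A toy illustration (the addresses and `resolve` function are invented for the example):

```python
from itertools import cycle

# Toy round-robin "DNS": each lookup hands out the next server in the
# rotation, spreading clients across the cluster. Addresses are made up.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
_rotation = cycle(SERVERS)

def resolve(hostname):
    # Real round-robin DNS returns all records in rotated order; handing
    # out just the head record is enough to show the load-spreading effect.
    return next(_rotation)
```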


With NCSA HTTPd I'm pretty sure it was literally the only way to do dynamic things at least initially. Which makes sense for the time period, I mean it's the same basic idea as inetd but for HTTP and with some differing implementation details.
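The inetd comparison is apt: in both models a fresh process handles each connection/request, with the request details passed in via environment variables and stdin, and the response written to stdout. A minimal CGI-style script in Python; the environment variable names follow the CGI convention, but the rest is an illustrative sketch:

```python
import os
import sys

def run_cgi(environ, out=sys.stdout):
    # Under CGI the web server forks/execs this program once per request,
    # passing request metadata through environment variables.
    method = environ.get("REQUEST_METHOD", "GET")
    query = environ.get("QUERY_STRING", "")
    # The script's stdout becomes the HTTP response: headers, a blank
    # line, then the body.
    out.write("Content-Type: text/plain\r\n\r\n")
    out.write(f"method={method} query={query}\n")

if __name__ == "__main__":
    run_cgi(os.environ)
```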

Is the selling point of shared hosting and "serverless" PaaS platforms similar? To an extent it definitely is, but I think another major selling point of shared hosting was the price. For a really long time it was the only economically sane option, and even when cheap low end VPS options (usually OpenVZ-based) emerged, they were usually not as good for a lot of workloads as a similarly priced shared hosting option.

But at that point, we're basically debating whether or not the term "serverless" has merit, and that's not an argument I plan to make. I'm only trying to make the argument that serverless is about the actual abstraction of traditional server machines. Shared hosting is just about having someone else do it for you. These are similar, but different.


I agree that it's very much like inetd. Or the Unix shell, which launches one or more processes for each user command.

But, no, you could very easily edit the httpd source to do the dynamic things and recompile it. As an example of what you could do, stock NCSA httpd supported "server-side includes" very early on: definitely in 01994, maybe in 01993. The big advantage of CGI was that it decoupled the management of the server as a whole from particular gateway programs. It didn't take all that long for people to start writing their gateways in languages that weren't C, of course, and that was a different benefit of CGI. (If you were running Plexus instead, you could hack Perl dynamic things into your server source code.) And running the CGI (or SSI) as the user who owned the file, instead of as the web server, came years later.

By "abstraction of traditional server machines" do you mean "load balancing"? Like, so that your web service can scale up to handle larger loads, and doesn't become unavailable when a server fails, and your code has access to the same data no matter which machine it happens to get run on? Because, as I explained above, NCSA (the site, not NCSA httpd at other sites) was doing that in the mid-90s. Is there some other way that AWS Lambda "abstracts" the servers from the point of view of Lambda customers?

With respect to the price, I guess I always sort of assumed that the main reason you'd go with "serverless" offerings rather than an EC2 VPS or equivalent was the price, too. But certainly not having to spend any time configuring and monitoring servers is an upside of CGI and Lambda and Cloud Run and whatever other "serverless" platforms there are out there.


> Serverless was a term invented specifically by providers of computers-that-aren't-yours to give people options to not need to manage the computers-that-aren't-yours.

No. "Cloud" was the term invented for that, inherited from networking diagrams, where it was common to represent the bits you don't manage as cloud figures. Usage of "serverless" emerged from AWS Lambda, which was designed to have an execution model much like CGI. "Serverless" refers to your application being less a server. Lambda may not use CGI specifically, but the general idea is very much the same.


Okay. Let's ask Amazon since they invented the term:

> Serverless computing is an application development model where you can build and deploy applications on third-party managed server infrastructure. All applications require servers to run. But in the serverless model, a cloud provider manages the routine work; they provision, scale, and maintain the underlying infrastructure. The cloud provider handles several tasks, such as operating system management, security patches, file system and capacity management, load balancing, monitoring, and logging. As a result, your developers can focus on application design and still receive the benefits of cost-effective, efficient, and massively scalable server infrastructure.

Right. And that makes sense. Because again, what we're talking about when we're talking about AWS Lambda is serverless functions. But AWS also uses the term for other things that are "serverless", again, like Aurora Serverless. Aurora Serverless is basically the same idea: the infrastructure is abstracted, except for a database. This effectively means the database can transparently scale from zero to whatever maximum instance size Amazon supports, without a human managing database instances.

That's also the same idea for serverless functions. It's not about whether your application has a "server" in it.


> Serverless computing is an application development model where you can build and deploy applications on third-party managed server infrastructure. All applications require servers to run. But in the serverless model, a cloud provider manages the routine work; they provision, scale, and maintain the underlying infrastructure. The cloud provider handles several tasks, such as operating system management, security patches, file system and capacity management, load balancing, monitoring, and logging. As a result, your developers can focus on application design and still receive the benefits of cost-effective, efficient, and massively scalable server infrastructure.

The only phrase in this that is not a description of old-fashioned shared CGI hosting is "massively scalable". (And maybe "efficient".)


> Serverless computing is an application development model

Exactly. And how that development model differs from the traditional approach is that you don't have to implement a server. Deployment isn't a development model. The development is necessarily done by the time you get there.

> But AWS also uses the term for other things

The term has expanded to cover all kinds of different things, sure. There is probably a toaster out there somewhere sold as being "Serverless" nowadays.

If we really want to get into the thick of it, "serverless" seems to go back much further, used to refer to certain P2P systems. But we know from context that isn't what we're talking about. Given the context, it is clear we are talking about "serverless" as it emerged out of Lambda, referring to systems that were CGI-esque in nature.


It's funny how you added that part even though Amazon's own description continues in a completely different way that doesn't emphasize this at all. That's not a mistake on Amazon's part; it's not that they forgot to mention it. The reason why it's not there is because it's not actually the point.

You're reading "application development model" and thinking "Exactly! It's all about the request handling model!" but that's not what Amazon said or meant. Consider the description of Amazon Fargate, a service that in fact can be used to run regular old web servers:

> AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.

I guess the next argument is that Amazon is just diluting the term and originally it meant what you think it meant, and that is the terminal state of this debate since there are no more productive things to say.

Edit: you added more, but it's just more attempting to justify away things that are plainly evident... But I can't help myself. This is just nonsense:

> Deployment isn't a development model.

Software development is not just writing code.


> Software development is not just writing code.

But it remains that deployment is normally considered to be independent of development. If you put your binaries on a CD instead of sending them to AWS, the application will still be considered developed by most people. Deployment is a post-development activity.

> I guess the next argument is that Amazon is just diluting the term

Could be. Would it matter? The specific definition you offer didn't even emerge until ~2023, nearly a decade after Lambda was introduced, so clearly they're not hung up on some kind of definitional purity. Services like Cloud Run figured out that you could keep the server in the application while still exhibiting the spirit of CGI, so it is not like it is a hard technical requirement, but it is the technical solution that originally emerged and was named as such.

If what you are trying to say, and not conveying it well, is that it has become a marketing term for all kinds of different things, you're not wrong. Like I suggested in another comment, there is probably a "Serverless" toaster for sale out there somewhere these days.




