
> Serverless is a marketing term for CGI, and you can observe that serverless is very popular.

No, it's not.

CGI is Common Gateway Interface, a specific technology and protocol implemented by web servers and applications/scripts. The fact that you do a fork+exec for each request is part of the implementation.

"Serverless" is a marketing term for a fully managed offering where you give a PaaS some executable code and it executes it per-request for you in isolation. What it does per request is not defined since there is no standard and everything is fully managed. Usually, rather than processes, serverless platforms usually operate on the level of containers or micro VMs, and can "pre-warm" them to try to eliminate latency, but obviously in case of serverless the user gets a programming model and not a protocol. (It could obviously be CGI under the hood, but when none of the major platforms actually do that, how fair is it to call serverless a "marketing term for CGI"?)

CGI and serverless are only similar in exactly one way: your application is written "as-if" the process is spawned each time there is a request. Beyond that, they are entirely unrelated.
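
To make that one similarity concrete, here's a rough sketch in Python. The CGI half is the real interface (environment variables in, response on stdout); the serverless half assumes a Lambda-style handler receiving an API Gateway-style proxy event, so treat the exact field names there as illustrative rather than definitive:

    #!/usr/bin/env python3
    # CGI: the web server fork+execs this script once per request; the
    # "interface" is environment variables in and stdout out.
    import os
    import sys

    query = os.environ.get("QUERY_STRING", "")
    sys.stdout.write("Content-Type: text/plain\r\n\r\n")
    sys.stdout.write("you asked for: %s\n" % query)

    # Serverless function: the same "one invocation per request" mental
    # model, but expressed as a handler the platform calls; whether it runs
    # in a fresh process, a container, or a pre-warmed micro VM is up to
    # the platform. (Event shape shown is an assumption, API Gateway-style.)
    def handler(event, context):
        query = event.get("queryStringParameters") or {}
        return {"statusCode": 200, "body": "you asked for: %s" % query}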

> A couple of years ago my (now) wife and I wrote a single-event Evite clone for our wedding invitations, using Django and SQLite. We used FastCGI to hook it up to the nginx on the server. When we pushed changes, we had to not just run the migrations (if any) but also remember to restart the FastCGI server, or we would waste time debugging why the problem we'd just fixed wasn't fixed. I forget what was supposed to start the FastCGI process, but it's not running now. I wish we'd used CGI, because it's not working right now, so I can't go back and check the wedding invitations until I can relogin to the server. I know that password is around here somewhere...

> A VPS would barely have simplified any of these problems, and would have added other things to worry about keeping patched. Our wedding invitation RSVP did need its own database, but it didn't need its own IPv4 address or its own installation of Alpine Linux.

> It probably handled less than 1000 total requests over the months that we were using it, so, no, it was not significantly better to not fork+exec for each page load.

> You say "outdated", I say "boring". Boring is good. There's no need to make things more complicated and fragile than they need to be, certainly not in order to save 500 milliseconds of CPU time over months.

To be completely honest with you, I actually agree with your conclusion in this case. CGI would've been better than Django/FastCGI/etc.

Hell, I'd go as far as to say that in that specific case a simple PHP-FPM setup seems like it would've been more than sufficient. Of course, that's FastCGI, but it has the programming model that you get with CGI for the most part.

But that's kind of the thing. I'm saying "why would you want to fork+exec 5000 times per second" and you're saying "why do I care about fork+exec'ing 1000 times in the total lifespan of my application". I don't think we're disagreeing in the way that you think we are disagreeing...



> No, it's not.

It is not strictly limited to the CGI protocol, of course, but it is the marketing term for the concept of the application not acting as the server, a concept that includes CGI applications. CGI applications, like all serverless applications, outsource the server role to another process, such as Apache or nginx. Hence the literal name.

> "Serverless" is a marketing term for a fully managed offering where you give a PaaS

Fully managed offerings are most likely to be doing the marketing, so it is understandable how you might reach that conclusion, but the term is being used to sell to developers. It communicates to them, quite literally, that they don't have to make their application a server, which has been the style for networked applications for a long time now. But if you were writing a CGI application to run on your own systems, it would also be serverless.


The term "serverless" is a generic PaaS marketing term to refer to managed services where you don't have to manage a server to use them, e.g. "Amazon Aurora Serverless". If you're managing CGI scripts on a traditional server, you're still managing a server.

The point isn't really that the application is unaware of the server, it's that the server is entirely abstracted away from you. CGI vs serverless is apples vs oranges.

> [...] but the term is being used to sell to developers. It communicates to them, quite literally, that they don't have to make their application a server [...]

I don't agree. It is being sold to businesses on the promise that they don't have to manage a server. The point is that you're paying someone else to be the sysadmin and getting all of the details abstracted away from you. Appealing to developers by making their lives easier is definitely a perk, but that's not why the term "serverless" exists. Before PaaSes I don't think I ever once saw anyone call CGI "serverless".


> It is being sold to businesses, that they don't have to manage a server.

Do you mean a... computer? Server is a software term. It is a process that listens for network requests.

At least since CGI went out of fashion, embedding a server right in your application has been the style. Serverless sees a return to the application being less a server, pushing the networking bits somewhere else. Modern solutions may not use CGI specifically, but the idea is the same.

If you did mistakenly type "server" when you really meant "computer", PaaS offerings already removed the need for businesses to manage computers long before serverless came around. "Serverless" appeared specifically in reference to the CGI-style execution model, as a literal description of what it is.


> Do you mean a... computer? Server is a software term. It is a process that listens for network requests.

Between this and the guy arguing that UNIX doesn't have "folders" I can see that these kinds of threads bring out the most insane possible lines of rhetoric. Are you sincerely telling me right now you've never seen the term "server" used to refer to computers that run servers? Jesus Christ.

Pedantry isn't a contest, and I'm not trying to win it. I'm not sitting here saying that "Serverless is not a marketing term for CGI" to pull some epic "well, actually..." I'm saying it because God damnit, it's true. Serverless was a term invented specifically by providers of computers-that-aren't-yours to give people options to not need to manage the computers-that-aren't-yours.

They actually use this term serverless for many things, again including databases, where you don't even write an application or a server in the first place; we're just using "serverless" as a synonym for "serverless function", which I am fine to do, but pointing that out is important for more than just pedantry reasons because it helps extinguish the idea that "serverless" was ever meant to have anything to do with application design. It isn't and doesn't.

Serverless is not a marketing term for CGI. Not even in a relaxed way, it's just not. The selling point of Serverless functions is "you give us your request handler and we'll handle running it and scaling it up".

This has nothing to do with the rise of embedding a server into your application.


> Serverless is not a marketing term for CGI. Not even in a relaxed way, it's just not. The selling point of Serverless functions is "you give us your request handler and we'll handle running it and scaling it up".

That was the selling point of CGI hosting though. Except that the "scaling it up" part was pretty rare. There were server farms that ran CGI scripts (NCSA had a six-server cluster with round-robin DNS when they first published a paper describing how they did it, maybe 01994) but the majority of CGI scripts were almost certainly on single-server hosting platforms.


With NCSA HTTPd I'm pretty sure it was literally the only way to do dynamic things at least initially. Which makes sense for the time period, I mean it's the same basic idea as inetd but for HTTP and with some differing implementation details.

Is the selling point of shared hosting and "serverless" PaaS platforms similar? To an extent it definitely is, but I think another major selling point of shared hosting was the price. For a really long time it was the only economically sane option, and even when cheap low end VPS options (usually OpenVZ-based) emerged, they were usually not as good for a lot of workloads as a similarly priced shared hosting option.

But at that point, we're basically debating whether or not the term "serverless" has merit, and that's not an argument I plan to make. I'm only trying to make the argument that serverless is about the actual abstraction of traditional server machines. Shared hosting is just about having someone else do it for you. These are similar, but different.


I agree that it's very much like inetd. Or the Unix shell, which launches one or more processes for each user command.

But, no, you could very easily edit the httpd source to do the dynamic things and recompile it. As an example of what you could do, stock NCSA httpd supported "server-side includes" very early on, definitely in 01994, maybe in 01993. The big advantage of CGI was that it decoupled the management of the server as a whole from particular gateway programs. It didn't take all that long for people to start writing their gateways in languages that weren't C, of course, and that was a different benefit of CGI. (If you were running Plexus instead, you could hack Perl dynamic things into your server source code.) And running the CGI (or SSI) as the user who owned the file instead of as the web server came years later.

By "abstraction of traditional server machines" do you mean "load balancing"? Like, so that your web service can scale up to handle larger loads, and doesn't become unavailable when a server fails, and your code has access to the same data no matter which machine it happens to get run on? Because, as I explained above, NCSA (the site, not NCSA httpd at other sites) was doing that in the mid-90s. Is there some other way that AWS Lambda "abstracts" the servers from the point of view of Lambda customers?

With respect to the price, I guess I always sort of assumed that the main reason you'd go with "serverless" offerings rather than an EC2 VPS or equivalent was the price, too. But certainly not having to spend any time configuring and monitoring servers is an upside of CGI and Lambda and Cloud Run and whatever other "serverless" platforms there are out there.


> Serverless was a term invented specifically by providers of computers-that-aren't-yours to give people options to not need to manage the computers-that-aren't-yours.

No. "Cloud" was the term invented for that, inherited from networking diagrams where it was common to represent the bits you don't manage as cloud figures. Usage of "Serverless" emerged from AWS Lamba, which was designed to have an execution model much like CGI. "Severless" refers to your application being less a server. Lamba may not use CGI specifically, but the general idea is very much the same.


Okay. Let's ask Amazon since they invented the term:

> Serverless computing is an application development model where you can build and deploy applications on third-party managed server infrastructure. All applications require servers to run. But in the serverless model, a cloud provider manages the routine work; they provision, scale, and maintain the underlying infrastructure. The cloud provider handles several tasks, such as operating system management, security patches, file system and capacity management, load balancing, monitoring, and logging. As a result, your developers can focus on application design and still receive the benefits of cost-effective, efficient, and massively scalable server infrastructure.

Right. And that makes sense. Because again, what we're talking about when we're talking about AWS Lambda is serverless functions. But AWS also uses the term for other things that are "serverless", again, like Aurora Serverless. Aurora Serverless is basically the same idea, just applied to a database: the infrastructure is abstracted. This effectively means the database can transparently scale from 0 to whatever the maximum instance size Amazon supports, without a human managing database instances.

That's also the same idea for serverless functions. It's not about whether your application has a "server" in it.


> Serverless computing is an application development model where you can build and deploy applications on third-party managed server infrastructure. All applications require servers to run. But in the serverless model, a cloud provider manages the routine work; they provision, scale, and maintain the underlying infrastructure. The cloud provider handles several tasks, such as operating system management, security patches, file system and capacity management, load balancing, monitoring, and logging. As a result, your developers can focus on application design and still receive the benefits of cost-effective, efficient, and massively scalable server infrastructure.

The only word of this that is not a description of old-fashioned shared CGI hosting is "massively scalable". (And maybe "efficient".)


> Serverless computing is an application development model

Exactly. And how that development model differs from the traditional approach is that you don't have to implement a server. Deployment isn't a development model. The development is necessarily done by the time you get there.

> But AWS also uses the term for other things

The term has expanded to be used for all kinds of different things, sure. There is probably a toaster out there somewhere sold as being "serverless" nowadays.

If we really want to get into the thick of it, "serverless" seems to go back much further, used to refer to certain P2P systems. But we know from context that isn't what we're talking about. Given the context, it is clear we are talking about "serverless" as it emerged out of Lambda, referring to systems that were CGI-esque in nature.


It's funny how you added that part even though Amazon's own description continues in a completely different way that doesn't emphasize this at all. That's not a mistake on Amazon's part; it's not that they forgot to mention it. The reason why it's not there is because it's not actually the point.

You're reading "application development model" and thinking "Exactly! It's all about the request handling model!" but that's not what Amazon said or meant. Consider the description of Amazon Fargate, a service that in fact can be used to run regular old web servers:

> AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers.

I guess the next argument is that Amazon is just diluting the term and originally it meant what you think it meant, and that is the terminal state of this debate since there is nothing more productive to say.

Edit: you added more, but it's just more attempts to justify away things that are plainly evident... But I can't help myself. This is just nonsense:

> Deployment isn't a development model,

Software development is not just writing code.


> Software development is not just writing code.

But it remains that deployment is normally considered to be independent of development. If you put your binaries on a CD instead of sending them to AWS, the application will still be considered developed by most people. Deployment is a post-development activity.

> I guess the next argument is that Amazon is just diluting the term

Could be. Would it matter? The specific definition you offer didn't even emerge until ~2023, nearly a decade after Lambda was introduced, so clearly they're not hung up on some kind of definitional purity. Services like Cloud Run figured out that you could keep the server in the application while still exhibiting the spirit of CGI, so it is not like it is a hard technical requirement, but it is the technical solution that originally emerged and was named as such.

If what you are trying to say, and not conveying it well, is that it has become a marketing term for all kinds of different things, you're not wrong. Like I suggested in another comment, there is probably a "Serverless" toaster for sale out there somewhere these days.


> If you're managing CGI scripts on a traditional server, you're still managing a server.

Usually somebody else is managing the server, or servers, so you don't have to think about it. That's been how it's worked for 30 years.

> Before PaaSes I don't think I've ever seen anyone once call CGI "serverless".

No, because "serverless" was a marketing term invented to sell PaaSes because they thought that it would sell better than something like "CloudCGI" (as in FastCGI or SpeedyCGI, which also don't use the CGI protocol). But CGI hosting fits cleanly within the roomy confines of the term.


Oh my god! This could go on forever.

Having a guy named Steve manage your servers is not "serverless" by my definition, because it's not just about you personally not having to manage the server, it's about nobody personally having to manage it. AWS Lambda is managed by Amazon as a singular giant computer spawning micro VMs. And sure, yes, some human has to sit there and do operations, but the point is that they've truly abstracted the concept of a running server from both their side and yours. It's abstracted to the degree that even asking "what machine am I running on?" doesn't have a meaningful answer, and if you did have the answer you couldn't do anything with it.

Shared hosting with a cgi-bin is closer to this, but it falls short of fully abstracting the details. You're still running on a normal-ish server with shared resources and a web server configuration and all that jazz, it's just that you don't personally have to manage it... But someone really does personally have to manage it.

And anyway, there's no reason to think that serverless platforms are limited to things that don't actually run a server. On the contrary, there are "serverless" platforms that run servers! Yes, truly, as far as I know containers running under Cloud Run are in fact normal HTTP servers. I'm actually not an expert on serverless despite having to be on this end of the argument, but I'll let Google speak for what it means for Cloud Run to be "serverless":

> Cloud Run is a managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most — building great applications.

These PaaSes popularized the term to mean this from the get-go; just because you have passionately formed a belief that it ever meant something else doesn't change a thing.


> On the contrary there are "serverless" platforms that run servers!

That's the trouble when a term catches on — everyone wants to jump all over it and use it as they please.

This is hardly a unique situation. Look at SQL. According to the very creator of the relational model, SQL isn't relational, but the SQL specification latched onto the term anyway because it was trendy to do so. As a result, today, I think it is fair to say that "relational" has taken on a dual meaning, referring both to the model as originally conceived and to what SQL created.

If you wish to maintain that "serverless" now refers to both an execution model and outsourced management of computer systems, I think that is fair. However, it is apparent that "serverless" was originally popularized by Lambda, named as such due to its CGI-inspired execution model. Other angles came later.


Codd was happy enough to tout SQL as "relational" in his Turing Award address! Maybe you mean Date? He was involved from early on but didn't invent it.

I do think that SQL falls short of the relational-data-bank ideal in a number of important ways, and I mostly agree with Date on them. I just don't agree with Date's saying he's not contradicting Codd's early work.


OK, good point. Let's see how Amazon describes the selling point of AWS Lambda in the original press release from 2014; in fact, one so early that it's not even the final draft[1]. Surely it will mention something about developers no longer having to write network server applications, since (apparently) that is what the "server" in "serverless" refers to (although this draft actually predates the term "serverless" entirely).

> SEATTLE – (Nov XX, 2014) – Amazon Web Services LLC (AWS), an Amazon.com company (NASDAQ:AMZN), today announced the introduction of AWS Lambda, the simplest way to run code in the cloud. Previously, running code in the cloud meant creating a cloud service to host the application logic, and then operating the service, requiring developers to be experts in everything from automating failover to security to service reliability. Lambda eliminates the operational costs and learning curve for developers by turning any code into a secure, reliable and highly available cloud service with a web accessible end point within seconds. Lambda uses trusted AWS infrastructure to automatically match resources to incoming requests, ensuring the resulting service can instantaneously scale with no change in performance or behavior. This frees developers to focus on their application logic – there is no capacity planning or up-front resource type selection required to handle additional traffic. There is no learning curve to get started with Lambda – it supports familiar platforms like Java, Node.js, Python and Ruby, with rich support for each language’s standard and third-party libraries. Lambda is priced at $XXX for each request handled by the developer’s service and $YYY for each 250ms of execution time, making it cost effective at any amount of usage. To get started, visit aws.amazon.com/lambda.

Let me emphasize some points here:

> Previously, running code in the cloud meant creating a cloud service to host the application logic...

> then operating the service, requiring developers to be experts in everything from automating failover to security to service reliability...

> Lambda eliminates the operational costs and learning curve for developers by turning any code into a secure, reliable and highly available cloud service with a web accessible end point within seconds.

> there is no capacity planning or up-front resource type selection required to handle additional traffic

It is genuinely impressive how devastatingly, horrifically incorrect the idea is that "serverless" ever had anything to do with whether your application binary has a network request server in it. It's just not a thing.

We can talk about the parallels between CGI servers and Lambda all day and all night, but I am not letting this nonsense go. Serverless is not a marketing term for CGI.

[1]: https://www.allthingsdistributed.com/2024/11/aws-lambda-turn...


This is great, thanks for digging it up!

It does support the thesis that Amazon was attempting to prevent customers from realizing that what they were offering was basically CGI on a big load-balanced server farm, by claiming that it was something radically new that you couldn't get before, but their value proposition is still just the value proposition of shared CGI hosting. On a big load-balanced server farm. Which, to be perfectly fair, probably was bigger than anyone else's.

There is one major difference—the accounting, where you get charged by the megabyte-millisecond or whatever. Service bureaus ("cloud computing vendors") in the 01960s did do such billing, but Linux shared CGI hosts in the 01990s generally didn't; accton(8) doesn't record good enough information for such things. While in some sense that's really the value proposition for Amazon rather than the customer, it gives customers some confidence that their site isn't going to go down because a sysadmin decided they were being a CPU hog.

I agree that there's no evidence that they were talking about "servers" in the sense of processes that listen on a socket, rather than "servers" in the sense of computers that those processes run on.

Just to be clear, I know I'm not going to convince you of anything, but I'm really appreciating how much better informed I'm becoming because of this conversation!


> It does support the thesis that Amazon was attempting to prevent customers from realizing that what they were offering was basically CGI on a big load-balanced server farm, by claiming that it was something radically new that you couldn't get before, but their value proposition is still just the value proposition of shared CGI hosting. On a big load-balanced server farm. Which, to be perfectly fair, probably was bigger than anyone else's.

I am loath to be of service defending Amazon's marketing BS, but I think you're saying the selling point of AWS Lambda is that it's "like CGI", and that serverless functions are substantially equivalent to CGI. I disagree. The programming model of serverless functions is definitely substantially equivalent to CGI, but the selling point of serverless functions isn't the "functions" part, it's the "serverless" part. It would've had the exact same draw if it had kept a very similar programming model (the Lambda SDK makes your application look like a typical request-handling server, probably for development purposes) but ran multi-request servers under the hood; as long as it had the same billing and management, most people would've been happy with it. The thing that unites Fargate and Lambda in being "serverless" is the specific way they're abstracting infrastructure.

Amazon could have launched, and could still launch, something like CloudCGI if they wanted to, and if it used the same model as Lambda I'm sure it'd be successful. If I had to guess why they didn't, the less cynical answer is that they just felt it was outdated and wanted to make something new and shiny with a nice developer experience. The more cynical answer, vendor lock-in, is probably truer. Even if they did launch something like "CloudCGI", though, it would still be a very big departure from anything people called "CGI hosting".


> but the selling point of serverless functions isn't the "functions" part, it's the "serverless" part.

Yup. That's right. The removal of the server from your application is the selling point. As you subtly hint at, exposing the application as a "function" is a clever API design to facilitate that, but I'm sure you could imagine other ways to achieve the same. It is the "serverless" part that is significant.

HTTP servers, while simple to implement in terms of basic function, aren't easy to design well. There are a lot of more complex considerations, like around security and scaling, which are easy to screw up if you don't have a good handle on what you are doing. All alleviated by just letting someone else, who specializes in it, write the server code for you. If you take the server out of your application, making it "serverless", it becomes much more difficult (I really want to say impossible, but abstractions always leak eventually) to screw that end up.

Which is a very compelling business case, allowing businesses to hire people who aren't experts (read: cheaper) in the intricacies of low level technical details that are outside of their core business logic. That's the selling feature, just like you (and Amazon) say.

Glad we got away from that bizarre definition, the one where "serverless" somehow takes physical hardware out of the picture. Maybe it has also taken on that meaning (as nonsensical as that idea is) as the term twisted in the winds of popularity (definitions are most definitely influenced by time), but it certainly didn't originate in that vein.


> Lambda eliminates the operational costs and learning curve for developers by turning any code into a secure, reliable and highly available cloud service with a web accessible end point within seconds.

Yup. "turning any code into a cloud service". In other words: No need to write complicated server-bits that can be easy to screw up. Just write a "function" that accepts inputs and returns outputs and let something else will worry about the rest. Just like CGI (in sprit).

It is great that you were willing to share this, as it proves without a doubt that Amazon was thinking about (the concept of) CGI during this time. But perhaps all you've been trying to say, poorly, is that "serverless" is no longer limited to marketing just one thing these days?


> Having a guy named Steve manage your servers is not "serverless" by my definition, because it's not about you personally having to manage the server, it's about anyone personally having to manage it. AWS Lambda is managed by Amazon as a singular giant computer

Well, that's sort of true of AWS Lambda, but it's just as true of EC2 and EBS, which aren't called "serverless". Moreover, "serverless" is a marketing term used to sell the service to Amazon customers, who can't tell whether or not there's a guy named Steve working at Amazon who lovingly nurtures each server†, or whether Amazon manages their whole Lambda farm as a giant herd of anonymous nodes, so I don't think it makes sense to argue that this is what it's intended to mean. As you point out, it's kind of a nonsense term since the code does in fact run on servers. I believe you were correct the first time in your earlier comments that you are now contradicting: they call it "serverless" because the customer doesn't have to manage servers, not because their own employees don't have to manage servers (except collectively).

> enables you to run stateless containers that are invocable via HTTP requests. (...) abstracts away all infrastructure management

This is a precise description of the value proposition that old-fashioned CGI hosting offers to hosting customers. (The containers are processes instead of KVM machines like Firecracker or cgroups like Docker, but "container" is a pretty generic term.)

So I think you've very clearly established that CGI scripts are "serverless" in the sense that Google's marketing uses, and, in https://news.ycombinator.com/item?id=44512427, the sense that Amazon's marketing uses.

______

† Well, Steve would probably cost more than what Amazon charges, so customers may have a pretty good guess, but it could be a loss leader or something.


CGI and other "serverless" technologies have essentially the same benefits and drawbacks. Sometimes an AWS Lambda function has longer startup time than if you had a running process already waiting to service a web request, because it's spinning up (AFAIK) an entire VPS. So all the arguments for "serverless" are also arguments for CGI, and all the arguments against CGI are arguments against "serverless".

That's the sense in which I mean "Serverless is a marketing term for CGI." But you're right that it's not, strictly speaking, true, because (AFAIK, e.g.) AWS doesn't actually use the CGI protocol in between the parts of their setup, and I should have been clear about that.

PHP is great as a runtime, but it sucks as a language, so I didn't want to use it. Django in regular CGI would have been fine; I just didn't realize that was an option.
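
For the record, the glue for that turns out to be tiny: the standard library's wsgiref can run any WSGI app, Django included, as a plain CGI script. A minimal sketch, assuming a project whose settings module is mysite.settings (the name is just a placeholder):

    #!/usr/bin/env python3
    # rsvp.cgi -- run a Django (WSGI) application as an ordinary CGI script.
    # wsgiref.handlers.CGIHandler reads the CGI environment, feeds it to the
    # WSGI app, and writes the response to stdout, once per request.
    import os

    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")  # placeholder name

    from django.core.wsgi import get_wsgi_application
    from wsgiref.handlers import CGIHandler

    CGIHandler().run(get_wsgi_application())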


> CGI and other "serverless" technologies have essentially the same benefits and drawbacks. Sometimes an AWS Lambda function has longer startup time than if you had a running process already waiting to service a web request, because it's spinning up (AFAIK) an entire VPS. So all the arguments for "serverless" are also arguments for CGI, and all the arguments against CGI are arguments against "serverless".

Honestly this isn't even the right terminology. The point of "serverless" is that you don't manage a server. You can, for example, have a "serverless" database, like Aurora Serverless or Neon; those do not follow the "CGI" model.

What you're talking about is "serverless functions". The point of that is still that you don't have to manage a server, not that your function runs once per request.

To make it even clearer, there is also Google Cloud Run, which is another "serverless" platform that runs request-oriented applications, except it actually doesn't use the function call model. Instead, it runs instances of a stateful server container on-demand.
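
Concretely, a Cloud Run workload is just a long-running HTTP server that listens on the port the platform hands it (Cloud Run passes it in the PORT environment variable). A stdlib-only sketch of the kind of thing it runs:

    # An ordinary, long-running HTTP server of the sort Cloud Run runs in a
    # container. Cloud Run injects the port to listen on via $PORT.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello from a normal server process\n"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        port = int(os.environ.get("PORT", "8080"))
        HTTPServer(("0.0.0.0", port), Handler).serve_forever()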

Is "serverless functions" just a marketing term for CGI? Nope. Again, CGI is a non-overlapping term that refers to a specific technology. They have the same drawbacks as far as the programming model is considered. Serverless functions have pros and cons that CGI does not and vice versa.

> because it's spinning up (AFAIK) an entire VPS

For AWS Lambda, it is spinning up Firecracker instances. I think you could conceivably consider these to not be entire VPS instances, even though they are hardware virtualization domains.

But actually it can do things that CGI does not, since all that's prescribed is the programming model and not the execution model. For example, AWS Lambda can spin up multiple instances of your program and freeze them right before the actual request is sent, then resume them right when the requests start flowing in. And like, yeah, I suppose you could build something like that for CGI programs, or implement "serverless functions" that use CGI under the hood, but the point of "serverless" is that it abstracts the "server" away, and the point of CGI was that it let you run scripts under NCSA HTTPd to handle requests.

Because the programming models are compatible, it would be possible to adapt a CGI program to run under AWS Lambda. However, the reverse isn't necessarily true, since AWS Lambda also supports doing things that CGI doesn't, like servicing requests other than HTTP requests.
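
To sketch the first direction (purely illustrative: the script path is hypothetical, and I'm assuming an API Gateway-style proxy event with httpMethod, path, queryStringParameters, and body fields):

    # Hypothetical adapter: run an existing CGI program inside a Lambda handler.
    import os
    import subprocess

    CGI_SCRIPT = "./legacy-app.cgi"  # hypothetical path to the old CGI program

    def handler(event, context):
        # Rebuild the environment a CGI-era web server would have provided.
        env = dict(os.environ,
                   GATEWAY_INTERFACE="CGI/1.1",
                   REQUEST_METHOD=event.get("httpMethod", "GET"),
                   PATH_INFO=event.get("path", "/"),
                   QUERY_STRING="&".join(
                       "%s=%s" % (k, v)
                       for k, v in (event.get("queryStringParameters") or {}).items()),
                   CONTENT_LENGTH=str(len(event.get("body") or "")))
        # fork+exec once per invocation, feeding the request body on stdin.
        proc = subprocess.run([CGI_SCRIPT],
                              input=(event.get("body") or "").encode(),
                              env=env, capture_output=True, check=True)
        # A CGI response is headers, then a blank line, then the body.
        head, _, body = proc.stdout.replace(b"\r\n", b"\n").partition(b"\n\n")
        headers = dict(line.split(": ", 1)
                       for line in head.decode().splitlines() if ": " in line)
        # A fuller version would honor the CGI "Status:" header; 200 is assumed.
        return {"statusCode": 200, "headers": headers, "body": body.decode()}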

Saying that "serverless is just a marketing term for CGI" is wrong in a number of ways, and I really don't understand this point of contention. It is a return to a simpler CGI-like programming model, but it's pretty explicitly about the fact that you don't have to manage the server...

> PHP is great as a runtime, but it sucks as a language, so I didn't want to use it. Django in regular CGI would have been fine; I just didn't realize that was an option.

I'm starting to come back around to PHP. I can't deny that it has some profound ugliness, but they've sincerely cleaned things up a lot and made life generally better. I like what they've done with PHP 7 and PHP 8 and think that it is totally suitable for simple one-off stuff. And package management with Composer seems straightforward enough for me.

To be completely clear, I still haven't actually started a new project in PHP in over 15 years, but my opinion has gradually shifted and I fear I may see the day when I return.

I used to love Django, because I thought it was a very productive way to write apps. There are things that Django absolutely gets right, like the built-in admin panel; it's just amazing to have for a lot of things. That said, I've fallen off with Django and Python.

Python may not have as butt-ugly a past as PHP, but it has aged poorly for me. I feel like it is an easy language to write bugs in. Whereas most people agree that TypeScript is a big improvement for JavaScript development, I think many would argue that the juice just isn't worth the squeeze with gradual typing in Python, and I'd have to agree; the type checking and the ecosystem around it in Python just make it not worth the effort. Surprisingly, PHP actually pulled ahead here, adding type annotations with some simple run-time checking, making it much easier to catch a lot of bugs that were once very common in PHP.

Django has probably moved on and improved since I was last using it, but I definitely lost some of my appreciation for it. For one thing, while it has a decent ecosystem, it feels like that ecosystem is just constantly breaking. I recall running into so many issues migrating across Django versions, and dealing with things like static files. Things that really should be simple...


I appreciate the notes on the different nuances of "serverless".

I think you might not be very familiar with how people typically used CGI in the 01990s and 02000s, because you say "[serverless] is a return to a simpler CGI-like programming model, but it's pretty explicitly about the fact that you don't have to manage the server..." when that was the main reason to use CGI rather than something custom at the time; you could use a server that someone else managed. But you seem to think it was a difference rather than a similarity.

Why do you suppose we were running our CGI scripts under NCSA httpd before Apache came out? It wasn't because the HTTP protocol was super complicated to implement. I mean, CGI is a pretty thin layer over HTTP! But you can implement even HTTP/1.1 in an afternoon. It was because the guys in the computer center had a server (machine) and we didn't. Not only didn't we have to manage the server; they wouldn't let us!
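
(To put "an afternoon" in perspective, here's a toy sketch of the shape of the thing. It is nowhere near real HTTP/1.1, which also wants persistent connections, chunked encoding, Host handling, and so on, but the wire format itself really is this simple:)

    # Toy HTTP responder: read a request, write a response, close. The
    # point is only that the wire protocol is thin; everything hard about
    # running a real server lives elsewhere.
    import socket

    srv = socket.create_server(("0.0.0.0", 8000))
    while True:
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(65536).decode("latin-1", "replace")
            path = request.split(" ")[1] if " " in request else "/"
            body = ("you asked for %s\n" % path).encode()
            conn.sendall(b"HTTP/1.1 200 OK\r\n"
                         b"Content-Type: text/plain\r\n"
                         b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                         b"Connection: close\r\n\r\n" + body)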

As for Python, yeah, I'm pretty disenchanted with Python right now too, precisely because the whole Python ecosystem is just constantly breaking. And static files are kind of a problem for Django; it's optimized for serving them from a separate server.



