Just to add here, even aside from these trade-offs...
The main alternative is to host the hardware yourself; it might well be enough.
Cool. How many people here are also great sysadmins? Probably a very small number. That's not really an alternative. And furthermore, most of the counterpoint doesn't really exist.
They mainly just pointed out that "yeah, devops is hard, and lots of things devops does seem simple but aren't", but they didn't make an argument that AWS is somehow overengineering that you could avoid if you would only do XYZ and remember that you're probably operating at small scale.
> How many people here are also great sysadmins? Probably a very small number.
I am. And there probably aren’t that many around here, due to the extreme prejudice and derision Developers direct towards Sysadmins. We’re apparently a bunch of low-skill, knuckle-dragging hardware monkeys, while at the same time able to do things so difficult that no developer can figure them out, so they just go to the cloud instead “to avoid learning all that stuff”.
No, hosting the hardware yourself is not the main alternative.
Very few non-cloud users actually host the hardware themselves. You can rent dedicated servers or even VMs just about anywhere. The hosting company manages and maintains the hardware as part of the monthly price.
A lot of small sites can probably do great without a sysadmin or AWS. If you think about it, AWS can be a premature optimization when just running your site off a Raspberry Pi in your basement will do.
Heh, technically t2 instances are burstable, so you can't control when you'll get throttled off the CPU once your credits run out. With a RasPi you have full control of the compute. Of course, the t2 has faster transit to pretty much any IX, is in a DC with a UPS, has an SLA for staying up, etc., so from a networking perspective it's leaps and bounds better. Depends on whether you need the compute or not.
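If you want to know how close you are to getting throttled, CloudWatch publishes a CPUCreditBalance metric for burstable instances. A quick boto3 sketch; the instance id and region are placeholders:

    # Sketch: watch a burstable instance's remaining CPU credits.
    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    now = datetime.now(timezone.utc)

    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,  # 5-minute datapoints
        Statistics=["Average"],
    )
    for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])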
Right, so if and when you determine you actually need more compute, it's trivial to bump the instance type. Not so much in the basement, seems pretty pointless.
If you mean re-attaching your EBS storage to a bigger instance type, you can just remove the SD card and insert it into a bigger computer (or clone it to a hard drive).
You don't need to manually manage EBS. Shut down the instance, change the instance type, boot.
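For anyone who hasn't done it: the whole resize is a handful of API calls plus waiting. A boto3 sketch, with placeholder instance id and target type; note the type can only be changed while the instance is stopped:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"  # placeholder

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # EBS-backed instances keep their root volume across stop/start,
    # so no manual volume re-attachment is needed.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": "t3.large"},  # placeholder target type
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])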
I guess you could buy a Raspberry Pi with more memory (but not more cores) and swap the SD card, but beyond that you're stuck. An aarch64 rootfs will not boot on commodity x86 hardware. SD card performance and reliability are many times worse than EBS anyway.
Put nginx in front of a bunch of Pis and load balance them! Just clone the SD card and distribute it to the rest. I mean, you'll have to scale horizontally at some point even on EC2, and it's roughly the same complexity. If the value prop of EC2 is hardware abstraction, I don't think it offers that much over Raspberry Pis, unless you're going for higher-performing hardware.
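The nginx half of that is only a few lines. A sketch, with hypothetical LAN addresses for the Pis (round-robin is the default balancing method):

    upstream pis {
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
        server 192.168.1.103:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://pis;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }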
This feels disingenuous, like pushing a DIY narrative, similar to what the OP is doing. Whether you think it offers value or not, a Raspberry Pi at home is _far_ from a cloud VPS. A cloud VPS sits behind a net connection with an uptime SLA, has redundant power, can have an SLA of its own, is not NATed, has a fast NIC, and is usually low latency to transit because of presence in an IX. If you're equating the two, you don't understand why datacenters exist. You can _reject_ this value, but you'd need to be clear about what your expectations are out of them, IMO. It's fair to compare a colocated host or a VPS from another cloud provider (like DO or Hetzner) with AWS, because those come with most of the above (just usually worse transit), but to deny that there's value in a datacenter feels like willful ignorance.
Just FYI, residential ISPs are absolutely terrible. Even my ISP, a FTTH one, has uptime lower than 99%. And that's aside from the layers of CGNAT and the nonexistent SLA on your own home equipment. If you're willing to run a product on this kind of RasPi infrastructure, feel free, but don't claim there isn't any value in a datacenter.
My point is that a lot of people don't need all those things. As the article points out, a managed dedicated server offers most of those benefits and the redundancy plus "easy scaling" of AWS is not even necessary if your goal is just to avoid debugging hardware issues.
It's easy to set up a home Raspberry Pi. It's easy to go one step up and set up a colocated Raspberry Pi. It's easy to set up a managed dedicated server. Auto-scaling virtually provisioned hardware is great for a big company like Netflix with actual daily fluctuating demand. Most people don't need such advanced scaling features.
I previously had Sonic.net, which was one of the best fiber offerings for consumers available. 1 Gbps up and down, unmetered, and it was pretty much never down during the period I had it.
Maybe I don't want to learn Raspberry Pi administration to start my little SaaS idea. Searching and clicking around the AWS console to get some MVP up and running cannot be beaten that easily. Can I get a Raspberry Pi site for a few hundred dollars a month, my own effort included?
I'd argue that, for an average software engineer, setting up a Raspberry Pi is going to be easier than setting up an EC2 instance. It almost sounds like you've never actually clicked around the AWS console to get an MVP running before.
Sorry if this is too harsh, but if you can't figure out how to run an app on a Raspberry Pi then I definitely wouldn't trust you to click around the AWS console.
Tradeoffs. If you can run your service on a residential internet SLA (or lack thereof) with residential CGNAT, and you have the ops chops to maintain Linux installs yourself, then do it. If you can run your business with a business-consumer SLA do it. AWS is much more reliable than that.
Realistic AWS non-cloud alternatives are either colo-ing in a DC, using another semi-cloud provider (OVH, Linode, Hetzner, et al.), or buying a DIA circuit for your office and running your own servers from there.
Well, you still have to patch and secure your OS on EC2. As far as EC2 goes, AWS is really just taking over the hardware portion and making it somewhat easier to scale. But rapid scaling is not something most apps do, even big ones.
If we're talking about serverless then I think that containerization (either running containers on bare metal, or on kubernetes) changes the value prop of serverless a lot, because if you've got a container environment you can easily just clone an off-the-shelf production-ready container to deploy your app.
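For example, deploying something like a blog can literally be one off-the-shelf image; the tag and port mapping here are just examples:

    docker run -d --restart unless-stopped -p 8080:80 wordpress:6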
As far as your host OS goes, get a production-ready image to run your production-ready container images. It's a one-time thing. Keeping the OS updated? This is not brain surgery every time there's an update. Plus you can configure it to automatically install security updates.
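On Debian/Ubuntu, for instance, the stock unattended-upgrades package handles this with two lines of apt config:

    # apt install unattended-upgrades, then in /etc/apt/apt.conf.d/20auto-upgrades:
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";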
> Well, you still have to patch and secure your OS on EC2. As far as EC2 goes, AWS is really just taking over the hardware portion and making it somewhat easier to scale. But rapid scaling is not something most apps do, even big ones.
You're stuck on this theme that AWS is somehow synonymous with scaling. That's one benefit of AWS, but it's not the only one. At small scale the margins on AWS are peanuts. A t4g.micro is $6.15/mo. The equivalent on Digital Ocean is $5/mo. Buying your own Raspberry Pi 4 would be ~$70 with an enclosure/peripherals, so you'd break even at about 11 months of running your t4g ($70 / $6.15 ≈ 11.4). This isn't counting the power used (which would probably be minimal on a Raspberry Pi). That overhead is nothing.
> As far as your host OS goes, get a production-ready image to run your production-ready container images. It's a one-time thing. Keeping the OS updated? This is not brain surgery every time there's an update. Plus you can configure it to automatically install security updates.
There's more to it than that. You're just thinking about running software, not about how packets get from a user's machine to your running software. Most residential connections don't come with a stable/static IPv4. You can update a DNS entry with your changing IP, but then you're down for however long it takes you to change your A record plus however long it takes your domain's TTLs to expire. If you pay for a static IPv4, then you've already paid more than what you'd pay for a cloud VPS. Then there's the fact that residential ISPs block tons of ports, have no SLAs on uptime, can drop your traffic without warning or recourse, etc.
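And even if you script the DNS update, the script isn't the problem; the TTL window is. A boto3/Route 53 sketch, where the zone id, record name, and TTL are placeholders:

    import urllib.request

    import boto3

    # checkip.amazonaws.com returns your current public IP as plain text.
    ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

    boto3.client("route53").change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",  # placeholder
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "home.example.com",  # placeholder
                    "Type": "A",
                    "TTL": 60,  # even a short TTL means up to a minute of downtime
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )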
If you're running a tiny, mostly-static site with minimal uptime requirements, then you'll pay less and spend much less effort using a shared web hosting platform. They'll do all the ops for you and charge you peanuts, since these providers usually colo their own machines and run hundreds of sites on each. Dreamhost can serve a WordPress site for $1.99/mo with no ops work required; the $70 Raspberry Pi setup costs as much as 35 months of that.
A Raspberry Pi is $35 for the latest model, and you definitely do not need an enclosure. You do need a power adapter, which may be around $10. But also, an older Raspberry Pi can be even cheaper!
I'm not advocating for people hosting sites on a Raspberry Pi in their homes, but it's certainly easy to do, and if your operation is small enough and you have an extra computer lying around, the cost is actually near $0.
Pretty much every ISP I've used in the US has had mostly static IPs. They usually didn't change unless the modem got rebooted. This is good enough for your Minecraft server or unimportant personal website.
But of course, if your residential internet is no longer serving your hosting needs, you can go a step up and get a virtual host or colocate your old computer! Yes, colocation will cost a lot more than a virtual host on a shared PHP host, but you also get more computing bang for your buck.