
Old school colo/dedicated servers/etc. There's something delightfully simple about only having to deal with "standard" hardware failures.


Not to mention that unless you have very unusual traffic patterns (spinning up lots of servers for short periods of time), colo/dedicated servers will usually be vastly cheaper than EC2, especially because with a little bit of thought you can get servers that are a substantially better fit for your use.

E.g. I'm currently about to install a new 2U chassis in one of our racks. It holds 4 independent servers, each with dual 6-core 2.6GHz Intel CPUs, 32GB RAM and an SSD RAID subsystem that easily gives 500MB/sec throughput.

Total leasing cost + cost of a half rack in that data centre + 100Mbps of bandwidth is ~ $2500/month. Oh, and that leaves us with 20U of space for other servers, so every additional one adds $1500/month for the next 7-8 or so of them (when counting some space for switches and PDUs). Amortized cost of putting 2U with 100Mbps in that data centre is more like $1700/month.
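The amortization above can be sanity-checked with a quick back-of-envelope calculation; this is just a sketch of the commenter's figures, and the exact per-2U number depends on how many extra chassis actually fit in the half rack:

```python
# Back-of-envelope check of the amortized 2U cost quoted above.
# Assumptions: $2500/mo covers the first 2U chassis plus the half rack
# and 100Mbps; each of the next ~7 chassis adds $1500/mo.
first_chassis = 2500   # $/month, includes rack space + bandwidth
additional = 1500      # $/month per extra 2U chassis
n_additional = 7       # "next 7-8 or so"; 7 used here

total = first_chassis + additional * n_additional
per_2u = total / (1 + n_additional)
print(round(per_2u))  # -> 1625, in the ballpark of the quoted ~$1700
```

With fewer additional chassis (to leave room for switches and PDUs) the per-2U figure creeps up toward the quoted ~$1700.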

Amazon doesn't have anything remotely comparable in terms of performance. To be charitable to EC2, at the low end we'd be looking at 4 x High Mem Quadruple Extra Large instances + 4 x EBS volumes + bandwidth, and end up in the $6k region (adding the extra memory to our servers would cost us an extra $100-$200/month in leasing cost, but we don't need it). But the EBS IO capacity is simply nowhere near what we see from a local high end RAID setup with high end SSDs, and disk IO is usually our limiting factor. More likely we'd be looking at $8k-$10k to get anything comparable through a higher number of smaller instances.

I get that developers like the apparent simplicity of deploying to AWS. But I don't get companies that stick with it for their base load once they grow enough that the cost overhead could easily fund a substantial ops team... Handling spikes or bulk jobs that are needed now and again, sure. As it is, our operations cost in man hours spent, for 20+ chassis across two colos, is ~$120k/year. That's $10k/month, or $500 per chassis. So consider our fully loaded cost per box at ~$2200/month for a quad-server chassis of the level mentioned above with reasonably full racks. Let's say $2500 again to be charitable to EC2...
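The fully loaded figure is just the amortized hardware cost plus the per-chassis share of the ops spend; a minimal sketch using the numbers above:

```python
# Sketch of the fully loaded per-chassis cost from the comment.
ops_per_year = 120_000        # man-hour cost across both colos
chassis = 20                  # "20+ chassis"; 20 used here
ops_per_chassis = ops_per_year / 12 / chassis   # $/month

hardware_per_chassis = 1700   # amortized 2U cost quoted earlier
fully_loaded = ops_per_chassis + hardware_per_chassis
print(ops_per_chassis, fully_loaded)  # -> 500.0 2200.0
```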

This is with operational support far beyond what Amazon provides, as it includes time from me and other members of staff who know the specifics of our applications, handle backups, handle configuration and deployment, etc.

I've so far not worked on anything where I could justify the cost of EC2 for production use for base load, and I don't think that'll change anytime soon...


If disk performance is important you can also take a look at the High IO instances, which give you 2x 1TB SSDs, 60GB of RAM and 35 ECUs across 16 virtual cores. At 24x7 for 3 years you end up with ~$656/mo per instance, plus whatever you would need for bandwidth. By the time you fill up an entire rack it still ends up being slightly more expensive than your amortized 2U cost, but you also don't need to scale it up in 2U increments.
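The ~$656/mo figure presumably comes from spreading a reserved instance's upfront fee over the term and adding 24x7 hourly usage; a minimal sketch of that amortization (the upfront and hourly figures below are hypothetical placeholders, not actual AWS pricing):

```python
def amortized_monthly(upfront, hourly, term_months=36, hours_per_month=730):
    """Spread the reserved-instance upfront fee over the term
    and add the cost of running 24x7 at the reserved hourly rate."""
    return upfront / term_months + hourly * hours_per_month

# Hypothetical pricing for illustration only.
print(round(amortized_monthly(10_000, 0.50), 2))  # -> 642.78
```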


Completely agree: building your own is cheaper, gives more control, etc. What's more, you do NOT lose the ability to use the cloud for added reliability: it is pretty cheap to have an EC2 instance standing by that you fail over to.

If you are very database heavy, and you want to be able to replicate that to the cloud in real time it does get expensive, but if you can tolerate a little downtime while the database gets synced up and the instances spin up that's cheap too.


We have SQL Server 2008 boxes with 128GB+ of RAM; we're able to run all of our production databases right out of memory. This would be cost-prohibitive in a virtualized environment such as AWS, Linode, etc.



