Hacker News

AWS autoscaling does not take your application logic into account, which means that aggressive downscaling will, at worst, cause your applications to fail.

I'll give a specific example with Apache Spark: AWS provides a managed cluster via EMR. You can configure your task nodes (i.e. the instances that run the bulk of the jobs you submit to Spark) to be autoscaled. If these jobs fetch data from managed databases, you might have RDS configured with autoscaling read replicas to support higher query volume.

What I've frequently seen happen: tasks fail because the task node instances were downscaled near the end of a job (they were no longer consuming enough resources to stay up, even though the tasks themselves hadn't finished), or because database connections were suddenly cut off when the RDS read replicas stopped serving enough traffic to stay up.

The workaround is to have a fixed number of instances up, and pay the costs you were trying to avoid in the first place.

Or you could have an autoscaling mechanism that is aware of your application state, which is what k8s enables.
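One concrete mechanism for this in Kubernetes (the pod name and image below are illustrative): the Cluster Autoscaler respects the `cluster-autoscaler.kubernetes.io/safe-to-evict` annotation, so a pod running a batch job can opt out of scale-down evictions until it finishes:

```
apiVersion: v1
kind: Pod
metadata:
  name: spark-executor            # illustrative name
  annotations:
    # Cluster Autoscaler will not drain a node while this pod runs on it
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
spec:
  containers:
    - name: executor
      image: spark:3.5.0          # illustrative image tag
```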



> since RDS read replicas were no longer transmitting enough data to stay up.

As an infra guy, I've seen similar things happen multiple times. This would be a non-problem if developers handled the lost-connection case: reconnection with retries and so on.

But most developers just don’t bother.

So we're often building elastic infrastructure that is consumed by people who write code as if it were still the late '90s, with single-instance databases expected to always be available.
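The reconnect-with-retries pattern being asked for is a few lines of code. A minimal sketch (the helper name and parameters are made up for illustration), wrapping any query function in exponential backoff:

```python
import time

def with_retries(fn, attempts=5, base_delay=0.5, retryable=(ConnectionError,)):
    """Call fn, retrying with exponential backoff on retryable errors
    (e.g. a read replica being scaled away mid-query)."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries; surface the original error
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
```

In practice you would reopen the connection inside `fn` rather than reuse the dead one, and most database drivers or ORMs have an equivalent knob built in.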


ASGs can do both of those things. It's a 5% use case, so it takes a little more work, but not much.
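Presumably the "little more work" refers to instance scale-in protection, which ASGs do support: an instance flagged as protected won't be terminated during scale-in. A hedged sketch with boto3 (the function and ASG names are made up for illustration):

```python
def protect_from_scale_in(instance_ids, asg_name, client=None):
    """Mark instances as protected so the ASG won't terminate them
    during scale-in (e.g. while a Spark job is still running on them)."""
    if client is None:
        import boto3  # imported lazily so a stub client can be injected
        client = boto3.client("autoscaling")
    client.set_instance_protection(
        InstanceIds=instance_ids,
        AutoScalingGroupName=asg_name,
        ProtectedFromScaleIn=True,
    )

# A job would set this before starting work and clear it again
# (ProtectedFromScaleIn=False) once it finishes.
```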


Can you elaborate on that "little more work", given that resizing on demand isn't sufficient for this use-case, and predictive scaling is also out of the question?



