On price, you could definitely do better than $8/million steps, $0.25/GB written, and $0.15/GB-month for state storage, but if you were designing something generic on S3/DynamoDB (state + status) to support all use cases at all scales, you'd probably end up spending something in the same order of magnitude.
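Back-of-envelope with those quoted prices (the workload numbers here are made up purely for illustration):

    steps      = 10_000_000   # e.g. 1M executions x 10 steps each
    gb_written = 50           # checkpoint payloads written
    gb_months  = 20           # retained state
    cost = (steps / 1e6) * 8 + gb_written * 0.25 + gb_months * 0.15
    print(f"${cost:.2f}")     # -> $95.50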
But if you rolled your own, you'd also have to implement it all yourself. This is a relatively simple checkpointing workflow orchestrator across standard Lambda functions, but with some really nice touches in the Lambda API itself.
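For a sense of what "implement it all yourself" means, here's a rough sketch of the DIY DynamoDB version (table name and schema are hypothetical, and results would need to be DynamoDB-serializable):

    import boto3

    table = boto3.resource("dynamodb").Table("workflow-state")

    def run_step(workflow_id, step_name, fn, payload):
        # Replay-safe: if the step already checkpointed, reuse its result.
        item = table.get_item(
            Key={"pk": workflow_id, "sk": step_name}).get("Item")
        if item is not None:
            return item["result"]
        result = fn(payload)
        table.put_item(
            Item={"pk": workflow_id, "sk": step_name, "result": result})
        return result

And that still leaves status tracking, retries, timeouts, and concurrent-writer races on your plate.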
What's only a footnote in the announcement is that this is limited to us-east-2 (Ohio) and TypeScript/JS + Python at the moment. Basically a public preview release. I look forward to seeing where they take this.
Sounds like a step in the right direction. I would like to see an all-up dashboard of everything in the shared state, and good control over upgrades (maybe a mode where in-progress functions can complete on version 1, even if new functions are getting kicked off on version 2, etc.)
Still $0.20/million requests, but all Lambda functions run on provisioned EC2 instances (taking into account savings plans & reservations) with a 15% premium.
You can dial the vCPU:RAM ratio up or down, so if you have a lot of functions that, for example, just wait on IO, you can use a very high ratio to run many more functions in parallel on a single instance.
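Toy numbers to illustrate the packing argument (instance size and per-function memory are hypothetical):

    vcpus, ram_gb = 8, 64            # one provisioned instance
    fn_mem_gb = 0.25                 # 256 MB per mostly-idle, IO-bound function
    print(int(ram_gb / fn_mem_gb))   # -> 256 functions parked on just 8 vCPUs

If the functions mostly block on IO, RAM rather than CPU is what limits how many you can pack onto the box.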
This looks like an interesting middle-cost option for services at scale with a more predictable load, or a usage pattern better suited to higher (or lower, I guess) ratios. You don't have to sacrifice any effort already put into Lambda, and you can still use it with other AWS services (Cognito auth, IoT Events, the simplified Kinesis/DynamoDB Streams client that doesn't require Java, etc.).
Running your own AuthN/AuthZ with off-the-shelf OSS is very straightforward (for a SaaS product, at least) and isn't any more burdensome from a security perspective than what you're already doing for your core service.
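As a sketch of how little code the happy path takes, here's token verification with the PyJWT library, assuming your IdP publishes keys at a JWKS URL (the URL and audience are placeholders):

    import jwt
    from jwt import PyJWKClient

    jwks = PyJWKClient("https://auth.example.com/.well-known/jwks.json")

    def authenticate(token: str) -> dict:
        # Fetch (and cache) the signing key matching the token's key ID.
        key = jwks.get_signing_key_from_jwt(token)
        return jwt.decode(token, key.key,
                          algorithms=["RS256"], audience="my-api")

Everything beyond this (key rotation, revocation, session policy) is the same operational hygiene you already owe your core service.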
One feature failing like this should probably log the error and fail closed. It shouldn't take down everything else in your big proxy that sits in front of your entire business.
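One way to get that containment, sketched in Python (names are made up):

    import logging

    def check_feature(check, request):
        # Run one auth/feature check; never let it take the proxy down.
        try:
            return check(request)
        except Exception:
            logging.exception("check failed; failing closed for this request")
            return False   # deny this one request, keep serving the rest

The point is the blast radius: a broken check denies its own requests instead of crashing the process handling everyone else's.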
While there appears to be a us-east-1 SPoF for Route 53 updates (as shown recently), the actual health checks themselves occur in up to 8 different regions [1], with 18%[2] agreement of failure required to initiate a failover (rough config sketch after the footnote below).
AWS has very good isolation between regions and, while it relies on us-east-1 for control plane updates to Route 53, health checks and failovers are data plane operations[3] and aren't affected by a us-east-1 outage.
Relying on a single provider always seems like a risk, but the increased complexity of designing systems for multi-cloud will usually result in an increased risk of failure, not a decrease.
1. us-east-1, us-west-1, us-west-2, eu-west-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, and sa-east-1; health checks default to using all of them.
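For reference, here's roughly what that looks like with boto3; creating the check is a control-plane call, but the checkers themselves then run from those regions independently of us-east-1 (domain and paths are placeholders):

    import boto3

    r53 = boto3.client("route53")
    r53.create_health_check(
        CallerReference="failover-demo-1",   # any unique string
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "api.example.com",
            "Port": 443,
            "ResourcePath": "/health",
            "FailureThreshold": 3,   # consecutive failures per checker
            # "Regions": [...]       # optional; defaults to all checker regions
        },
    )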
Just don't try putting something convenient in between; at least that's what my adventures in TB4 taught me. DisplayPort from a TB port works fine, even when the DP goes to a multi-screen daisy chain and the TB does PD to the laptop on the side, but try multi-screen through a hub and all bets are off. I think it's the hubs overheating, and I've seen that even on just 2x FHD (OK, that one was on a cheap non-TB hub, but I also got two certified TB4 hubs to fail serving 2x "2.5K" (2560x1600)). And those hubs are expensive; I believe they all run the same Intel chipset.
That would require monitors supporting daisy chaining in the first place, and I never had any problems with them anyway. Likely related to not using a full-on hub but a minimalistic dongle with a DP outlet, a PD inlet, and a USB outlet (which then goes to a USB switch managing access to simple hubs serving all those low-bandwidth peripherals like the mouse).
The failing hubs were driving either cheap office displays connected through HDMI or high-resolution mobile displays connected through USB-C. Few of those support anything like daisy chaining, or even simple PD passthrough so you can use the same port to drive the display and power the laptop, and I absolutely do want dual mobile displays. Even if only so that I can carry them screen-to-screen for mutual protection of the glass.