super interesting pseudo-IPC channel, and at least mildly concerning from a security perspective. saw it on your site first and am shocked there is not a single other comment here yet
was hoping to find at least one “cmon this is easy to avoid with X thing in the kernel/OS” info nugget dropped
I'm not sure how much of a security concern this one is, at least for the kinds of things I care about with respect to containers.
I want my containers to be able to run work without other containers spying on them (already hard thanks to timing attacks).
This IPC channel only works if both containers are cooperating. I don't think you can use it to spy on my container if my container isn't actively participating.
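For anyone who hasn't seen one, here's a toy of what "cooperating" means in practice. This is not the article's mechanism, just a generic CPU-contention covert channel with two goroutines standing in for two containers; the slot length, the GOMAXPROCS(1) trick, and the lack of synchronization/calibration are all simplifying assumptions:

```go
// Toy cooperative covert channel (NOT the article's mechanism): the sender
// encodes bits by modulating CPU contention, and the receiver recovers them
// by counting how much work it can do per time slot. Both sides must agree
// on slot timing -- which is the point above: a passive victim leaks nothing.
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

const slot = 50 * time.Millisecond // agreed-upon bit slot; an assumption

// countWork spins for one slot and returns how many iterations fit.
// Under contention from the sender the count drops -- that drop is the bit.
func countWork() int64 {
	var n int64
	deadline := time.Now().Add(slot)
	for time.Now().Before(deadline) {
		n++
	}
	return n
}

func main() {
	runtime.GOMAXPROCS(1) // force sender and receiver to share one CPU

	msg := []int{1, 0, 1, 1, 0}
	var wg sync.WaitGroup
	wg.Add(1)

	// Sender: spin through the slot for a 1, sleep through it for a 0.
	go func() {
		defer wg.Done()
		for _, b := range msg {
			deadline := time.Now().Add(slot)
			if b == 1 {
				for time.Now().Before(deadline) {
				}
			} else {
				time.Sleep(slot)
			}
		}
	}()

	// Receiver: contended slots show roughly half the iteration count.
	for i := range msg {
		fmt.Printf("slot %d: %d iterations\n", i, countWork())
	}
	wg.Wait()
}
```

A real version would calibrate an uncontended baseline and align slot boundaries, but the key property is visible even in the toy: if the "receiver" never runs its probe, nothing is exfiltrated.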
Agreed that this is not a critical problem, and the cooperative side channel can be useful in otherwise uncooperative environments.
The article does mention wanting to coordinate across multiple identical processes running on the same node in a wide variety of environments as the motivator.
two well-balanced takes that make me think I should embrace the fun parts of this design and worry less about the risks! it's a pretty cool idea and impressive that it works
really impressed with this. the author discusses on Reddit how they built it with Unity, which I think is super cool and a really good use case for a game engine
You are correct, C# straddles that line better than any other language right now IMO, thanks to the APIs you linked. There was a good write-up on HN a few weeks ago comparing Rust and C#'s Span, but the link escapes me
Well, five is about the number of salespeople I remember joining an absolutely awful call with them a few years ago, so that's my guess (edit: lol at getting downvoted for relaying an actual experience. been using Vagrant since the original hobo logo circa 2012-2013 and have always been a HashiCorp fan, get off your high horse)
Cool post. I wonder if it would have helped to take a look at MIT's distributed systems course (6.824), which is freely available on the web and YouTube - one of the labs is exactly this: a Raft implementation in Go.
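For anyone curious what that lab involves, here's a rough sketch of the leader-election core, following the field names from Figure 2 of the Raft paper. This is my own simplified sketch, not the lab's actual skeleton (which wires RPCs through its own test harness), and the log up-to-date check is omitted:

```go
// Minimal Raft leader-election state and vote handler, per the paper.
package raft

import "sync"

type Raft struct {
	mu          sync.Mutex
	currentTerm int
	votedFor    int    // candidateId voted for this term; -1 = none
	state       string // "follower", "candidate", or "leader"
}

type RequestVoteArgs struct {
	Term        int
	CandidateId int
}

type RequestVoteReply struct {
	Term        int
	VoteGranted bool
}

// RequestVote grants a vote if the candidate's term is current and we
// haven't already voted for someone else this term. (The paper's log
// up-to-date restriction is omitted from this sketch.)
func (rf *Raft) RequestVote(args *RequestVoteArgs, reply *RequestVoteReply) {
	rf.mu.Lock()
	defer rf.mu.Unlock()

	if args.Term < rf.currentTerm { // stale candidate: reject
		reply.Term, reply.VoteGranted = rf.currentTerm, false
		return
	}
	if args.Term > rf.currentTerm { // newer term: revert to follower
		rf.currentTerm = args.Term
		rf.votedFor = -1
		rf.state = "follower"
	}
	if rf.votedFor == -1 || rf.votedFor == args.CandidateId {
		rf.votedFor = args.CandidateId
		reply.VoteGranted = true
	}
	reply.Term = rf.currentTerm
}
```

Most of the lab's difficulty is in what this sketch leaves out: election timers, log replication, and surviving the test suite's injected partitions.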
They have done this before for dual-socket Xeons. Historical precedent doesn't necessarily hold here, but it has in fact been done on the "cheese graters" previously
You and I both, though they are a blessing and a curse. Profiles used to be my biggest issue with local and remote cred resolution, and then we layered AWS SSO on top of profile management, which doubled my problems. It's all technically more secure and ultimately cleaner to work with once you know what to do, but figuring out how to transparently pass role-based IAM creds through to the AWS SDK inside a running Fargate container was a lesson in pain (not to mention designing that to also work locally). Lambda can fall through to an SSO/managed profile env fine if running w/o the container wrapper, and the SAM plugins are pretty magic for making it work if you use their container, but otherwise I have been strongly avoiding custom OCI containers w/o SAM, because the dev SDLC ends up requiring all kinds of env tweaking and cred directory mounts.
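FWIW, the thing that eventually made this bearable for me is leaning on the SDK's default provider chain, which already falls through env vars, shared config/SSO profiles, the ECS/Fargate container credentials endpoint, and IMDS. A minimal sketch with aws-sdk-go-v2; "dev-sso" is a hypothetical profile name, and the env-var check is just one way to branch between the two worlds:

```go
// In Fargate, the agent sets AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and
// the default chain resolves the task role from the container endpoint
// with no code changes; locally we opt into an SSO profile instead.
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/aws/aws-sdk-go-v2/config"
)

func main() {
	ctx := context.Background()

	opts := []func(*config.LoadOptions) error{}
	if os.Getenv("AWS_CONTAINER_CREDENTIALS_RELATIVE_URI") == "" {
		// Not in Fargate: use a local SSO profile ("dev-sso" is made up).
		opts = append(opts, config.WithSharedConfigProfile("dev-sso"))
	}

	cfg, err := config.LoadDefaultConfig(ctx, opts...)
	if err != nil {
		log.Fatal(err)
	}

	// Force resolution so misconfiguration fails loudly at startup.
	creds, err := cfg.Credentials.Retrieve(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("resolved credentials via:", creds.Source)
}
```

Doesn't solve the SAM-less local story entirely (you still need `aws sso login` and the shared config mounted or present), but at least the application code stays identical in both environments.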
If you know and understand S3 pretty well, and you purely need to generate, store, and read materialized static views, I highly recommend S3 for this use case. I say this as someone who really likes working with DDB daily and understands the tradeoffs with Dynamo. You can always layer Athena or (simpler) S3 Select on top later if a SQL query model turns out to be a better fit than KV object lookups. Depending on your use case, S3 is loosely the fire-and-forget KV DB you're describing, IMO.
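To make the pattern concrete, a bare-bones sketch with aws-sdk-go-v2, where the bucket and key names are made up: PutObject is your write, GetObject is your point lookup, and the key layout becomes your "partition scheme" for anything you layer on later.

```go
// S3 as a fire-and-forget KV store for materialized views.
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// Hypothetical bucket and key; the key doubles as the lookup key.
	bucket, key := "my-views-bucket", "views/user/1234.json"

	// "Put" = write the materialized view under its key.
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   bytes.NewReader([]byte(`{"renders":42}`)),
	})
	if err != nil {
		log.Fatal(err)
	}

	// "Get" = point lookup by key, no query planner involved.
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
	body, _ := io.ReadAll(out.Body)
	fmt.Println(string(body))
}
```

The tradeoff versus DDB is the usual one: you give up conditional writes, TTLs, and single-digit-millisecond reads, and in exchange get dirt-cheap storage and zero capacity planning.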