> There are dozens of projects like this emerging right now. They all share the same challenge: establishing credibility.
Care to elaborate on the kind of "credibility" to be established here? All these bazillion sandboxing tools use the same underlying frameworks for isolation (e.g., eBPF, Landlock, VMs, cgroups, namespaces) that are already credible.
The problem is that those underlying frameworks can very easily be misconfigured. I need to know that the higher-level sandboxing tools were written by people with a deep understanding of the primitives they are building on, and a very robust approach to testing that their assumptions hold and that they don't have any bugs in their layer that affect the security of the overall system.
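To make that concrete, here's a toy sketch (Python 3.12+ on Linux, assuming unprivileged user namespaces are enabled; not taken from any of these tools) of how "using the credible primitives" can still leave the barn door open:

```python
# Namespaces alone are not a sandbox. This drops into fresh user +
# network namespaces, which kills network access -- but leaves the
# filesystem untouched, so the "sandboxed" process can still trash $HOME.
# A real tool also needs mount namespaces, pivot_root, seccomp, etc.,
# and that layering is exactly where subtle bugs creep in.
import os
import pathlib
import socket

os.unshare(os.CLONE_NEWUSER | os.CLONE_NEWNET)

try:
    # No usable interfaces exist in the new network namespace, so this fails.
    socket.create_connection(("example.com", 443), timeout=2)
except OSError as exc:
    print(f"network blocked, as expected: {exc}")

# ...but file permission checks still use the original uid, so writes go through.
scratch = pathlib.Path.home() / "still-writable.txt"
scratch.write_text("the 'sandbox' wrote this\n")
print(f"wrote {scratch} despite the namespaces")
```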
Most people are building on top of Apple's sandbox-exec, which is itself almost entirely undocumented!
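For the curious, the whole interface is roughly this (a hedged sketch; the SBPL operation names like network* and file-write* are folklore reconstructed from Apple's own .sb files, which is exactly the problem -- there's no spec to check them against):

```python
# Sketch of driving macOS's deprecated, undocumented sandbox-exec.
# The profile allows everything except outbound network and writes
# under /Users -- a common shape for "let it build, don't let it
# exfiltrate" policies. Paths below are hypothetical.
import subprocess

PROFILE = r"""
(version 1)
(allow default)
(deny network*)
(deny file-write* (subpath "/Users"))
"""

# This touch should be blocked by the file-write* deny rule.
subprocess.run(
    ["sandbox-exec", "-p", PROFILE, "touch", "/Users/me/should-fail.txt"],
    check=False,
)
```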
I'm sure 100% of them are vibe coded. We were all wondering where this new era of software was, and now it's here: a bunch of nominally different tools that all claim to do the same thing.
I'm thinking the LocalLLM crowd should turn their LLMs loose on trying to demolish these sandboxes.
And this is exactly why we see noise on HN/Reddit when a supply-chain attack breaks out, yet no actual breaches get reported: enterprises are protected by internal mirroring.
I'm assuming you are talking about agents like claude-code and open-code, which rely on LLMs (large language models).
The reason they don't detect these risks is primarily that the risks are emergent and happen overnight (literally, in the case of axios, which was compromised at night). Axios has a good reputation. It is by definition impossible for a pre-trained LLM to keep up with time-sensitive changes.
I mean that agents can scan the code to find anything "suspicious". After all, security vendors that claim to "detect" malware in packages are relying on LLMs for detection.
An LLM is not a suitable substitute for purpose-built SAST software in my opinion. In my experience, they are great at looking at logs, error messages, sifting through test output, and that sort of thing. But I don't think they're going to be too reliable at detecting malware via static analysis. They just aren't built for that.
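As a strawman of what "purpose-built" means here: even a dumb deterministic pass like the sketch below (toy code, hypothetical package path) gives you the same answer every run, which no chat-shaped model guarantees. Real SAST layers AST parsing, taint tracking, and data-flow analysis on top of this.

```python
# Toy heuristic scan for the two things most recent npm supply-chain
# payloads share: an install-time hook and obfuscated/encoded code.
# Deliberately simplistic -- the point is determinism, not coverage.
import json
import pathlib
import re

SUSPECT_HOOKS = {"preinstall", "install", "postinstall"}
OBFUSCATION = re.compile(r"eval\s*\(|Function\s*\(|fromCharCode|atob\s*\(")

def audit_package(pkg_dir: str) -> list[str]:
    findings = []
    root = pathlib.Path(pkg_dir)
    manifest = json.loads((root / "package.json").read_text())
    for hook, cmd in manifest.get("scripts", {}).items():
        if hook in SUSPECT_HOOKS:
            findings.append(f"install-time hook '{hook}': {cmd}")
    for js in root.rglob("*.js"):
        if OBFUSCATION.search(js.read_text(errors="ignore")):
            findings.append(f"obfuscation marker in {js}")
    return findings

# Hypothetical path; point it at any unpacked package.
print("\n".join(audit_package("node_modules/some-package")) or "clean")
```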
I know, right? The day I initially thought about posting this, there was another one called `yolo-box`. (That attempt--my very first post--got me instantly shadow-banned due to being on a VPN, which led to an unexpected conversation with @dang, which led to some improvements, which led to it being a week later.)
I think it's the convergence of two things. First, the agents themselves make it easier to get exactly what you want; and second, the OEM solutions to these things really, really aren't good enough. CC Cloud and Codex are sort of like this, except they're opaque and locked down, and they work for you or they don't.
It reminds me a fair bit of 3D printer modding, but with higher stakes.
To save you a click, the headline: a small number of samples can poison LLMs of any size.
The way I think of it is, coding agents are power tools. They can be incredibly useful, but can also wreak a lot of havoc. Anthropic (et al) is marketing them to beginners, and inevitably someone is going to lose their fingers.
Docker isn't virtualization; it's not that hard to break out into the underlying system if you really want to. But as for VMs--they are enough! They're also a lot of boilerplate to set up, manage, and interact with. yolo-cage is that boilerplate.
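To illustrate the boilerplate (a sketch of the genre, not yolo-cage's actual internals; the image name is hypothetical):

```python
# Boot a throwaway VM: -snapshot discards all disk writes on exit,
# and the user-mode NIC forwards host port 2222 to the guest's sshd.
# Multiply this by image builds, file sync, and tool provisioning
# inside the guest to see why people keep writing wrappers.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-m", "4G", "-smp", "2",
    "-nographic",
    "-snapshot",
    "-drive", "file=agent-dev.qcow2,if=virtio",  # hypothetical base image
    "-nic", "user,hostfwd=tcp::2222-:22",        # ssh -p 2222 user@localhost
])
```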
On that note, yolo-cage is pretty heavyweight. There are much lighter tools if your main concern is "don't nuke my laptop." yolo-box was trending on HN last week: https://news.ycombinator.com/item?id=46592344
My experience is that neither has a good UX for what I usually try to do with coding agents. The main problem I see is setup/teardown of the boxes and managing tools inside them.
It probably is. Some of this stuff will hang around because power users want control. Some of it will evolve into more sophisticated solutions that get turned into products and become easier to acquihire than to build in house. A lot of it will become obsolete when the OEMs crib the concept. But IMO all of those are acceptable outcomes if what you really want is the thing itself.
The solvers are a problem, but they give themselves away when they incorrectly fake devices or run out of context. I run a bot-detection SaaS and we've had some success blocking them. Their advertised solve times are also wildly inaccurate: they take ages to return a successful token, if at all. The number of companies providing bot mitigation is also growing rapidly, making it difficult for the solvers to stay on top of the reverse engineering.
That's a good question. I haven't checked the stats to see how often it happens, but I'll make a note to return with some info. We're dealing with the entire internet, not just YC companies, and many scrapers/solvers will send a user agent that doesn't quite match the JS capabilities you'd expect of that browser version. Some solving companies let you supply your own user agent, which causes inconsistencies because they're not changing their stack to match the user agent you supply. Under the hood they're running whatever version of headless Chrome they're currently pinned to.
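A sketch of the kind of inconsistency check this enables (toy code; real stacks cross-check many more signals, like JS feature probes and TLS fingerprints):

```python
# The solver passes through the customer's User-Agent, but the pinned
# headless Chrome underneath keeps emitting its own client hints.
# When the two major versions disagree, something is lying.
import re

def ua_major(user_agent: str) -> str | None:
    m = re.search(r"Chrome/(\d+)", user_agent)
    return m.group(1) if m else None

def hint_major(sec_ch_ua: str) -> str | None:
    m = re.search(r'"(?:Google Chrome|Chromium)";v="(\d+)"', sec_ch_ua)
    return m.group(1) if m else None

def looks_spoofed(headers: dict[str, str]) -> bool:
    ua = ua_major(headers.get("User-Agent", ""))
    hint = hint_major(headers.get("Sec-CH-UA", ""))
    return ua is not None and hint is not None and ua != hint

print(looks_spoofed({
    "User-Agent": "Mozilla/5.0 ... Chrome/131.0.0.0 Safari/537.36",
    "Sec-CH-UA": '"Chromium";v="119", "Not?A_Brand";v="24"',
}))  # -> True: the claimed browser and the real engine disagree
```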