Kanjun here, cofounder of Imbue (we put out this blog post, and I'm quite surprised it's on the front page of HN!)
The agent orchestration library (mngr) is open source, so we aren't selling anything. There is literally no way for us to make money on it.
We shipped it this way instead of trying to monetize because we believe open agents must win over closed / verticalized platforms in order for humans to live freely in our AI future. We have plenty of money and runway as a company, and this feels much more important to work on.
Feel free to come back in 10 years when your brain's been rotted by the equivalent of agent ragebait and the digital infrastructure of your life is trapped in the AI lab agent oligopoly, and we can talk.
Hey, Kanjun from Imbue here! This is exactly why we built Sculptor (https://imbue.com/sculptor), a desktop UI for Claude Code.
Each agent has its own isolated container. With Pairing Mode, you can sync the agent's code and git state directly into your local Cursor (or any IDE) so you can instantly validate its work. The sync is bidirectional, so your local changes flow back to the agent in real time.
Happy to answer any questions - I think you'll really like the tight feedback loop :)
Founder of Imbue / Sculptor here — let me know if you give Sculptor a shot! We like running Claude agents in Sculptor in YOLO mode.
re: pricing — our intent is to open source a lot of what's available today, actually. We really believe open agents are critical for humans to be free in an AI future.
I resonate on the exhaustion — actually, the context-switching fatigue is why we built Sculptor for ourselves (https://imbue.com/sculptor). We usually see devs running 4-6 agents in parallel using Sculptor today. Personally I think much of the fatigue comes from:
1) friction in spawning agents
2) friction in reviewing agent changes
3) context management annoyance when e.g. you start debugging part of the agent's work but then have to reload context to continue the original task
It's still super early, but we've felt a lot less fatigued using Sculptor so far. To make it easier to spawn agents without worrying, we run agents in containers so they can run in YOLO mode and don't interfere with each other. To make it easy to review changes, we made "Pairing Mode", which lets you instantly sync any agent's work from the container into your local IDE to test it, then switch to another.
For context management, we just shipped the ability to fork agents from any point in the conversation history, so you can reuse an agent that you loaded with high-quality context and fork off to debug an agent's changes or try all the options it presented. It also lets you keep a few explorations going and check in when you have time.
Anyway, sorry, shilling the product a bit much but I just wanted to say that we've seen people successfully use more than 2 agents without feeling exhausted!
Imbue | Sr. Product Engineer | San Francisco (ONSITE) | Full-time
Company: Imbue builds AI systems that reason and code, enabling AI agents to accomplish larger goals and safely work in the real world. We train our own foundation models optimized for reasoning and prototype agents on top of these models. By using these agents extensively, we gain insights into improving both the capabilities of the underlying models and the interaction design for agents. We recently launched our first product, Sculptor: https://imbue.com/sculptor/
We aim to rekindle the dream of the personal computer, where computers become truly intelligent tools that empower us, giving us freedom, dignity, and agency to pursue the things we love.
Great question! Agents in Sculptor run in containers vs. locally on your machine, so they can all execute code simultaneously (and won't destroy your machine).
Ultimately, our roadmaps are pretty different — we're focused on ways to help you easily verify agent code, so that over time you can trust it more and work at a higher level.
Towards this, today we have a beta feature, Suggestions, that catches issues, bugs, and moments when Claude lies to you, as you're working. That'll get built out a lot over the next few months.
So happy to hear this! We'd love to hear what you think — feel free to ping me on X. We're also very active on Discord: https://discord.com/invite/sBAVvHPUTE