I'm curious about the current AI startup landscape and how many companies are essentially just wrapping foundation model APIs (OpenAI, Anthropic, etc.) with a UI layer versus doing more substantial technical work.
Some questions I'm interested in:
- How can you tell if a company is primarily using foundation model APIs?
- What percentage of AI startups fall into this category?
- Are there examples of companies doing this particularly well or poorly?
- What constitutes legitimate value-add on top of foundation models?
But we're seeing companies like Cursor that pay a lot of attention to how the model interacts with everything. They're not just prompting the AI: they index files, search the codebase, and mimic existing styles. You can @ a specific file to use it as a reference. The Composer works autonomously, extracting commands from the AI's output to run, or extracting only the code that's needed, and it double-checks that what it writes is correct. There's a whole system in there; it's more like a modern car with a proper driveshaft, gears, and pedals. Copilot still feels like a box on four wheels bolted to an engine.
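To make the wrapper-versus-system distinction concrete, here's a minimal sketch (all names like `CodebaseIndex` and `build_prompt` are illustrative, not Cursor's actual internals): a thin wrapper forwards the user's text straight to the model, while a Cursor-style tool builds context first by combining @-referenced files with retrieved ones.

```python
# Hypothetical sketch: thin wrapper vs. context-building system.
# Nothing here is Cursor's real API; it only illustrates the architecture.

def thin_wrapper_prompt(user_request: str) -> str:
    """A bare API wrapper: the prompt IS the user's request, verbatim."""
    return user_request

class CodebaseIndex:
    """Toy index: maps filenames to contents, retrieves by keyword overlap."""

    def __init__(self, files: dict[str, str]):
        self.files = files

    def search(self, query: str, top_k: int = 2) -> list[str]:
        # Rank files by how many query words appear in their contents.
        words = set(query.lower().split())
        ranked = sorted(
            self.files,
            key=lambda name: -len(words & set(self.files[name].lower().split())),
        )
        return ranked[:top_k]

def build_prompt(index: CodebaseIndex, user_request: str,
                 referenced: list[str] = ()) -> str:
    """Cursor-style: @-referenced files always included, plus retrieved context."""
    names = list(referenced) + [
        n for n in index.search(user_request) if n not in referenced
    ]
    context = "\n\n".join(f"# {n}\n{index.files[n]}" for n in names)
    return f"Relevant files:\n{context}\n\nTask: {user_request}"
```

The point is that the model call itself is identical in both cases; the value-add lives entirely in what gets assembled before the call (and what verifies the output after it).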
Perplexity started as a wrapper, and arguably it still is. But they've used AI more effectively than Google at figuring out what a person is actually trying to search for, and suggesting those things.
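That "figure out what the person actually wants" step can be sketched as LLM-driven query rewriting: before searching, ask the model to expand a terse query into explicit search intents. This is a hypothetical illustration, not Perplexity's actual pipeline; `build_rewrite_prompt` and `parse_rewrites` are names invented here.

```python
# Hypothetical sketch of query understanding via an LLM rewrite step.
# The LLM call itself is omitted; only the prompt and parsing are shown.

REWRITE_TEMPLATE = (
    "The user typed: {query!r}\n"
    "List the distinct questions they are most likely trying to answer, "
    "one per line, phrased as full search queries."
)

def build_rewrite_prompt(query: str) -> str:
    """Wrap the raw query in an intent-expansion instruction for the model."""
    return REWRITE_TEMPLATE.format(query=query)

def parse_rewrites(llm_output: str) -> list[str]:
    """Split the model's line-per-query answer into clean search strings."""
    return [
        line.strip("- ").strip()
        for line in llm_output.splitlines()
        if line.strip()
    ]
```

Each rewritten query would then be run through a conventional search backend, with the results fed back to the model for synthesis, which is where the product stops being a pure pass-through.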
I think most of these companies start with minimal value-add and slowly refine it from there. It's hard to put a percentage on it.