Not to mention that "nothing tastes like it naturally" is false. Plenty of fruits have a jelly-like consistency; they're just not common in the modern Western world. Consider ripe persimmons, caimito, or abiu. Jelly palm and quince are cooked into literal jelly. Further afield you also have aloe leaf and cooked nopal.

Or those living in the Caves of Steel!

Yes, this is one of those game theory traps like the prisoner's dilemma, because it requires coordinated action across a large group of people. Unfortunately, lowest-common-denominator parenting can't handle the problem: the parents don't understand the situation, are addicted to the platforms themselves, and generally lack the necessary skills.

Government regulation is a ham-fisted approach that risks unintended consequences and secondary effects, but it is generally good at breaking these game theory traps because it changes the playing field for everyone. That is fundamentally why we have government at all: to solve coordination problems.


The government can also act as the faceless bad guy that 13-year-olds can get mad at while parents shrug and say “sorry, that’s just the law”.

In other words, it solves the multi-agent coordination problem among parents, which would otherwise require the majority of them to be rational and good (a tall order).

My bet is that it will be good enough to devise the requirements.

They can already brainstorm new features and make roadmaps. If you give them more context about the business strategy and goals, they will make better guesses. If you give them more details about user personas, feedback, and so on, they will prioritize better.

We're still just working our way up the ladder of systematizing that context, building better abstractions, workflows, etc.

If you were to start a new company with an AI assistant and feed it every piece of information (which it structures, summarizes, and synthesizes in a systematic way), then even with finite context it's going to be damn good. I mean, just imagine a system that can continuously read and structure all the data from regular news, market reports, competitor press releases, public user forums, sales call transcripts, and so on. It's the dream of "big data".


If it gets to that point, why is the customer even talking to a software company? Just have the AI build whatever. And if an AI assistant can synthesize every piece of business information, why is there a need for a new company? The end user can just ask it to do whatever.

Maybe yes. It takes time for those structures to "compact" and for systems to realign.

Ah, so the "I haven't needed it so it must be useless" argument.

There is huge value in having vendors standardize and simplify their APIs instead of having agent users fix each one individually.


Possible legit alternative:

Have the agents write code to use APIs? Code-based tool calling has literally become a first-party way to do tool calling.

We have a bunch of code-accessible endpoints and tools, with years of authentication handling and the like built in.

https://www.anthropic.com/engineering/advanced-tool-use#:~:t...

Feels like this obviates the need for MCP if this is becoming common.
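As a concrete sketch, this is roughly the kind of throwaway glue code an agent might generate to hit a vendor's REST API directly instead of going through an MCP tool definition (the vendor URL, endpoint, and token variable are all made up for illustration):

    # Hypothetical agent-generated script: call a REST API directly.
    import os

    import requests

    BASE_URL = "https://api.example-vendor.com/v1"  # illustrative, not a real vendor
    headers = {"Authorization": f"Bearer {os.environ['VENDOR_API_TOKEN']}"}

    # Fetch open tickets -- the same operation an MCP server might expose as a tool.
    resp = requests.get(f"{BASE_URL}/tickets", params={"status": "open"}, headers=headers)
    resp.raise_for_status()
    for ticket in resp.json()["items"]:
        print(ticket["id"], ticket["title"])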


That solution will not work as well when the interfaces have not been standardized in a way that makes them easy to import into a script as a library.

Coding against every subtly different REST API is as annoying with agents as it is for humans. And it is good to force vendors to define which parts of the interface are actually important and clean them up, or to provide higher-level tasks. Why would we ask every client to repeat that work?

There are also plenty of environments where having agents dynamically write and execute scripts is neither prudent nor efficient. Local MCP servers strike a governance balance in that scenario, and remote ones eliminate the need entirely.


It's not particularly hard for current models to wire up an HTTP client based on the docs, and every major company has well-documented APIs showing how to do so, whether with their SDKs or curl.

I don't know that I really agree it's as annoying for agents, since they don't have the concept of annoyance and can trundle along indefinitely just fine.

While I appreciate the standardization, I've often felt MCP is a poor solution to a real problem, one that coincided with a need for good marketing and Anthropic's desire to own mindshare here.

I've written a lot of agents now and when I've used MCP it has only made them more complicated for not an apparent benefit.

MCP's value lies in the social alignment of people agreeing to use it; its technical merits seem dubious to me, while its community merits seem high.

I can accept the latter and use it for that reason, while thinking there were other paths we probably should have chosen that make better use of 35 years of existing standards.


I don’t agree on the first part. What sort of LLM can’t understand a Swagger spec? Why do you think it can’t understand this but can understand MCP?

On runtime problems, yes, maybe we need standardisation.


Well, if everyone were already using Swagger, then yes, it would be a moot point. It seems you do in fact agree that the standardized manifest is important.

Wait, why do you assume any standardisation is required? Just publish the spec, whether Swagger or not.

If everyone had a clear spec with a high signal-to-noise ratio and good documentation that explains in an agent-friendly way how to use all the endpoints, while still being parsimonious with tokens and not polluting the context, then yes, we wouldn't need MCP...

Instructing people how to do that amounts to a standard in any case. Might as well specify the request format and authentication while you're at it.


I don’t get your point. Obviously some spec is needed but why does it have to be MCP?

If I want my API to work with an LLM, I’d create a spec with Swagger. But why do I have to go with MCP? What does it add that didn’t exist in other specs?


You can ask an AI agent that question and get a very comprehensive answer. It would describe things like the benefits of adding a wire protocol, having persistent connections with SSE, not being coupled to HTTP, dynamic discovery and lazy loading, a simplified schema, less context window consumption, etc.
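To make the simplified-schema and discovery points concrete, here is a minimal sketch using the official Python MCP SDK; the FastMCP scaffolding is real, but the tool itself is a made-up stub:

    # Minimal MCP server: the tool schema is derived from the type hints,
    # and clients discover it dynamically at connect time.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("tickets")

    @mcp.tool()
    def search_tickets(query: str, limit: int = 10) -> list[str]:
        """Search open tickets by keyword."""
        # Illustrative stub; a real server would call the vendor's backend here.
        return [f"ticket matching {query!r}"][:limit]

    if __name__ == "__main__":
        mcp.run()  # defaults to the stdio transport

An agent connects, lists the available tools and their schemas, and only pulls in what it needs, which is where the lazy loading and context savings come from.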

So you're basically saying: "nobody is using the standard that we have defined, let's solve this by introducing a new standard". Fair enough.

Yep. And those that did implement the standard did so for a different set of consumers with different needs.

I'm also willing to make an appeal to authority here (or at least competitive markets). If Anthropic was able to get Google and others on board with this thing, it probably does have merit beyond what else is available.


I thought the whole point of AI was that we wouldn't have to do these things anymore. If we're replacing engineering practice with different yet still basically the same engineering practice, then AI doesn't buy us much. If AI lives up to its marketing hype, then we shouldn't need MCP.

Hm. Well maybe you are mistaken and that dichotomy is false.

Then what's the point of AI?

To write code. They still depend on and benefit from abstractions, like humans do. But they are (for now) a different user persona with different needs. It turns out you can get better ROI and yield ecosystem benefits if some abstractions are tailored to them.

You could still use AI to implement the MCP server, just like humans implemented OpenAPI for each other. Is it really surprising that we would need to refactor some architecture to work better with LLMs at this point? Clearly some big orgs have decided it's worth the investment. You may not agree, and that's fine; that happens with every new type of programming thing. But to compare generally against the "marketing hype" is basically just a straw man or nutpicking.


> There is huge value in having vendors standardize and simplifying their APIs

Yes, and it's called OpenAPI.


My product is "API first". Every UI task has an underlying endpoint defined in the OpenAPI spec, so we can generate SDKs in multiple languages. The documentation for each endpoint and request/response property is decent enough. Higher-level patterns are described elsewhere, though.

90% of the endpoints are useless to an AI agent, and within the most important ones only 70% of the fields are relevant. The whole spec would consume a huge fraction of context tokens.

So at a minimum I need a new manifest with a highly pared down index.

I'm not claiming that we're not in this classic XKCD situation, but the point of the cartoon is that that's just how it be... https://xkcd.com/927/

Maybe OpenAPI will be able to subsume MCP and those manifests can be generated from the same spec just like the SDKs themselves.
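As a rough sketch of that pared-down manifest idea (the operation IDs and file names here are hypothetical), you could generate an agent-facing index from the same OpenAPI spec:

    # Pare a full OpenAPI spec down to a small agent-facing manifest.
    import json

    AGENT_RELEVANT = {"listTickets", "createTicket", "searchUsers"}  # hand-curated allowlist

    with open("openapi.json") as f:
        spec = json.load(f)

    manifest = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            if isinstance(op, dict) and op.get("operationId") in AGENT_RELEVANT:
                manifest.append({
                    "operation": op["operationId"],
                    "method": method.upper(),
                    "path": path,
                    "summary": op.get("summary", ""),
                })

    with open("agent_manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)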


The fraction is a lot higher than 2/3 and tool calls are how you give it useful determinism.

Even if each agent has 95% reliability, with just 5 agents in the loop the whole thing is just 77% reliable.
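(Sanity-checking that arithmetic, assuming the five steps succeed or fail independently:

    >>> round(0.95 ** 5, 3)  # probability all five agents succeed
    0.774

so roughly 77%.)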

Well fortunately that's not what actually happens in practice.

Shell scripts written by nearly every product company out there.

There are lots of small and niche projects under the Linux Foundation. What matters for MCP right now is the vendor neutrality.


Are you saying nearly every product company uses MCP? What a stretch

Welcome to the era of complex relationships with the truth. People comparing MCP to k8s is only the beginning.

Truth Has Died

Lemme ask an AI to double check that vibe.

I'd say this thread is both comparing and contrasting them...

Quaint. People 1%, AI 99%.

I meant to say every enterprise product

It doesn't matter, because only a minority of product companies worldwide (enterprise or not) use MCP. I'd bet only a minority uses LLMs in general.

Oh so is that "truth" or "vibes" as the sibling comments are laughing about?

No, it's just another statement with no sources, just like yours :)

Don't worry, there will be algorithms to help you find what you like. And content will still go viral within subcultures.

As always, anticipated (at least in some sense) by Neal Stephenson:

https://www.wired.com/1994/10/spew/


Except the algorithms don’t help me find new things I like. They never have, and I’m starting to suspect that they never will.

What they find - what they’re designed to find - is more of the same. Which is only “more things I like” in a very, very shortsighted sense.


Maybe this is because of scarcity... if existing algos are applied on top of infinitely generated entertainment, then perhaps we'll see something even more addictive than YouTube.

My third time sharing this link in this post because it's just so relevant. A Slate Star Codex classic:

https://slatestarcodex.com/2017/12/28/adderall-risks-much-mo...

