hmdai's Hacker News comments

Genuine question: why can't this be done via an API that the agents call? There are already established ways to call APIs on behalf of a user. It seems to me that the agent is loading a web app just to be able to access its APIs. What am I missing?

Yeah, we could have just standardized around a path for API specs. Maybe .well-known/openapi.yaml.
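The convention suggested above could be sketched as follows. This is a hypothetical illustration, not an existing standard: `spec_url` and the exact well-known path are assumptions.

```python
from urllib.parse import urljoin

# Hypothetical convention: an agent that wants a site's API surface looks
# for a machine-readable spec at a fixed well-known path, instead of
# driving the site's web UI.
WELL_KNOWN_SPEC_PATH = "/.well-known/openapi.yaml"

def spec_url(origin: str) -> str:
    """Return the conventional spec URL for a site origin."""
    return urljoin(origin, WELL_KNOWN_SPEC_PATH)

print(spec_url("https://example.com"))
# → https://example.com/.well-known/openapi.yaml
```

An agent would fetch that URL once, parse the OpenAPI document, and call the described endpoints directly.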

Maybe it's cynical, but the best reason I can come up with is that 'established a common URL for API specs' does not sound nearly as cool on a CV or when talking about the next promotion as 'invented WebMCP'. And for those implementing it on their websites, 'we implemented WebMCP' is again much more 'AI-first' than 'we uploaded our API specs'.


I absolutely love this. Bonus: I can now read cuneiform numbers, if I ever need to.

Suggestion: you could also show the cuneiform time in the URL.

sent at: 𒌋:𒎙𒐛:𒐏𒐗
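The timestamp above can be reproduced with a small sketch. The glyph tables are assembled from Unicode's cuneiform numeric signs and should be treated as illustrative (the function names are my own, not from the linked project):

```python
# Sexagesimal-friendly rendering: each 0..59 value is a tens glyph
# followed by a units glyph, matching the "sent at" example above.
TENS = {0: "", 1: "𒌋", 2: "𒎙", 3: "𒌍", 4: "𒐏", 5: "𒐐"}
UNITS = {0: "", 1: "𒐕", 2: "𒐖", 3: "𒐗", 4: "𒐘", 5: "𒐙",
         6: "𒐚", 7: "𒐛", 8: "𒐜", 9: "𒐝"}

def cuneiform_number(n: int) -> str:
    """Render 0..59 as a cuneiform tens glyph plus units glyph."""
    return TENS[n // 10] + UNITS[n % 10]

def cuneiform_time(h: int, m: int, s: int) -> str:
    return ":".join(cuneiform_number(x) for x in (h, m, s))

print(cuneiform_time(10, 27, 43))  # → 𒌋:𒎙𒐛:𒐏𒐗
```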


Since the introduction of the Model Context Protocol, I've been wondering why the protocol is so complicated to work with. After many wasted hours and a few MCP spec updates, I've decided to write down what I think MCP should have been; I call it the Naive Context Protocol ¯\_(ツ)_/¯.

Maybe this approach is in fact naive (please tell me why!). The "spec" is very minimal at this point (I will expand it based on feedback here), and it probably ignores some use cases (some of them on purpose), but I would like to hear:

1. What does everyone here think MCP/NCP should have been, or should be?

2. What use cases would you like a context protocol to support?


I'm building a client-side encrypted personal management tool for myself, with support for file encryption:
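One building block of such a client-side encrypted tool is deriving the encryption key from the user's passphrase on the client, so the server never sees it. A minimal sketch using Python's standard library, with illustrative parameter choices (this is not the poster's actual implementation):

```python
import hashlib
import os

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Stretch a passphrase into a 256-bit key for a symmetric cipher."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the ciphertext; not secret
key = derive_key("correct horse battery staple", salt)
assert len(key) == 32  # 32 bytes = 256-bit key
```

The derived key would then feed an authenticated cipher (e.g. AES-GCM from a crypto library); only the salt and ciphertext ever leave the client.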


Does anyone know if the search will be available through their API? It seems like a unique offering that not even Google has (to my knowledge, at least).


Inspired by a Paul Graham tweet:

"You could probably make a lot of money simply by investing in companies that a significant percentage of the latest YC batch use. They're the quintessential early adopters."

I wanted to see which services are used on the websites of batch W24 companies. Of course this is not a complete picture, since it doesn't include internal services, server-side services, or services behind authentication.

These numbers are out of 197 total companies in batch W24:

- Octolane is the most popular company in W24 so far, used by 10 companies within the batch.

- 88 use Google Analytics vs. 2 that use Plausible

- Many use Webflow (50) or Framer (42) to build their website

- YouTube (19), Loom (5), and Mux (4) for video
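A survey like this could be run by fetching each company's homepage and checking the HTML for known third-party domains. A hedged sketch; the domain map below is illustrative and not necessarily the methodology behind the numbers above:

```python
# Map service name -> a domain whose presence in the page HTML suggests
# the service is in use. (Illustrative; detection by substring is crude
# and can miss self-hosted or proxied setups.)
SERVICE_DOMAINS = {
    "Google Analytics": "googletagmanager.com",
    "Plausible": "plausible.io",
    "Webflow": "webflow.com",
    "Framer": "framerusercontent.com",
    "YouTube": "youtube.com",
}

def detect_services(html: str) -> set[str]:
    """Return the set of known services referenced in a page's HTML."""
    return {name for name, domain in SERVICE_DOMAINS.items() if domain in html}

sample = '<script async src="https://www.googletagmanager.com/gtag/js"></script>'
print(detect_services(sample))  # → {'Google Analytics'}
```

Running this over all 197 homepages and tallying the resulting sets would yield counts like those listed above.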


Do you have a link to the source of this info?



Try this one: https://uneven-macaw-bef2.hiku.app/app/

It loads the LLM in the browser using WebGPU, so it works offline after the first load; it's also a PWA you can install. It should work on Chrome > 113 on desktop and Chrome > 121 on mobile.


Read more about why WebGPU is required here: https://webllm.mlc.ai/ (that's the project used here).


You're good; sorry, I dropped the /s.


Can you at least provide guidance on how to avoid being wrongly flagged like this?


The underlying model is built for WebGPU, not to mention that WebGPU is what makes running an LLM in the browser feasible at all. Read more here: https://webllm.mlc.ai/

