
Definitely my experience. I watch context like a hawk, whether it's Claude-as-Google-replacement or LLM integrations into systems. Too little and the results are off; too much and the results are off.

Not sure what Anthropic and co. can do about that, but integrations feel like a step in the wrong direction. Whenever I've tried tool use, it was orders of magnitude more expensive than, and generally inferior to, a simple model call with curated context from SerpApi and the like.
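The pattern described above can be sketched roughly as follows: fetch search results yourself, condense them, and make one plain model call instead of letting the model drive tool-use round trips. This is a minimal sketch, not SerpApi's exact response schema; the `title`/`snippet` field names and both function names are illustrative.

```python
"""Sketch of the 'curated context' pattern: pre-fetch search hits
(e.g. via SerpApi) and inline a trimmed digest into one prompt."""


def curate_context(results: list[dict], max_items: int = 5) -> str:
    """Condense raw search hits into a compact, bounded context block."""
    lines = []
    for hit in results[:max_items]:
        title = hit.get("title", "").strip()
        snippet = hit.get("snippet", "").strip()
        lines.append(f"- {title}: {snippet}")
    return "\n".join(lines)


def build_prompt(question: str, results: list[dict]) -> str:
    """One ordinary model call with curated context -- no tool-use loop."""
    return (
        "Answer using only the sources below.\n\n"
        f"Sources:\n{curate_context(results)}\n\n"
        f"Question: {question}"
    )
```

The cost argument falls out of the shape: one request with a bounded context block, versus a tool-use loop where every round trip resends the growing conversation.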



Couldn't agree more. I wish all major model makers would build tools into their proprietary UIs to "summarize contents and start a new conversation with that base". My biggest slowdown when working with LLMs while coding is moving my conversation to a new thread because the context limit is hit (Claude) or the coherent-thought threshold is exceeded (Gemini).
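The summarize-and-restart workflow wished for here is easy to sketch outside the UI, assuming an OpenAI/Anthropic-style list of role-tagged messages. The `summarize` hook is hypothetical and stands in for one cheap model call:

```python
"""Sketch of 'summarize and start a new conversation with that base'.
The summarize callable is a stand-in for a real model call."""

from typing import Callable


def restart_with_summary(
    messages: list[dict],
    summarize: Callable[[str], str],
    system: str = "You are a coding assistant.",
) -> list[dict]:
    """Compress an over-long thread into the seed of a fresh one."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    summary = summarize(transcript)  # in practice: one cheap model call
    return [
        {"role": "system", "content": system},
        {
            "role": "user",
            "content": (
                f"Context carried over from a previous conversation:\n"
                f"{summary}\nContinue from there."
            ),
        },
    ]
```

The new thread starts at two messages regardless of how long the old one grew, which is exactly the reset the comment is asking the UIs to automate.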


I never use any web interfaces. I just hooked up gptel (an Emacs package) to Claude's API and a few others I regularly use, and I keep the entire conversation in a buffer. I can modify it as needed, spawn a fresh one quickly, and so on. There are also features to add files and individual snippets, but I usually manage it all in a single buffer. It's a powerful text editor, so efficient text editing is a given.

I bet there are better / less arcane tools, but I think powerful and fast mechanisms for managing context are key, and for me that's really just powerful text-editing features.
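The buffer-as-conversation idea above boils down to: the whole thread lives in one editable text file, and is split into role-tagged messages only when sent. A minimal sketch, in Python rather than Emacs Lisp; the `User:`/`Assistant:` markers are an illustrative convention, not gptel's actual buffer format:

```python
"""Sketch of the one-editable-buffer workflow: parse a plain-text
thread into API messages. Marker convention here is hypothetical."""


def parse_buffer(text: str) -> list[dict]:
    """Split an editable transcript into role-tagged messages."""
    messages: list[dict] = []
    role = None
    body: list[str] = []
    for line in text.splitlines():
        if line.startswith("User:") or line.startswith("Assistant:"):
            if role is not None:
                messages.append(
                    {"role": role, "content": "\n".join(body).strip()}
                )
            role = "user" if line.startswith("User:") else "assistant"
            body = [line.split(":", 1)[1].lstrip()]
        else:
            body.append(line)
    if role is not None:
        messages.append({"role": role, "content": "\n".join(body).strip()})
    return messages
```

Because the source of truth is plain text, editing history, deleting a dead-end exchange, or forking the thread is just ordinary text editing, which is the whole point of the workflow described.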



