mongrelion's comments | Hacker News

Through my Kagi subscription I get access to quite a few models [1] but I tend to rely on Qwen3 (fast) for quick questions and Qwen3 (reasoning) when I want a more structured approach, for example, when I am researching a topic.

I have tried the same approach with Kimi K2.5 and GLM 5, but I keep going back to Qwen3.

I also have access to Perplexity which is quite decent to be honest, but I prefer to keep everything in Kagi.

1: https://help.kagi.com/kagi/ai/assistant.html#available-llms


InferBench is a great idea (similar to Geekbench, etc.), but as of this writing it has only 83 submissions, which is underwhelming.

> [...] it's much easier to fine-tune a "general" model into performing some very specific custom task (like classifying text, or translation, etc)

Is this fine-tuning process similar to training a model from scratch? That is, does it require extensive resources, or can it realistically be done on a consumer-grade GPU?
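For what it's worth, parameter-efficient methods like LoRA are what make fine-tuning realistic on consumer hardware: instead of updating a full weight matrix, you train two small low-rank factors. A rough sketch of the parameter savings (the matrix size below is just an illustrative figure typical of a ~7B model, not tied to any specific one):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int):
    """Compare trainable parameters for full fine-tuning vs. LoRA
    on a single d_in x d_out weight matrix."""
    full = d_in * d_out           # every weight is trainable
    lora = rank * (d_in + d_out)  # low-rank factors A (rank x d_in) and B (d_out x rank)
    return full, lora

# A 4096x4096 projection matrix at LoRA rank 8:
full, lora = lora_param_counts(4096, 4096, 8)
print(full, lora, full // lora)  # -> 16777216 65536 256, i.e. ~256x fewer trainable weights
```

That reduction (plus keeping the base weights frozen, often quantized) is why LoRA-style fine-tuning fits on a single consumer GPU while full fine-tuning generally does not.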


> But are we really at the point yet where people are running local models without knowing what they are running them on..?

I can only speak for myself: it can be daunting for a beginner to figure out which model fits your GPU, as the model size in GB doesn't directly translate to your GPU's VRAM capacity.

There is value in learning what fits and runs on your system, but that's a different discussion.
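A back-of-the-envelope sketch of why file size doesn't equal VRAM: on top of the weights you also need room for the KV cache and runtime overhead. The constants below are rough assumptions for illustration, not exact figures for any particular runtime:

```python
def estimated_vram_gb(model_file_gb: float,
                      kv_cache_gb: float = 1.0,
                      overhead_gb: float = 0.5) -> float:
    """Rough VRAM needed to run a quantized model: the weights as
    stored on disk, plus KV cache for the context window, plus
    framework overhead. All constants here are illustrative."""
    return model_file_gb + kv_cache_gb + overhead_gb

# An 8 GB model file will not fit comfortably on an 8 GB card:
print(estimated_vram_gb(8.0))  # -> 9.5
```

The KV cache term also grows with context length, which is another reason a model that "fits" at a short context can fail at a long one.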


Apparently there are a few more similar communities like the one from the post

https://tildeverse.org/members/


> Pi ships with powerful defaults but skips features like sub-agents and plan mode

Does anyone have an idea as to why this would be a feature? Don't you want to have a discussion with your agent to iron out the details before moving on to the implementation (build) phase?

In any case, looks cool :)

EDIT 1: Formatting.
EDIT 2: Thanks everyone for your input. I was not aware of the extensibility model that Pi had in mind, or that you can iterate on your plan in a PLAN.md file. Very interesting approach. I'll have a look and give it a go.


I plan all the time. I just tell Pi to create a Plan.md file, and we iterate on it until we are ready to implement.

Agreed. I rarely find the guardrails of plan mode to be necessary; I basically never use it in opencode. I have some custom commands for plan-making and discussion.

As for subagents, Pi has sessions, with a full session tree and forking. This is one of my favorite features across all harnesses: build the thing with half the context, then keep using that as a checkpoint, doing new work from that same branch point. You still get a very usable, lengthy context window while keeping good fundamental project knowledge loaded.
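The checkpoint-and-fork idea can be pictured as a simple tree of sessions, where each fork shares everything accumulated before the branch point. This is only a conceptual sketch, not Pi's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """One node in a session tree: it owns only the messages added
    after the fork point and inherits the rest from its parent."""
    messages: list = field(default_factory=list)
    parent: "Session | None" = None

    def fork(self) -> "Session":
        # A new branch sharing all context accumulated so far.
        return Session(parent=self)

    def context(self) -> list:
        # Full context = every ancestor's messages plus our own.
        parent_ctx = self.parent.context() if self.parent else []
        return parent_ctx + self.messages

root = Session(messages=["project overview", "built feature A"])
branch = root.fork()  # checkpoint: reuse the already-loaded project knowledge
branch.messages.append("now do feature B")
print(branch.context())  # -> ['project overview', 'built feature A', 'now do feature B']
```

Forking twice from `root` gives two branches that share the expensive setup context but stay independent afterwards, which is the appeal over re-ingesting the project for each task.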


Check https://pi.dev/packages

There are already multiple implementations of everything.

With a powerful and extensible core, you don't need everything prepackaged.


See my comment in the thread, but there is an intuitive extension architecture that makes integrating these types of things feel native.

https://github.com/badlogic/pi-mono/tree/main/packages/codin...


I agree with you, especially with this:

> They paid for the access the same as any other.

If anything, this makes them more legit than Anthropic, because they are paying for the content, whereas Anthropic just stole *all* the data they got a hold of. So, in this case, the Chinese AI labs stand on higher moral ground LOL.


The article touches a bit on how Sega basically lost. There is literally a whole documentary about this, Console Wars, which goes deep into how Sega lost the battle: https://en.wikipedia.org/wiki/Console_Wars_(film)


Hello. I am happy to take this for a spin.

I see that not all of the models from my GitHub subscription show up (all of them should be visible).

Further, is it possible to use OpenRouter with the current implementation? I couldn't figure it out from the documentation alone.

Thank you!


This is definitely a cool finding.

Have you investigated this topic further? For example, is there anything similar in concept that competes with Serena? If so, have you tested it, and what are your thoughts?


I actually just enhanced my `codescan` project to exceed Serena in some ways:

https://github.com/pmarreck/codescan

Essentially zero-install, no MCP: just tell your agent about its CLI, have Ollama running with a particular embeddings model, and boom.

Now I just need to set up GitHub Actions (ugh) so people can actually download artifacts.
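The core retrieval step behind an embeddings-based code search like this is straightforward: embed the query, then rank code chunks by cosine similarity to it. A minimal sketch, assuming you already have vectors back from an embeddings model (the file names and vectors below are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_matches(query_vec, chunks, k=2):
    """Rank code chunks by cosine similarity to the query embedding.
    `chunks` is a list of (name, vector) pairs."""
    scored = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [name for name, _ in scored[:k]]

chunks = [("parser.py", [0.9, 0.1, 0.0]),
          ("auth.py",   [0.0, 1.0, 0.2]),
          ("utils.py",  [0.5, 0.5, 0.5])]
print(top_matches([1.0, 0.0, 0.0], chunks))  # parser.py ranks first
```

In a real tool the vectors would come from the embeddings model and be cached in an index, but the ranking step is essentially this.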


@pmarreck, Serena developer here. We invite you to contribute to Serena in order to make it better. Serena is free & open-source, and it already robustly addresses the key issues preventing coding agents from being truly efficient even in complex software development projects (while being highly configurable).

We don't believe a CLI is the way to go, though, because advanced code intelligence simply cannot be spawned on the fly and thus benefits from a stateful process (such as a language server or an IDE instance).

