
> I honestly don’t think the models are as important as people tend to believe.

I tend to disagree. While I don't see meaningful differences in _reasoning power_ between frontier models, I do see differences in the way they interact with my prompts.

I use Anthropic models exclusively because my interactions with GPT are annoying:

- Sonnet/Opus behave like a mix of a diligent intern and a peer: they do the work, don't talk too much, give answers, etc.

- GPT is overly chatty, borderline calls me "bro", tends to brush off issues I raise with "it should be good enough for general use", etc.

- I find that GPT hardly ever steps back when diagnosing issues. It picks a possible cause and goes down a rabbit hole of increasingly hacky / spurious solutions. Opus/Sonnet, by contrast, often steps back when the complexity increases too much and digs for an alternative.

- I find Opus/Sonnet to have become "lazy" recently. Instead of systematically doing an accurate search before answering, it tries to "guess", and I have to spot it and directly tell it to "search for the precise specification and do not guess". Often it tells me "you should do this and that", and I have to reply "no, you do it". I wonder if this was done to reduce the number of web searches or the amount of compute it uses unless the user explicitly asks.
