
I feel like the key points are at the bottom.

> Claude 3 Opus and Sonnet are both slower and more expensive than OpenAI’s models. You can get almost the same coding skill faster and cheaper with OpenAI’s models.

GPT-4 already isn't cheap. For coding tasks I've seen cheaper models be quite capable too; I wonder how those stack up here.

> Claude 3 has a 2X larger context window than the latest GPT-4 Turbo, which may be an advantage when working with larger code bases.

No comment here

> The Claude models refused to perform a number of coding tasks and returned the error “Output blocked by content filtering policy”. They refused to code up the beer song program, which makes some sort of superficial sense. But they also refused to work in some larger open source code bases, for unclear reasons.

Depending on how often this occurs, it could be a complete dealbreaker.

> The Claude APIs seem somewhat unstable, returning HTTP 5xx errors of various sorts. Aider automatically recovers from these errors with exponential backoff retries, but it’s a sign that Anthropic may be struggling under surging demand.

Considering they are using OpenRouter, the errors might just as well originate there, especially if Anthropic exposes Claude through a different API format and OpenRouter is doing conversions.
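For reference, exponential backoff with jitter (the recovery strategy the quoted post says aider uses) can be sketched roughly like this. This is a minimal illustration, not aider's actual code; `call_api` and `ServerError` are hypothetical stand-ins for the client call and whatever exception your client raises on an HTTP 5xx response:

```python
import random
import time


class ServerError(Exception):
    """Stand-in for an HTTP 5xx error raised by the API client."""


def call_with_backoff(call_api, max_retries=5, base_delay=1.0):
    """Call call_api(), retrying on ServerError with exponential backoff.

    Sleeps base_delay * 2**attempt plus random jitter between attempts,
    so retries spread out (1s, 2s, 4s, ...) instead of hammering a
    struggling server in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call_api()
        except ServerError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            delay = base_delay * 2 ** attempt
            time.sleep(delay + random.uniform(0, base_delay))
```

The jitter matters when many clients retry at once: without it, they all come back at the same instants and recreate the overload.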



> Claude 3 has a 2X larger context window than the latest GPT-4 Turbo

I feel this is selling claude-3 short. Not only is the context window double that of GPT-4's, but recall over long contexts is also significantly better.



