
I don't know. LLMs are great at writing code, but you have to bring the right ideas to get decent output.

I spend tons of time handholding LLMs--they're not a replacement for thinking. If you give them a closed-loop problem, where it's easy to experiment and check the result for correctness, then sure, they can iterate their way to a working answer. But many problems are open-loop, with no clear benchmark to iterate against.
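For what it's worth, "closed-loop" here just means there's a mechanical check the model's output can be run against, so iteration doesn't need human judgment. A minimal sketch in Python, where `generate_sort` and `describe_failure` are hypothetical stand-ins for the LLM call and feedback step, not real APIs:

    import random

    def check_candidate(sort_fn) -> bool:
        """Verify a candidate sort against Python's built-in on random inputs."""
        for _ in range(100):
            n = random.randint(0, 50)
            data = [random.randint(-1000, 1000) for _ in range(n)]
            if sort_fn(list(data)) != sorted(data):
                return False
        return True

    # The closed loop: generate, test, feed failures back, repeat.
    # (Commented out because generate_sort/describe_failure are hypothetical.)
    #
    # for attempt in range(max_attempts):
    #     candidate = generate_sort(prompt)       # hypothetical LLM call
    #     if check_candidate(candidate):
    #         break
    #     prompt += describe_failure(candidate)   # hypothetical feedback step

An open-loop problem is one where no `check_candidate` exists--think "is this API design good?"--so the model can't grind toward correctness on its own.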

LLMs are powerful if you have the right ideas; the quality of the input determines the quality of the output. Otherwise you get slop that breaks often and barely gets the job done, full of hallucinations and incorrect reasoning, because they can't think for you.


