
Don't be sad. Before LLMs, they would have copied from a deprecated five-year-old tutorial, a fringe forum post, or the defunct code in a Stack Overflow question without even looking at the answers.


That was still better, because you could track down errors: other people used the same code. ChatGPT will just make up functions and methods. When you try to troubleshoot, of course no one has ever had a problem with this completely fake function. And when you tell ChatGPT it's not real, it says "You're right, str_sanitize_base64 isn't a function" and then just makes up something else new.
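A quick sanity check for this failure mode: before trusting a generated call, verify the name actually exists in the library. A minimal sketch in Python, using the hallucinated `str_sanitize_base64` name from the comment above against the real `base64` module:

```python
import base64

def is_real_attr(module, name):
    """Return True if `name` is an actual attribute of `module`."""
    return hasattr(module, name)

# The invented function from the comment does not exist...
print(is_real_attr(base64, "str_sanitize_base64"))  # False

# ...while the genuine decoder does.
print(is_real_attr(base64, "b64decode"))  # True
```

It won't catch subtler hallucinations (real function, wrong signature), but it exposes wholly invented names in one line.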



