Hacker News

This is an excellent theory, IMO. It isn't that the AI has actually gotten much worse; it's that the novelty has worn off, and people are finally starting to notice all of the repetitious patterns and mistakes it makes: the stuff that those of us who never bought into the AI hype noticed from the start. But instead of considering that their initial impressions of large language models' capabilities were wrong or based on partial information, they take the Mandela-effect route and insist that something outside themselves has fundamentally changed.


Pretty sure this is going on to some degree. It seems like some kind of regression testing should be possible on these systems to definitively prove these claims, rather than relying on anecdotal stories that rarely come with concrete examples.
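The regression testing suggested above could be as simple as replaying a fixed set of prompts against the deployed model and checking each answer for expected content. A minimal sketch in Python, where `call_model` is a hypothetical stand-in for the real model API being tested (here stubbed with canned answers so the harness itself is runnable):

```python
from dataclasses import dataclass

@dataclass
class Case:
    prompt: str
    must_contain: str  # substring the answer is expected to include

def call_model(prompt: str) -> str:
    # Hypothetical placeholder: swap in the actual model call.
    # Canned answers let the harness run standalone for illustration.
    answers = {
        "What is 2 + 2?": "2 + 2 = 4",
        "Name the capital of France.": "The capital of France is Paris.",
    }
    return answers.get(prompt, "")

def pass_rate(cases: list[Case]) -> float:
    # Replay every prompt and count answers containing the expected text.
    hits = sum(c.must_contain in call_model(c.prompt) for c in cases)
    return hits / len(cases)

cases = [
    Case("What is 2 + 2?", "4"),
    Case("Name the capital of France.", "Paris"),
]

print(f"pass rate: {pass_rate(cases):.0%}")
```

Running the same suite on a schedule and tracking the pass rate over time would turn "it got worse" into a measurable claim, though real suites would need many more cases and fuzzier scoring than substring matching.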



