
I’m sorry… does this paper just point out that LLMs, by definition, are not as good at holding data as a direct database? Because a) duh, and b) who cares; they’re intuitive language transformers, not knowledge models.

Maybe I’m missing something obvious? This seems like someone torturing math to imply outlandish conclusions that fit their (in this case anti-“AI”) agenda.



It at least disproves the idea that LLMs are 'god models'. They will never be able to solve every problem perfectly.


Humans aren't god models either. The goal is to get this thing to the level of a human. God-like levels are not possible imo.



