I’m sorry… does this paper just point out that LLMs, by definition, are not as good at holding data as a direct database? Because a) duh, and b) who cares? They’re intuitive language transformers, not knowledge models.
Maybe I’m missing something obvious? This seems like someone torturing math to imply outlandish conclusions that fit their (in this case anti-“AI”) agenda.