But what does this have to do with reasoning? Yes, LLMs are not knowledge bases, and seeing people treat them as such absolutely terrifies me. However, I don’t see how the fact that LLMs often hallucinate “facts” is relevant to a discussion about their reasoning capabilities.


"Hallucinating a fact" that isn't in the training set and is also illogical, is exactly what a failure to reason correctly looks like.


Reasoning involves making accurate inferences based on the information provided in the current context, rather than recalling arbitrary facts from the training data.


Yes, that's what I said. The whole point of hallucinations is that they aren't "arbitrary facts recalled from the training data". They represent attempts to synthesize (i.e., infer) new facts. But because the inferences are not accurate, and because the synthesis process is not sound, the attempt cannot be called reasoning.

It is just as possible to "reason" about things you already know as about things you've just been told. In fact, the capacity to attempt such reasoning speculatively, without prompting, is a big part of cognition.



