Not at all; the problem is the word "hallucinations", which I kind of wish people would stop using.
They're not doing anything AT ALL different when they "tell the truth" or "lie" or "get it right" or "get it wrong."
They are remixing groups of word chunks based on scanning older groups of word chunks. That's ALL. Most any other description is going to be overreaching anthropomorphization.
LLMs cannot lie insofar as they cannot tell the truth. They're remarkably good at predicting what token comes next given a bunch of tokens, but nothing else.
Yes, but it's also generative, so at each time step it bases those predictions on its own recent output; that feedback loop makes the quality of its predictions chaotically, unpredictably variable, but nothing else.
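To make that feedback loop concrete, here's a minimal sketch of autoregressive sampling (the model object and its next_token_distribution method are hypothetical stand-ins, not any real library's API):

    import random

    def generate(model, prompt_tokens, n_steps):
        tokens = list(prompt_tokens)
        for _ in range(n_steps):
            # The distribution is conditioned on everything so far,
            # including tokens the model itself just produced.
            probs = model.next_token_distribution(tokens)  # hypothetical: {token: probability}
            next_tok = random.choices(list(probs), weights=list(probs.values()))[0]
            tokens.append(next_tok)  # its own output becomes part of its next input
        return tokens

Nothing in that loop distinguishes a "true" continuation from a "false" one; a badly sampled token simply becomes more context for the next step.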
The only thing horrifying about this situation is the extent to which people are apparently taking these software outputs seriously. Or perhaps the extent to which others are selling the illusion for personal gain.
What's the difference between "getting confused" and "lying" in a predictive model?
Normally, lying means conveying a falsehood that you know is a falsehood, with the intent to deceive. Both the 'know it's a falsehood' and the 'intent to deceive' criteria matter when asking whether a human was lying, and an LLM seems unable to satisfy either, so it can't 'lie'.