If you want to equate "emergence" with "making something up", then fine, I guess. I'm just not sure what you can possibly conclude from that equivalence.
That's why I'm not big on using the word "hallucination" for this. It really describes our subjective experience. Our 'view' of the outside world is constructed in the brain from the senses: things like 'hot', 'pressure', or 'blue' don't exist in the world; we build them internally. Some people call it a 'hallucination', others 'subjective reality' or a 'mental map'.
The problem is that because our brain constructs reality, it can 'make things up'; this has been shown in countless studies. And now that we have AI doing it too, it seems like an easy association to make.
>The problem is that because our brain constructs reality, it can 'make things up'; this has been shown in countless studies. And now that we have AI doing it too, it seems like an easy association to make.
I'm with you until this pair of sentences because I believe you are confusing ontological subjectivity (which is fine, for our purposes) with epistemic subjectivity (which isn't).
Hallucination is an ontologically subjective phenomenon, requiring an experiencer to experience it. "Making shit up" similarly implies an "intentional stance" (Dennett), wherein the AI agent is constructing a world model as it interacts with the world. Neither is required to arrive at a "stochastic parrot" that spouts nonsense.
"Generating nonsense" is closer to what the AI is doing. It's generating text that we are unable to interpret, not revealing its errors of reasoning through its speech. It's not reasoning; it's generating tokens.
tl;dr: Ontological vs epistemic subjectivity. There's no reason to affirm AI is hallucinating because there's no reason to affirm it's experiencing anything.
(Please forgive my multiple edits; it's a clumsy-words kind of day.)
I was speaking in the 'ontologically subjective' sense, about the experiencer. Like Nagel's 'what is it like to be a bat'.
The problem is: if a biological neural net has experiences, does a computer neural net also have experiences?
I don't like 'hallucinate' as a description because everyone confuses it with mental illness. It's more like the old Buddhist parable: mistaking a 'rope' for a 'snake', doing a double take, and then realizing it is just a 'rope'.
Humans can make mistakes based on their incomplete model of the inputs, and thus 'hallucinate' an answer just like an LLM might.
But I'm more in the camp that holds philosophical zombies are impossible: by the time an AI can completely mimic a human, it would also be having some kind of internal experience, maybe not identical to a human's, but something. Of course, this can't be proven either way, for AI or for humans. We only assume other minds experience reality like we do, and even that can't be 'proven'.