I don't think that's the core of the objection at all. I've never seen it made by people pushing the idea that AGI is impossible, just by people arguing that AI approaches like LLMs are a lot more limited than they appear: basically, that most of the intelligence they exhibit is the intelligence already present in the training data.
But in what way isn't most of our intelligence also derived from training data?
First an evolutionary algorithm, and then the constant input we receive from the world serving as the training data, with reward mechanisms rewiring our neural networks based on what our senses interpret as good?
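As a toy sketch of that analogy (purely illustrative Python with invented names, loosely in the spirit of reward-modulated Hebbian learning, and not a model of real neurons): the "reward mechanism rewiring the network" part amounts to an update rule along these lines.

```python
import random

# Four "connections", randomly initialized.
weights = [random.uniform(-1, 1) for _ in range(4)]

def respond(inputs: list[float]) -> float:
    """Produce a response from the current wiring."""
    return sum(w * x for w, x in zip(weights, inputs))

def learn(inputs: list[float], reward: float, rate: float = 0.1) -> None:
    """Rewire: connections active during a rewarded outcome are
    strengthened; those active during a punished one are weakened."""
    for i, x in enumerate(inputs):
        weights[i] += rate * reward * x

senses = [0.5, -0.2, 0.9, 0.1]  # the constant input from the world
before = respond(senses)
learn(senses, reward=+1.0)      # the senses interpreted the outcome as good
assert respond(senses) > before  # the rewired network now responds more strongly
```

The point of the analogy is just that nothing in this loop looks categorically different from training an artificial network against a reward signal.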
As someone teaching their five-year-old to read, I think people way underestimate the amount of training data the average human child gets. And perhaps, since we live in a first-world country with universal education, and a very rich one at that, many people have never seen what happens when kids don't get that training data.
It's not just the qualia of sensation, but also those of the will. We all 'feel' we have a will that can do things. How could a computer possibly feel that? The 'will' in an LLM is imposed by the selection function, which is a deterministic, human-coded algorithm, not an intrinsic property.
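To make "selection function" concrete, here is a minimal sketch (in Python, with made-up names; real decoders differ in detail) of the step that turns an LLM's output scores into a chosen token. Even the "random" sampling variant is an explicit, human-written rule driven by a pseudo-random number generator.

```python
import math
import random

def select_token(logits: dict[str, float], temperature: float = 0.0) -> str:
    """Pick the next token from a {token: score} map."""
    if temperature == 0.0:
        # Greedy decoding: a fully deterministic argmax over the scores.
        return max(logits, key=logits.get)
    # Temperature sampling: softmax the scaled scores, then draw from
    # the resulting distribution with a pseudo-random number.
    scaled = {t: s / temperature for t, s in logits.items()}
    top = max(scaled.values())
    weights = {t: math.exp(s - top) for t, s in scaled.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

print(select_token({"yes": 2.1, "no": 1.9, "maybe": 0.3}))  # -> "yes"
```

Whatever 'wanting' the model appears to exhibit sits downstream of a rule like this, chosen and tuned by a human.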
In my view, this sensation of qualia is so out there and so physically inexplicable that I would not be able to invalidate even some 'out there' theories. If someone told me they posited a new quantum field with scalar values of 'will' that the brain sensed or modified via some quantum phenomenon, I'd believe them, especially if there were an experiment. Even more out-there explanations are possible. We have no idea, so as far as I'm concerned they are all impossible to validate or invalidate.