The claim was made that LLMs just parrot back what they've seen in the training data. They clearly go far beyond this and generate completely novel ideas that are not in the training data. I can give ChatGPT extremely specific and weird prompts that have a 0% chance of being in its training data, and it will answer intelligently.
> The actual construction of a neural network llm refutes your assertions.
I don't see how. There's a common view expressed in these discussions that if the workings of an LLM can be explained in technical terms, then it doesn't understand: "It just uses temperature-induced randomness, etc. etc." Once we understand how the human brain works, it will be possible to argue, in exactly the same way, that humans do not understand: "You see, the brain is just mechanically doing XYZ, leading to the vocal cords moving in this particular pattern."
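For what it's worth, "temperature-induced randomness" just refers to how the next token is drawn from the model's output distribution. A minimal sketch of temperature sampling (illustrative only; the function name and numbers are made up, not any particular model's implementation):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Draw one token index from raw logits, scaled by temperature.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    # Softmax with max-subtraction for numerical stability.
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Logits favoring token 2: at temperature 0.5 it is chosen almost every time.
print(sample_with_temperature([1.0, 2.0, 5.0], temperature=0.5))
```

"It's just sampling from a distribution" describes the mechanism, but says nothing about whether the system producing that distribution understands anything.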
> They clearly go far beyond this and generate completely novel ideas that are not in the training data.
There's a case where this is trivially false. Language. LLMs are bound by language that was invented by humans. They are unable to "conceive" of anything that cannot be described by human language as it exists, whereas humans create new words for new ideas all the time.
I just asked ChatGPT to make up a Chinese word for hungry+angry. It came up with a completely novel word that actually sounds okay: 饥怒. It then explained to me how it came up with the word.
You can't claim that that isn't understanding. It just strikes me that we've moved the goalposts into ever more esoteric corners: sure, ChatGPT seems like it can have a real conversation, but can it do X extremely difficult task that I just thought up?
Uh, I believe you're really confused about things like ChatGPT versus LLMs in general. You don't have to feed human language to an LLM for it to learn things. You can feed it wifi waveforms, for example, and it can 'learn' insights from that.
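To make that concrete, here's a hypothetical sketch of turning an arbitrary signal (a synthetic stand-in for the wifi waveform example; the function name and bin count are my own invention) into the same kind of discrete token sequence a next-token model trains on:

```python
import numpy as np

def waveform_to_tokens(samples, n_bins=256):
    """Quantize a raw waveform into a discrete token sequence.

    Once a signal is expressed as tokens, a next-token-prediction model
    can be trained on it the same way it is trained on text.
    """
    samples = np.asarray(samples, dtype=np.float64)
    # Normalize to [0, 1], then bucket into n_bins discrete levels.
    lo, hi = samples.min(), samples.max()
    normalized = (samples - lo) / (hi - lo + 1e-12)
    return np.minimum((normalized * n_bins).astype(int), n_bins - 1)

# A noisy sinusoid standing in for a real RF capture.
t = np.linspace(0, 1, 1000)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(1000)
tokens = waveform_to_tokens(signal)
print(tokens[:20])  # a token sequence, ready for any sequence model
```

None of those tokens are "human language"; the model is just predicting the next symbol in a sequence.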
Furthermore, your thinking here doesn't even begin to explain multimodal models at all.