There are probably many, but the most glaring one is that an LLM has to write a word every time it thinks, meaning it can't solve a problem before it starts writing down the solution. That is an undeniable limitation of current architectures: the way the LLM answers your question is also its thinking process, so you have to trigger a specific style of response if you want it to be smart with its answer.
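To make that concrete, here's a minimal sketch of greedy autoregressive decoding (using gpt2 purely as an illustration; the prompt and token count are arbitrary). Each step produces exactly one token, and that token has to be emitted before the next step can happen, which is why the prompt's style shapes the "reasoning" you get back.

```python
# Minimal sketch: greedy token-by-token decoding with a small open model.
# The only state carried between steps is the text generated so far.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Q: What is 17 * 24? Let's think step by step."  # arbitrary example prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):                       # generate 40 tokens, one at a time
        logits = model(input_ids).logits      # scores for the next token only
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        # The token is written out before any further computation happens;
        # there is no persistent scratchpad beyond the text itself.
        input_ids = torch.cat([input_ids, next_id], dim=-1)

print(tokenizer.decode(input_ids[0]))
```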
>but I just focus on something and the answer pops into my head.
It's perfectly valid to say "I don't know", because no one really understands these parts of the human mind.
The point here is that saying "Oh, the LLM thinks word by word, but I have a magical black box that just works" isn't good science, nor is it a good way of judging what LLMs are and aren't capable of.
That's a difficult question to answer, since I must be doing a lot of very different things while thinking. For one, I'm not sure I'm ever not thinking. Is thinking different from "brain activity"? We can shut down the model, store it on disk, and boot it back up. Shut down my brain and I'm a goner.
I'm open to saying that the machine is "thinking", but I do think we need clearer language to distinguish between machine thinking and human thinking.
EDIT: I chose the wrong word with "thinking" when I was trying to point out the logical fallacy of anthropomorphizing the machine. It would have been clearer if I had used the word "breathing": when I write I'm breathing, so the machine must also be breathing.
I don’t think that “think” is the wrong word here. I believe people are machines - more complicated than GPT4, but machines nevertheless. Soon GPT-N will become more complicated than any human, and it will be more capable, so we might start saying that whatever humans do when they think is simpler or otherwise inferior to what future AI models will do when they “think”.
What is it exactly that you do when you “think”? And how is it different from what an LLM does? Not saying it’s not different, just asking.