I find it interesting that this kind of "animal intelligence" is still so far away, while LLMs have become so good at "human intelligence" (language) that they can reliably pass the Turing Test.
I think that the LLMs we have today aren't so much artificial brains as they are artificial brain organs, like the speech center or vision center of a brain. We'd get closer to AGI if we could integrate them with the rest of a brain, but we still have no idea how to even begin building, say, a motor cortex.
You're absolutely right, and reflecting on it shows why the article is horribly wrong. Humans are multimodal—they're ensemble models where many functions are highly localized to specific parts of the hardware. Biologically these faculties are "emergent" only in the sense that (a) they evolved through natural selection and (b) they need to be grown and trained in each human to work properly. They're not at all higher-level phenomena emulated within general-purpose neural circuitry. Even Nature thinks that would be absurdly inefficient!
But accelerationists, like Yudkowskites, are always heavily predisposed to believe in exceptionalism—whether it's of their own brains or someone else's—so it's impossible to stop them from making unhinged generalizations. An expert in Pascal's Mugging[1] could make a fortune by preying on their blind spots.
The brain is not a statistical inference machine. In fact, humans are terrible at inference. Humans are great at pattern matching and extrapolation (to the extent that it produces a number of very noticeable biases). Language and vision are no different.
One of the known biases of the human mind is finding patterns even when there are none. We also compare objects or abstract concepts with each other even when the two have nothing in common. We usually compare our brain to our most advanced consumer technology. Previously this was the telephone, then the digital computer; when I studied psychology we compared our brain to the internet, and now we compare it to large language models. At some future date the comparison to LLMs will sound as silly as the older comparison to telephones does to us.
I actually don't believe AGI is possible: we see human intelligence as unique, and if we create anything that approaches it we will simply redefine human intelligence to still be unique. But I also think the quest for AGI is ultimately pointless. We have human brains, 8.2 billion of them; why create an artificial version of something we already have? Telephones, digital computers, the internet, and LLMs are useful for things that the brain is not very good at (well, maybe not LLMs; that remains to be seen). Millions of brains can only compute pi to a fraction of the decimal places that a single computer can.
>why create an artificial version of something we already have
Why build a factory to produce goods more cheaply? Because the rich get richer and become less reliant on the whims of labor. AI is the industrialization of knowledge work.