
On the contrary, the parallels between the peculiarities of LLMs and various aspects of human cognition seem very striking to me. Given how early we are in figuring out what we can accomplish with LLMs, IMO the appropriate epistemic stance is to not reach any unequivocal conclusions. And then my personal hunch is that LLMs may be most of the magic, with how they're orchestrated and manipulated being the remainder (which may take a very long time to figure out).


I think it's just that I understand LLMs better than you, and I know that they are very different from human intelligence. Here are a couple of differences:

- LLMs use fixed resources when computing an answer. And to the extent that they don't, it is because they are making function calls, and that behaviour is not attributable to the LLM. For example, when a model uses a calculator, it is displaying the calculator's intelligence, not its own (see the sketch after this list).

- LLMs do not have memory, and to the extent that they do, it is recent, limited, and unlike that of any being so far. They don't remember what you said four weeks ago, and they don't incorporate it into their future behaviour. Where they appear to, the way they are trained and made to remember is very different from how humans do it, and reflects their being offered as a free service to many users at once. Again, to the extent that they are capable of remembering, those properties are not properties of the LLM but of another layer invoked via function calling.
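
A minimal sketch of the point both items above are making, assuming a toy orchestration loop: the model stub (fake_llm), the calculator tool, and the MEMORY list are all hypothetical stand-ins, not any real API. The arithmetic and the recall both happen in ordinary code outside the model, which only ever sees and emits text.

    import json

    MEMORY = []  # external memory store; the model itself retains nothing between turns

    def calculator(expression):
        # The "calculator intelligence": plain Python arithmetic, outside the model.
        return str(eval(expression, {"__builtins__": {}}))

    def fake_llm(prompt):
        # Stand-in for a real model call. A real LLM would decide from context
        # when to request a tool; this stub hard-codes that decision.
        if "Tool result:" in prompt:
            return "According to the tool, the answer is " + prompt.split("Tool result: ")[-1]
        if "347 * 29" in prompt:
            return json.dumps({"tool": "calculator", "args": "347 * 29"})
        return "I don't compute or recall anything on my own."

    def run_turn(user_message):
        # 1. Inject the external memory into the prompt; recall lives out here.
        prompt = "\n".join(MEMORY + ["User: " + user_message])
        reply = fake_llm(prompt)

        # 2. If the model emitted a tool request, the orchestration layer does the work.
        try:
            call = json.loads(reply)
            result = calculator(call["args"])
            reply = fake_llm(prompt + "\nTool result: " + result)
        except (ValueError, KeyError):
            pass

        # 3. Persist the exchange so a later turn can appear to "remember" it.
        MEMORY.append("User: " + user_message)
        MEMORY.append("Assistant: " + reply)
        return reply

    print(run_turn("What is 347 * 29?"))  # the multiplication is done by calculator(), not the model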

LLMs are a perception layer for language, and perhaps an output layer for generation, but they are not the intelligence itself.



