
It seems a stretch to call it "inevitable". "Inevitable given the current architecture without modifications" at most.

Also, I'm missing a section on how (or whether) human brains manage to avoid hallucinations.

Also, it doesn't have to never hallucinate, it just has to hallucinate less than we do.



Because we have parts of our brain that supervise other parts of our brain and evaluate their output.

For example: if you smoke pot and get paranoid, it's because pot dials back the work of the part of your brain that prunes thought paths that aren't applicable. Normally, paranoid thoughts don't make sense, so they are discarded. That's also why you're more 'creative' when you smoke pot: fewer thought paths are pruned, so more stuff that doesn't quite make sense gets through, along with thoughts that fixate on details that normally wouldn't matter.

Our brains are inherently "higher level"; current AI is hopelessly simplistic by comparison.


Humans do hallucinate; there's lots of literature on how memories are distorted, how we see and hear things we want to see and hear, etc.

The particular pathology of LLMs is that they're literally incapable of distinguishing facts from hallucinations even in the most mundane circumstances: if a human is asked to summarize the quarterly results of company X, unlike an LLM they're highly unlikely to recite a convincing but completely fabricated set of numbers.


And yet if you ask a random person at a rally about their favourite cause of the day, they usually spew sound bites that are factually inaccurate, and give every impression of being as earnest and confident as the LLM making up quarterly results.


I think that case is complicated at best, because a lot of things people say are group identity markers and not statements of truth. People also learn to not say things that make their social group angry with them. And it's difficult to get someone to reason through the truth or falsehood of group identity statements.


I guess it's similar to what Chris Hitchens was getting at: you can't reason somebody out of something they didn't reason themselves into.


According to Buddhist philosophy, our whole identity is a hallucination :) I kind of concur.


Username checks out :)


I'll honestly take this as a compliment.


Perhaps solving hallucinations at the LLM level alone is impossible, hence the inevitability. I reckon that lots of human “hallucination” is simply caught by higher-level control loops operating over the output of the generative mechanism. Basically, our conscious mind says, “nah, that doesn’t look right” enough that most of the time most of us don’t “hallucinate”.


So this implies that instead of spending resources on training bigger and bigger LLMs, AI practitioners need to shift focus to developing “ontological” and “epistemological” control loops to run on top of the LLM. I suspect they already have rudimentary versions of such control loops. In a sense, the “easier” part of AI may be largely “solved”, leaving the development of “consciousness”, which is obviously the hard part.
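
To make that concrete, here's a toy sketch of what such a control loop could look like, with both the generator and the fact store stubbed out. Every name here is made up for illustration; nothing corresponds to a real API:

    # Toy "epistemological" control loop: a generator proposes claims, a
    # checker vets them against facts grounded in the source document, and
    # only vetted claims are emitted. All names are hypothetical.

    fact_store = {
        "revenue_q3": "4.2B",  # pretend these were extracted from the filing
        "eps_q3": "1.10",
    }

    def propose(question):
        # Stand-in for the generative model: some claims are grounded,
        # some are fabricated.
        return [("revenue_q3", "4.2B"), ("eps_q3", "2.75"), ("guidance_q4", "6.0B")]

    def verify(claim, value):
        # Stand-in for the supervising loop: only pass claims we can ground.
        return fact_store.get(claim) == value

    def answer(question):
        kept, pruned = [], []
        for claim, value in propose(question):
            (kept if verify(claim, value) else pruned).append((claim, value))
        return kept, pruned

    kept, pruned = answer("Summarize company X's Q3 results")
    print("kept:", kept)      # [('revenue_q3', '4.2B')]
    print("pruned:", pruned)  # [('eps_q3', '2.75'), ('guidance_q4', '6.0B')]

The hard part is of course the verify step: in the toy it's a dictionary lookup against facts extracted from the source, while in a real system it's exactly the unsolved “epistemological” problem.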


When I studied NLP, Language Models were only one part of a chatbot system, used to handle language input and output. The "internal" reasoning would be handled by a knowledge representation system. I guess that's the closest part to a true general AI.

The first-order predicate logic we studied had a lot of limitations in fully expressing real knowledge, and developing better models delves deep into the foundations of logic and mathematics. I would imagine this is a problem that has less to do with funding than with requiring literal geniuses to solve it. And that goes back to the pitfalls of the AI winters.
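
For flavour, here's roughly the kind of thing I mean, boiled down to a toy: facts plus a single Horn rule and naive forward chaining. Everything here is made up for illustration (real systems used Prolog, description logics, etc. and were far richer):

    # Toy knowledge representation: facts as (predicate, individual) pairs,
    # single-premise Horn rules, and naive forward chaining.

    facts = {("bird", "tweety"), ("penguin", "pingu"), ("bird", "pingu")}

    # Rule: bird(X) -> can_fly(X)
    rules = [("bird", "can_fly")]

    def forward_chain(facts, rules):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                for pred, x in list(derived):
                    if pred == body and (head, x) not in derived:
                        derived.add((head, x))
                        changed = True
        return derived

    print(sorted(forward_chain(facts, rules)))
    # Also derives ('can_fly', 'pingu'): classical first-order logic has no
    # clean way to say "birds fly, except penguins" without bolting on
    # defaults / non-monotonic reasoning, which is where it gets hard.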


Our brains are very modular. I wouldn't be surprised at all if a similarly modular structure turned out to be the next big step for LLMs.


Or catch itself hallucinating? I feel like humans do that a fair bit.

How often do we sit somewhere thinking about random scenarios that will never happen, filled with wild thoughts and sometimes completely out-of-this-world situations... then we shake our heads, throw away the impossible parts of that thought train, and keep only what was grounded in reality.



