Hacker News

That is a horrifying answer, if you think about it. To suggest "it's not getting confused, it's just lying" without being able to determine why?


Not at all. The problem is the word "hallucinations," which I kind of wish people would stop using.

They're not doing anything AT ALL different when they "tell the truth" or "lie" or "get it right" or "get it wrong."

They are remixing groups of word chunks based on scanning older groups of word chunks. That's ALL. Almost any other description is going to be overreaching anthropomorphization.


LLMs cannot lie insofar as they cannot tell the truth. They're remarkably good at predicting what token comes next given a bunch of tokens, but nothing else.


Yes, but it's also generative, so at each time step it bases those predictions on its own recent output. That makes the quality of its predictions chaotic and unpredictable, but nothing else.
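To make the point concrete, here's a minimal sketch of autoregressive generation. The "model" is a hypothetical toy bigram table standing in for an LLM's next-token distribution; the key property is the feedback loop, where each predicted token is appended to the context and conditions the next prediction:

```python
import random

# Toy stand-in for a language model's next-token distribution:
# a bigram table mapping a token to its possible successors.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "dog": ["ran", "sat"],
    "sat": ["down"],
    "ran": ["away"],
}

def next_token(context):
    # Predict from the last token only; a real LLM conditions on
    # the whole context window, but the loop structure is the same.
    choices = BIGRAMS.get(context[-1])
    return random.choice(choices) if choices else None

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)
        if tok is None:
            break
        # The prediction becomes part of the input for the next step,
        # so an early misstep compounds: the model is conditioning
        # on its own output, not on any ground truth.
        tokens.append(tok)
    return tokens

print(generate(["the"]))  # e.g. ['the', 'cat', 'sat', 'down']
```

Nothing in this loop distinguishes a "true" continuation from a "false" one; both are just samples from the same distribution.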


The only thing horrifying about this situation is the extent to which people are apparently taking these software outputs seriously. Or perhaps the extent to which others are selling the illusion for personal gain.


What's the difference between "getting confused" and "lying" in a predictive model?

Normally, lying means conveying a falsehood that you know is a falsehood, with the intent to deceive. Both 'knowing it's a falsehood' and 'intent to deceive' are important criteria when asking whether a human was lying, and an LLM seems like it can't satisfy either, so it can't 'lie'.


Absolutely none whatsoever, and to think otherwise is to fundamentally misunderstand the whole thing by overly "humanizing" them.


I don't know where you're getting "confused" from. This isn't about some subtle semantic distinction between a machine being confused vs lying.

The original submission is claiming that user data is leaking between sessions. That would be a huge privacy and security problem, if true.

And in contrast to that, an LLM doing pretty much what it's supposed to be doing is both more likely and, well, not a problem at all.

Nothing in the submitted link suggests the former. It is a bunch of people crying wolf with no compelling evidence.

