
It's all anthropomorphic projection. LLMs don't have personalities or emotions; as you point out, the text is produced by tensor and matrix calculations.

The idea is kind of absurd on its face; it's akin to asking "do characters in books have emotions?". No, they don't; why would they?



Characters in a book don't write their own dialogue; they are merely projections of their authors. But AI chatbots are non-deterministic and come up with dialogue of their own. Of course, you can say they are trained on a large corpus of text and all they are doing is predicting the next token in the sequence. But then why do they seem to have distinct personalities? The prediction algorithm works in such a way that it creates a sense of personality. How is this happening?


> "But AI chatbots are non-deterministic"

I think you need to re-math that. They're entirely and totally deterministic.

You and I can spin up the same model, seed it with the same seed, and they'll produce exactly the same nonsense for the same prompts, every single time.
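To make the claim concrete, here is a minimal sketch (not a real LLM — the vocabulary, scores, and seed value are invented for illustration) of a toy next-token sampler. Fixing the RNG seed makes the sampling fully deterministic: two runs with the same seed produce the identical token sequence.

```python
import math
import random

def sample_tokens(seed, n_tokens=10):
    """Toy 'language model': repeatedly sample from a fixed distribution."""
    rng = random.Random(seed)           # same seed -> same RNG stream
    vocab = ["the", "cat", "sat", "on", "mat"]
    scores = [2.0, 1.0, 0.5, 0.3, 0.1]  # fixed, made-up per-token logits
    weights = [math.exp(s) for s in scores]
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n_tokens)]

run_a = sample_tokens(seed=42)
run_b = sample_tokens(seed=42)
assert run_a == run_b  # identical output, every single time
```

A real model's "scores" come from the network instead of a fixed list, but the sampling step at the end works the same way.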


They have a seed, but lots of randomness is injected, and that makes them non-deterministic by definition.

Different applications use different levels of randomness (the temperature setting), but chatbots are usually run at higher levels, especially the entertainment ones.

You can make something that produces the same output every time. This is more commonly used for code autocompletes. So if you try to chat with Copilot, you'll likely get something more deterministic.
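The "levels of randomness" above usually mean the temperature knob. A rough sketch (logits and vocabulary sizes are invented for illustration): at temperature 0 the sampler always picks the highest-scoring token, so the output never varies; at higher temperatures the distribution flattens and an unseeded RNG makes runs differ.

```python
import math
import random

def sample(logits, temperature, rng):
    """Pick a token index from logits, scaled by temperature."""
    if temperature == 0:                 # greedy: always the top token
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [3.0, 1.0, 0.2]
rng = random.Random()                    # unseeded: varies across runs
greedy = [sample(logits, 0, rng) for _ in range(5)]
assert greedy == [0, 0, 0, 0, 0]         # temperature 0 never varies
sampled = [sample(logits, 1.5, rng) for _ in range(5)]  # may differ run to run
```

This is why a code autocomplete tuned toward low temperature feels deterministic while an entertainment chatbot does not.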

ChatGPT is probably toned down more than most, to keep it from going off the rails. It is a bit sad that the AI most people have access to is rather "caged". The "jailbreak" techniques actually show the AI closer to what it's like. Saying ChatGPT is deterministic is like saying a human in a call center is deterministic; they're following an SOP, but that's not what they're really like.


Can you expand on what further randomness is supposedly injected? Every single AI system I have used has supported seeding, and seeding made every single one deterministic. I honestly don't know how you're supposed to have "randomness" in AI models if the RNG is seeded.

It's a different story if you use specific libraries like Xformers, but those introduce randomness as an artifact of their optimizations, not due to any magical non-seeded randomness.
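The point about optimizations is worth unpacking: floating-point addition is not associative, so a kernel that reorders a reduction (as optimized GPU code often does for parallelism) can return slightly different results even with a seeded RNG. A tiny illustration with invented values:

```python
# Left-to-right summation vs a reordered grouping of the same numbers.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = sum(vals)                       # 1e16 + 1.0 loses the 1.0
reordered = (vals[0] + vals[2]) + (vals[1] + vals[3])

assert left_to_right != reordered               # 1.0 vs 2.0
```

When logit differences are tiny, a perturbation like this can flip which token gets sampled, which is exactly the kind of non-seeded nondeterminism such libraries introduce.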


I wouldn't waste my time, personally. Most God of the Gaps arguments trace back to intellectual cowards, in my experience.


It's entirely deterministic. "chatgpt" is not an entity; it's a process. It doesn't even exist in a temporal sense, and the sentences it produces aren't thoughts; it's a systematic technique for producing voluble text.


The personality you say they have happens in the head of the recipient, just like with a book.

This also happens with people, animals, and even objects and abstract shapes (which is friendlier, the spiky triangle or the rounded one?). Humans ascribe personality to all kinds of things. It's a baked-in feature.



