To put it in the words of Kyle Reese: “That Terminator is out there, it can't be bargained with, it can't be reasoned with, it doesn't feel pity or remorse or fear, and it absolutely will not stop… EVER, until you are dead!”
LOL. Years ago I was listening to an interview with an MIT Media Lab alumna, who answered an audience question about Star Trek. Of course she’s a fan, she explained, but Star Trek is a work of fiction, and her work in the lab, while imaginative, is not fictional.
Similarly, anthropomorphizing technology is entertaining, but it should stop there. At least that’s the notion I subscribe to.
It's an interesting question, and I'm sure there are useful things to learn from it, but the focus on whether AI is "alive" ruins the article. Take this sentence:
> Many people believe LLMs are just equations, mechanically churning through statistically derived calculations.
Those people would be factually correct, as that is exactly what an LLM is. Whether that implies LLMs should have some kind of "rights" is irrelevant to this conversation; I don't know why the article keeps talking about it.
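To make that concrete, here's a toy sketch, entirely made up for illustration and not from the article, of what "mechanically churning through statistically derived calculations" means in the degenerate case: a bigram model. A real LLM replaces the count table with billions of learned weights, but the loop has the same shape: statistics in, sampled tokens out.

```python
from collections import Counter, defaultdict
import random

# Tiny made-up corpus; in a real model this would be trillions of tokens.
corpus = "the cat sat on the mat and the cat ran".split()

# Empirical next-word statistics: counts[prev][next] = occurrences.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(word):
    # Turn raw counts into a probability distribution and sample from it.
    options = counts[word]
    if not options:
        return None  # dead end in the toy corpus
    return random.choices(list(options), list(options.values()))[0]

# Generation is just this loop: condition on context, sample, repeat.
word = "the"
out = [word]
for _ in range(8):
    word = sample_next(word)
    if word is None:
        break
    out.append(word)
print(" ".join(out))
```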
It all feels very much like "well there's this obvious explanation... or maybe we have created AGI and it's trying to communicate!" Not useful.
I appreciate the article trying to balance the "mechanist" and "cyborgist" viewpoints.
And even if we subscribe to the mechanist viewpoint ("LLMs are math; 'bored' isn't a useful descriptor"), this still feels like it's measuring something useful. In humans we'd probably call it creativity and drive: given no task, inventing a new programming language or writing poetry instead of repeating the same thing in a loop. Those are useful properties. For example, if you used an LLM as a personal assistant, you'd want it to show some initiative and do quirky or useful things on its own, without an explicit prompt for each of them; a rough sketch of that idea follows below. The test performed in the article is just a very extreme case of that.
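Here's a hypothetical sketch of that "initiative" property, not anything from the article: an assistant loop that, when its task queue runs dry, prompts the model to propose its own next action instead of idling. `call_llm`, `IDLE_PROMPT`, and the queue setup are all stand-ins invented for illustration, not any real API.

```python
import queue

def call_llm(prompt: str) -> str:
    # Placeholder: swap in a real completion API here.
    return f"[model output for: {prompt}]"

IDLE_PROMPT = (
    "You have no pending tasks. Propose one small, useful thing "
    "you could do for your user right now."
)

def assistant_loop(tasks, max_steps=3):
    for _ in range(max_steps):
        try:
            task = tasks.get_nowait()   # normal operation: do what was asked
        except queue.Empty:
            task = IDLE_PROMPT          # idle: the article's "bored" scenario
        print(call_llm(task))

work = queue.Queue()
work.put("Summarize today's inbox")
assistant_loop(work)  # one real task, then two unprompted "initiative" turns
```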
Naming a phenomenon "collapse" doesn't make it more interesting than it actually is. This article showcases nothing substantial, and the author themselves often admits that the findings aren't really findings and are easily explained.
> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.
Since you post on HN exclusively for self-promotion, flagging your submissions on sight is the right thing to do.
> I told it that it had “10 hours” and nothing to do, and to use that time
This is silly. AI does not have feelings or thoughts or ambitions or goals. We inject our own into the prompt, and the model bounces them off the weights that came from its training and produces output. So giving it instructions to do nothing for 10 hours and then expecting something other than a wordy version of "nothing" just shows a misunderstanding of what an LLM is.