> advocating this is a likely outcome of creating spam generators is laughable
They're used as spam generators because they're cheap.
The quality in many fields is currently comparable to someone in the middle of a degree in that field, which makes the quoted comparison a bit like the time Pierre Curie strapped a lump of radium to his arm for ten hours to see what it would do. I can imagine him reacting "What's that you say? A small lump of rock in a test tube might give me aplastic anemia*? The idea is laughable!", except he'd probably have said that in French.
Even the limits of current models, assuming we're using those models to their greatest potential (we're probably not), aren't a safety guarantee: there is no upper bound to how much harm can be done by putting an idiot in charge of things, and the Peter Principle applies to AI as well as to humans, as we're already seeing AI used for tasks it is inadequate to perform.
* he died after being run over by a horse-drawn cart; Marie Curie developed aplastic anemia, and he likely would have too if the other accident hadn't got him first.
Bonus irony: the general idea he had in this regard, using radiation to treat cancer, is correct and currently in use. They just didn't know anywhere near enough at the time to do it safely.
> They're used as spam generators because they're cheap.
No, the current AI fad, LLMs, are text generators. Very good, but nothing more than that.
> there is no upper bounds to how much harm can be done by putting an idiot in charge of things
Which is not an AI problem. An AI may kill people indirectly, in a setup like an emergency-services chatbot where a bad decision is taken, but it certainly couldn't roam the streets with a Kalashnikov killing people at random or stabbing children (and if that ever happens, politicians will say it has nothing to do with AI). The proponents of "AI can kill us all" can't write a single likely, non-contrived example of how that could happen.
> No, the current AI fad, LLMs, are text generators. Very good, but nothing more than that.
That doesn't address the point, and is also false.
Transformers are token generators, which means they can also handle images, sound, and DNA sequences.
But even if they were just text, source code is "just text", laws are "just text", contract documents are "just text".
They have been used to control robots, both as input and output.
> Which is not an AI problem
"Good news, at least 3,787 have died and it might be as bad as 16,000!"
"How is that good news?"
"We're an AI company, and it was our AI which designed and ran the pesticide plant that exploded in a direct duplication of everything that went wrong at Bohpal."
"Again, how is this good news?"
"We can blame the customer for using our product wrong, not our fault, yay!"
"I'm sure the victims and their family will be thrilled to learn this."
> it certainly couldn't roam the streets with a Kalashnikov killing people at random or stabbing children
It can when it's put in charge of a robot body.
There's multiple companies demonstrating this already.
Pretending that AI can't be used to control robots is like saying that nothing that happens on the internet has any impact on real life.
Fortunately, the AIs that have been given control of robot bodies so far aren't doing that. Want to risk your life with the humanoid-robot equivalent of the Uber self-driving car?
> The proponents of "AI can kill us all" can't write a single likely and non-contrived example of how that could happen.
Anything less contrived would be something we could trivially prevent.
It's not like "dig up all the fossil fuels and burn them despite public protest about climate change and the existence of alternatives, and suing the protesters with SLAPP suits so we can keep doing it because it's inconvenient to believe the science and even if it did the consequences won't affect us personally", doesn't sound contrived.
And that's with humans making the decisions, humans whose grandkids would be affected.