
> The future timeline in Terminator does involve something like an AI making "a billion more robots [to] take over the world."

Yes. That's distinctly different from Skynet iterating on itself a billion times to make itself smarter, which, AFAIK (I'm not up to date with the full Terminatorverse, but then, most people aren't either), isn't something that happened in that story.

> The popularity of that and similar sci-fi makes that claim that someone has never encountered it hard to believe.

Again, very little of what we're discussing here shows up in mass-market sci-fi. And most people, including many in tech, have a hard time wrapping their heads around the idea of a feedback loop, so no, I don't think this is something you'd readily pick up from mass-market sci-fi.

(But the more niche, better-thought-out works will teach you about feedback loops, and that's just one of the routes by which recursive self-improvement becomes an obvious idea.)
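To make the feedback-loop part concrete, here's a toy sketch (the growth model and every number in it are made up purely for illustration, not a claim about how any real system behaves). Each generation's improvement scales with current capability; make that scaling even slightly superlinear and the loop runs away:

    # Toy capability feedback loop; all parameters are illustrative.
    def run(generations: int, exponent: float, rate: float = 0.1) -> list[float]:
        capability = 1.0
        history = [capability]
        for _ in range(generations):
            # Improvement this generation scales with current capability.
            capability += rate * capability ** exponent
            history.append(capability)
        return history

    # Linear feedback grows steadily; superlinear feedback explodes.
    print([round(x, 2) for x in run(10, exponent=1.0)])
    print([round(x, 2) for x in run(10, exponent=1.5)])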

> So how has that been going? Those things should probably be labeled "science fiction."

Eugenics? We had to ban it and build such a strong cultural (and legal) repulsive field around it that it now impedes biotech and medical research.

Designer babies? Weren't attempts made in China recently? And in the West we're already correcting congenital defects, so all in all it's less "science fiction" and more "science someone is going to apply soon, if they haven't already".

> How do you know it would have an easier job optimizing itself than humans have?

Because it was created by us, using processes and media that are strongly optimized for malleability: software, algorithms, digital data, optimization models. All well-defined (and comprehensible to an AGI, by definition) - unlike our own minds, which were not made by us but by a dumb, random process, and the fact that brains are built from stupidly complex nanotech instead of simple transistors doesn't help.

Also because the kind of model we're now worried about gains capability through an open-ended optimization process, limited only by the availability of training data and compute. So if, e.g., a successor of GPT-4 were to become AGI, it would be set up for recursive self-improvement from day one.

> How do you know there isn't some fundamental contradiction in the concept of "superintelligence" that these fantasies are based on? Or even just some practical resource limits that makes the fantasy impossible?

Maybe, but what makes you think that's the case? We know of some fundamental limits to computation, but we're very, very far from hitting them. Beyond that, I don't know of anything that would cap intelligence at around the human level. Remember: by the very nature of evolution, we're roughly the dumbest possible beings capable of learning and building a technological civilization - civilization appeared about as soon as that threshold was crossed. There may be better brain designs than ours, but ours "took off", and we took over the world.




