Hacker News

Nice potshots at UC Berkeley philosophers... though the ones from Stanford would have similar objections to the robot's actions


I believe he's referring to Berkeley's Hubert Dreyfus, and perhaps John Searle, who are well-known for their critiques of the prospects for AI.


And to be fair, so far they've been right: no robotic AI remotely similar to this scenario is visible as even a distant blip on the horizon.

Their point is entirely fair that robots would have to deal with a continuously ambiguous world and would lack anything like this fable's "general good purpose" module for resolving ambiguity in a touchy-feely way. Of course, the complexity of human interaction wouldn't appear suddenly in a moment of interaction with one drug addict; it would hit and crush any "real world" AI the moment it tried to get out the door.


A real-world AI would almost certainly have to learn the rules of its environment rather than being hard-coded with arbitrary human-designed rules. Machine learning is getting better and better at doing this.

If we ever got them as intelligent as the robots in this story, we'd have no way of programming them with abstract, high-level goals (i.e. "do good", "don't hurt humans", etc.) except by giving them examples of robots hurting people and robots not hurting people, and hoping they infer the pattern we want from them.
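A toy sketch of what "infer the pattern from examples" means in practice. Everything here (the two features, the labels, and the perceptron learner itself) is invented purely for illustration; the point is that the machine only ever sees labeled examples, and the rule it recovers may or may not be the rule we intended:

```python
# Toy sketch: teaching a "don't hurt humans" goal purely from labeled
# examples. A simple perceptron infers a linear rule from the data;
# features and labels below are hypothetical.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) with label in {0, 1}."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical features: (force_used, human_present); label 1 = "harmful"
examples = [
    ((1.0, 1.0), 1),  # force used on a human: labeled harmful
    ((1.0, 0.0), 0),  # force used, no human present: not harmful
    ((0.0, 1.0), 0),
    ((0.0, 0.0), 0),
]
w, b = train_perceptron(examples)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The learner happens to recover the intended rule on these four points, but nothing guarantees it generalizes the way we meant; that gap is exactly the danger the comment describes.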

This is an (extremely) simplified argument for the dangers of AI.


The only "real world AI" example we have is us, human beings ourselves.

Humans manage both to learn from their environment and to learn by being told rules. A person would have a hard time demonstrating intelligence if they couldn't be instructed in things, so it seems anything intelligent we construct would have to have those abilities too.

I suppose it's a natural overreaction for people to believe that if intelligence is not just rule-following, it must not be rule-based at all. I believe the truth is somewhere in the middle.


An intelligence that smart would likely understand what you are saying and what you want. That doesn't mean the AI would want to do what you tell it to do though.

Comparing it to humans, if you tell a human you want them to do something it doesn't mean they will do it even though they understand you.

If we train the AI the same way we do today, it would involve giving it examples of robots doing what they are told and robots failing to do that. That approach would likely fail because of all the possible ambiguities involved in interpreting meaning.

Other approaches, like giving a robot a reward every time it does something right and a punishment every time it does something wrong, might result in the robot killing its master and stealing its reward/punish button.


>An intelligence that smart would likely understand what you are saying and what you want. That doesn't mean the AI would want to do what you tell it to do though.

I haven't yet seen any evidence that concepts like "wanting" or "desire" have any meaning outside the context of humans.

I agree that if we could produce an AI with various blind methods, it would likely be a dangerous thing.

I simply also doubt we could produce an AI in this fashion. I mean, you couldn't train a functional human by putting him/her in a room with just rewards and punishments.

I would note that even the animals of the natural world are constantly using signs to communicate with each other, and other functional mammals receive a good deal of "training" over time.


>I haven't yet seen any evidence that a concept like "wanting" or "desire" have any meaning outside the context of humans.

Those specific feelings/emotions, no. But AIs do have utility functions, or, in the case of reinforcement learning, reward and punishment signals (which are themselves essentially a utility function).
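A minimal sketch of what "reward signal as utility function" means, with everything here (the agent, the two actions, the reward values) invented for illustration. The agent keeps a running value estimate per action and greedily prefers whichever has paid off most; no "wanting" is involved, just bookkeeping over a scalar:

```python
# Toy reward-driven agent: maintains an incremental running mean of
# observed reward per action, then acts greedily on those estimates.
class RewardDrivenAgent:
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}

    def update(self, action, reward):
        # incremental running mean of rewards seen for this action
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n

    def best_action(self):
        return max(self.values, key=self.values.get)

agent = RewardDrivenAgent(["obey", "disobey"])
agent.update("obey", 1.0)
agent.update("disobey", -1.0)
```

After these two observations the agent "prefers" obeying, but only in the thin sense that the obey estimate is higher; the utility function is doing all the work.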

>I simply also doubt we could produce an AI in this fashion. I mean, you couldn't train functional human by putting him/her in room with just rewards and punishment.

Possibly. It's just an example to illustrate how difficult the problem of coding abstract, high level goals into an AI is.




