The importance of having a human be responsible comes down to alignment. We hold a fundamental belief that human beings are comprehensible and have goals that are not completely opaque. That is not true of any piece of software. In the case of deterministic software, you can’t argue with a bug: no matter how many times you tell it that this is not what the company or the user intended, the result will be the same.
With an AI, the problem is more subtle. The AI may understand perfectly well what you’re saying and still not care, because its goals are not your goals, and you can’t tell what its goals are. Having a human be responsible bypasses that. The point is not to punish the AI; the point is to have some hope of stopping it from doing harmful things.