You’re missing a couple of use cases there. Particularly the ones where these weapons, which can’t refuse an order like a meat-sack soldier would, get pointed at _you_. I’m not saying “don’t build weapons”, btw. By all means do. I’m saying that as far as risk is concerned, this is by far the riskiest direction imaginable.
If we’re making them smart enough to decide whether to shoot on their own, why wouldn’t we make them smart enough to refuse an unethical order (e.g. “that’s like 80% civilians there, I’m not shooting a rocket at that target”)?
Obviously they won’t, because the whole problem is that the humans giving the orders don’t want the machine questioning them. But since it’s only a matter of time before the bad guys have these weapons anyway, I’m inclined to say it’s better that the good guys (at least from my perspective) have them first.