A reader disagrees that self-awareness is the real requirement:
I’m less worried about AI / sentient / self-aware robots than about autonomous killing machines of any kind. Real AI is a real concern, but a lot further off. A machine that can operate without human control and decide to kill or not to kill a target is a lot closer — think autonomous “drone.” Once those get cheap enough, we’re in real trouble.
Could be. In all honesty, I hope we never get an actual resolution to this dispute.