As long as human beings remain capable of randomly killing each other over stupid shit, we are not ready to create autonomous devices with the ability to kill human beings, unless those devices are significantly smarter than we are and we let them do whatever they want.
Then again, if there were autonomous machines that were significantly smarter than human beings and that could kill us, then it wouldn’t matter much what we thought — they could still pretty much do whatever they wanted.
This notion that humanity possesses some sort of “wisdom” that gives us an excuse to kill other people on occasion is pure horseshit; we fabricate rationales to do whatever we want. If we want to kill people, we make up reasons: those people were the wrong religion, those people did this or that bad thing, and so on. So the notion that a robot or device wouldn’t possess “human wisdom” actually seems like a selling point to me. “Human wisdom” pretty much results in dead people by the thousands.
Compassion would be nice, though. A robot that shows compassion on a consistent basis would be quite bad-ass in a change-the-world kind of way.
I suspect people would destroy it, because it would make them look like shitheels.
This is the world we live in: one where we explore how to make robots that kill people, but would likely destroy a robo-Christ because we can’t handle how high it raises the bar.