Most scientists think a lot about ethics. We adhere to, and constantly work to improve, guidelines and codes of good conduct in our dealings with people and other animals.
And now, according to a new book edited by philosophers Patrick Lin and Keith Abney, and computer scientist George A. Bekey, more of us had better think about the ethics of dealing with robots, too. Robots play increasingly significant roles in medicine, recreation, and the military, among other realms.
Reviewing Robot Ethics in today's issue of the journal Nature, Braden Allenby gives an example of an ethical dilemma involving robot technology:
"Who is responsible if an autonomous military robot kills a group of civilians? The manufacturer, the commander, the operator, or the robot itself?"
Can a machine be moral? Should a robot be held to an ethical standard? Wouldn't any robot's moral code precisely duplicate its human programmer's? Or would it, given that some robots can learn?
—You can connect with Barbara on Twitter.