A Robot That Harms: When Machines Make Life Or Death Decisions

All Tech Considered: An artist has designed a robot that purposefully defies Isaac Asimov's law that "a robot may not harm humanity," to bring urgency to the discussion about self-driving and other smart technology.

ARI SHAPIRO, HOST:

Here's a riddle for the digital age. Why did the scientist create a robot that hurts humans - answers on this week's All Tech Considered.

(SOUNDBITE OF MUSIC)

SHAPIRO: The science fiction author Isaac Asimov famously created the Laws of Robotics. The first one is that a robot should never harm a human being. But one artist-slash-roboticist has inverted that law to provoke discussion about a future where robots may have the power to make choices about human life. NPR's Laura Sydell reports.

LAURA SYDELL, BYLINE: MIT-trained roboticist and artist Alexander Reben admits his robot has no practical purpose. It's designed to prick human fingers.

ALEXANDER REBEN: It hurts a person obviously in the most minimal way with this needle. And it makes a decision in a way that me as the creator cannot predict. So the idea is that when you put yourself near this robot, it will decide whether or not to hurt and injure you. There's no human in the loop of this decision.

SYDELL: It's not a very elaborate robot, just a robotic arm on a platform. It's smaller than a human arm but shaped a little like the arm of one of those excavators they use for construction. Instead of a shovel on the end, there's a pin.

REBEN: And you put your hand near the robot. And it senses you. Then it goes through an algorithm to decide whether or not it's then going to put the needle through your finger.
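To make the idea concrete, here is a minimal, purely illustrative sketch of what such a sensor-triggered decision loop could look like. Reben has not published his control code; the sensor, threshold, and strike probability below are all assumptions.

```python
import random
import time

# Illustrative sketch only -- Reben's actual control code is not public.
# The proximity sensor and needle actuator here are hypothetical stand-ins.

PROXIMITY_THRESHOLD_CM = 5.0   # assumed distance at which a finger counts as "sensed"
STRIKE_PROBABILITY = 0.5       # assumed; the real decision rule is unknown

def read_proximity_cm():
    """Stand-in for a distance sensor; returns centimeters to the nearest object."""
    return random.uniform(0.0, 20.0)

def fire_needle():
    """Stand-in for actuating the needle on the robotic arm."""
    print("Needle strike.")

def control_loop():
    while True:
        if read_proximity_cm() < PROXIMITY_THRESHOLD_CM:
            # The machine, not a human, decides whether to prick the finger.
            if random.random() < STRIKE_PROBABILITY:
                fire_needle()
            else:
                print("Arm swings past without striking.")
        time.sleep(0.1)

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is only that the decision is delegated to the program: once a hand is detected, no human is in the loop.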

SYDELL: I dared to put my finger beneath the arm. It swings past me several times and then - oh.

The waiting is the hardest part. And in case you were wondering, each needle is sterilized. Reben says the point of this robotic sculpture is to get people to think about a world in which programmed machines like self-driving cars make the decision to hurt a human.

For example, what would happen if a self-driving vehicle must decide whether to drive you into a tree or hit a group of pedestrians?

REBEN: The answer might even be that, well, these machines are going to make decisions so much better than us, and it's not going to be a problem. They're going to be so much more ethical than a human could ever be.

SYDELL: But what about the people who actually get into those cars?

REBEN: If you get into a car, do you have the choice to not be ethical (laughter)?

SYDELL: And people want to have that choice. A recent poll by the MIT Media Lab found that half of the people in the survey would buy a driverless car that put the highest priority on passenger safety, but only 19 percent said they'd buy a car programmed to save the most lives.

The popular science fiction author Isaac Asimov inspired Reben's work. Asimov even made up laws for robots. One of his stories, "I, Robot," was made into a film starring Will Smith. Smith plays an emotional cop chasing a killer robot. Here's a scene where he's speaking with a roboticist played by Bridget Moynahan.

(SOUNDBITE OF FILM, "I, ROBOT")

BRIDGET MOYNAHAN: (As Susan Calvin) A robot cannot harm a human being, the first law of robotics.

WILL SMITH: (As Del Spooner) Yeah, I know. I've seen your commercials. But doesn't the second law state that a robot has to obey any order given by a human being? What if it was given an order to kill?

SYDELL: Or hurt someone. Asimov's laws for robots are often cited by scientists in the field as a kind of inspiration and talking point as we move towards a world of increasingly sophisticated machines. And Asimov's stories often show how no matter how hard humans try to program robots not to harm people, complicated situations arise.

REBEN: The ability to even program these fictional laws into a robot is very difficult. And what they actually mean when you really try to analyze them is quite gray. It's a quite fuzzy area.

SYDELL: For example, should a programmer design a robot that will never hurt a person even if doing that would save another life? Reben says the point of making his robot is to put something in the world now before machines have those powers in, say, self-driving cars.

REBEN: If you see a video of a robot making someone bleed, all of a sudden it taps into this viral nature of things. And now you really have to confront it. It's something different.

SYDELL: And you can go online and watch Reben's robot and ponder the power that our future overlords will have over us someday. Laura Sydell, NPR News.

Copyright © 2016 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.