AUDIE CORNISH, HOST:
OK, provocative question for you that's come up in sci-fi and real life. Should the world allow the creation of killer robots capable of acting on their own? Representatives of more than 80 countries gathered in Geneva this month to think through that question, and they emerged with a recommendation.
The key U.N. body that sets norms for weapons of war should put killer robots on its agenda. NPR's Eyder Peralta reports on the contentious debate.
EYDER PERALTA, BYLINE: We start this story in a decidedly less-scary place.
SEAN LUKE: All right, so here's our back room.
PERALTA: There's little baby robots.
LUKE: There's little baby robots everywhere.
PERALTA: That's Sean Luke. He's showing me around the autonomous robotics lab at George Mason University in Virginia. Pretty much everywhere you look, there are robots and robot parts. And then there's this little guy.
LUKE: Now, we call these the FlockBots.
PERALTA: Luke and his team hope that someday, these robots will work like ants in teams of hundreds to, for example, build houses or help search for survivors after a disaster. It doesn't take much imagination to conjure a future when a swarm of those robots is used on a battlefield.
Luke says that because robotics and computer science are such new fields, there are few ethical standards to guide such lethal uses.
LUKE: One of the challenges is that we don't know exactly what ethical standards we need. And because the field is moving faster than we thought it was, we're going to need them sooner than we thought we did.
PERALTA: It was that urgency that led Human Rights Watch and Harvard Law School's International Human Rights Clinic to issue a report calling for a complete ban on autonomous killer robots. Via Skype, Harvard's Bonnie Docherty, who wrote the report, says the time to consider that ban is now because it could be a matter of years, not decades, before humanity crosses what she calls a moral threshold.
BONNIE DOCHERTY: They've been called the third revolution of warfare after gunpowder and nuclear weapons. So they would completely alter the way wars are fought in ways we probably can't even imagine.
PERALTA: She says killer robots could start an arms race and also obscure who's held responsible for war crimes. But above all, she says, there's also the issue of basic human rights.
DOCHERTY: It would undermine human dignity to be killed by a machine that can't understand the value of human life.
PAUL SCHARRE: What we're talking about is Cylons and Terminators. What we're talking about is something out of science fiction. Robots run amok on their own - I don't think anybody wants that.
PERALTA: That's Paul Scharre who runs a program on ethical autonomy at the Center for a New American Security. He says reality is actually much more complicated than science fiction.
(SOUNDBITE OF COMPUTING TONES)
PERALTA: What you're hearing is a promotional video from Lockheed Martin, which is developing a long-range anti-ship missile for the U.S. military. The LRASM, as it's known, can lose contact with its human minders, yet still scour the sea with its sensors, pick a target and slam into it.
(SOUNDBITE OF EXPLOSION)
SCHARRE: It sounds simple to say things like machines should not make life or death decisions. But what does it mean to make a decision? Is my Roomba making a decision when it bounces off the couch and wanders around? Like, does a land mine make a decision? Does a torpedo make a decision?
PERALTA: Scharre helped write U.S. policy on killer robots, and he likes where they ended up. The U.S. requires a high-ranking defense official to approve unusual uses of autonomous technology. And it also calls for those systems to always keep, quote, "appropriate levels of human judgment over the use of force." Proponents of a ban say that policy leaves too much wiggle room.
They advocate that all military weapons maintain meaningful human control. Georgia Tech's Ron Arkin, one of the country's leading robo-ethicists, thinks that argument is important. But he says we should not overlook the potential benefits of killer robots.
RON ARKIN: They can assume far more risk on behalf of a noncombatant than any human being in their right mind would. They can potentially have better sensors to cut through the fog of war. They can be designed without emotions, such as anger, fear, frustration, which causes human beings, unfortunately, to err.
PERALTA: Arkin says robots could become a new kind of precision-guided weapon. They could be sent into an urban environment, for example, to take out snipers. He says that's probably far into the future. But what he knows right now is that too many innocent people are still being killed in war.
ARKIN: We need to do something about that, and technology affords one way to do that. And we should not let science fiction cloud our judgment in terms of moving forward.
PERALTA: Arkin says one day, killer robots could be so precise that it might be inhumane not to use them. The U.N. group is set to meet in December to decide whether to formally start developing new international law on the issue. Eyder Peralta, NPR News.
NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.