When Robots Can Kill, It's Unclear Who Will Be To Blame


The Pentagon's research arm has been involved in the development of new technology, from the birth of the Internet to self-driving cars, to the design and deployment of robots. But some researchers worry about the Pentagon's priorities. They're concerned it may be heading toward the creation of autonomous robots designed to kill. NPR's Steve Henn reports.

STEVE HENN, BYLINE: These days, the Defense Advanced Research Projects Agency, or DARPA, is probably best known for its robotics contests.

UNIDENTIFIED MAN: And good morning, everyone, and welcome to the 2013 DARPA Robotics Challenge...

HENN: Ten years ago, a similar contest helped spur the creation of self-driving cars. DARPA's latest robotics challenge was inspired by the Fukushima nuclear disaster. Fear of radiation poisoning kept utility workers at the Fukushima plant from shutting down and cooling the reactors more quickly. Ultimately, three of the plant's six reactors melted down.

BRIAN GERKEY: And there is good evidence that if we had been able to send in some kind of robot and had that robot do relatively simple things, simple manual tasks like opening valves, opening doors, getting to control panels, that a lot of the following disaster could have been averted.

HENN: So Brian Gerkey at the Open Source Robotics Foundation says DARPA's latest challenge is to build that robot, one that can open doors, move debris, turn a valve, even climb into and drive a conventional car.

UNIDENTIFIED MAN: (Unintelligible) out of the driver's seat (unintelligible) significant strength and dexterity - that really is a challenge for the robots.

HENN: This past December 16, teams of roboticists from all over the world converged on a Miami speedway to compete. And while this may seem like an entirely altruistic enterprise, designing a robot for disaster response, it's not.

PETER SINGER: At the end of the day people need to remember what the D in DARPA stands for. It stands for defense.

HENN: Peter Singer is a senior fellow at the Brookings Institution and author of "Wired for War: The Robotics Revolution and Conflict in the 21st Century."

SINGER: Too often scientists try and kid themselves, act like, well, just because I'm working on this system that's not directly a weapons system, I have nothing to do with war. I remember speaking with a scientist who was funded by a Navy contract and he was working on a robot that would play baseball.

And he said, I don't have anything to do with war. I was like, come on, you know, you think the Navy is funding this because they, you know, want a better Naval Academy baseball team?

HENN: Or is it that tracking and intercepting a fly ball is analogous to tracking a missile? It's actually hard to find a roboticist who hasn't taken some kind of military funding. Illah Nourbakhsh is one of the few. And while Nourbakhsh acknowledges that good could come out of DARPA's push to build a search and rescue robot, he also sees an obvious dual use.

If you set out to build a robot that can drive a regular car, climb a ladder and operate a jackhammer...

ILLAH NOURBAKHSH: That means that that robot can manipulate an AK-47. That means that that robot can manipulate the controls of all the conventional military machines that we have as well.

HENN: Nourbakhsh believes DARPA is pushing roboticists to build machines that can make complex decisions quickly and independently.

NOURBAKHSH: And it's a really interesting boundary to cross.

HENN: Imagine, he says, using...

NOURBAKHSH: Image recognition, where the drone is flying in the air looking down, recognizing people's faces, matching them against a database of known faces on a kill list and then deciding on its own, autonomously, whether it's going to shoot to kill or not.

HENN: Already nations around the world are experimenting with loitering munitions. These are robots that hover over an area until they independently recognize the target they were assigned to destroy. But if a robot like that makes a mistake, who would be responsible? The programmer? The manufacturer? The military commander who launched it on its mission?

RYAN CALO: It forces us to confront whether we really control machines.

HENN: Ryan Calo is a law professor at the University of Washington. He says these kinds of tensions aren't just going to play out in the military, but are sure to crop up whenever we're tempted to allow robots to make complex decisions on their own. Steve Henn, NPR News, Silicon Valley.

Copyright © 2014 NPR. All rights reserved. Visit our website terms of use and permissions pages for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.