Can Unmanned Robots Follow The Laws Of War?
NEAL CONAN, host:
This is TALK OF THE NATION. I'm Neal Conan in Washington.
A drone flying above the Afghanistan-Pakistan border knows exactly where it is, and if it's been programmed properly, it also knows that the building that group of armed men just ran into is a school. And even if its human operator wants to fire a missile, the drone can refuse.
As technology improves, robots will take bigger parts on the battlefield and make more decisions about what's a legitimate target and what isn't, which raises all kinds of questions. How good is good enough? Who's responsible for inevitable mistakes? Can you program robots to follow the rules of war? Should there be a law to ban machines that can pull the trigger on their own?
Later in the program, caterpillars, spider webs, sunspots, what's your guide to this winter's weather? You can email us now, firstname.lastname@example.org.
But first, robotics, artificial intelligence, ethics and the future of combat. What happens when you remove humans from the battlefield? 800-989-8255 is the phone number. The email address, email@example.com. You can also join the conversation on our website. That's at npr.org. Click on TALK OF THE NATION.
We begin with Patrick Lin, who directs the Ethics and Emerging Sciences Group at California Polytechnic State University, where he's also an assistant professor of philosophy, and he joins us from the studios of KVEC in San Luis Obispo. And nice to have you with us today.
Mr. PATRICK LIN (Director, Ethics and Emerging Sciences Group, California Polytechnic State University): Hi, Neal. How are you?
CONAN: I'm well, thanks. People might hear what I just said and think we're talking about "The Terminator" here.
(Soundbite of laughter)
Mr. LIN: A lot of people do. A lot of people do. And it's natural to link the things that are going on in military robotics with "The Terminator" since "The Terminator" is so huge in the public consciousness.
CONAN: I suspect we're a few years away from that sort of technology, though.
Mr. LIN: Hopefully, yeah, yeah. I think we're several years away. But, you know, like I said, that's one of the first things that people think, and it has a way of coloring the entire debate.
CONAN: Well, as you think about these kinds of things, tell us more about the kinds of inhibitions that can be programmed into, for example, the drones that are being used now, and obviously, these are the predecessors of more advanced machines that we're going to be seeing in the future.
Mr. LIN: Right, right. So there's a trend towards making robots more human. So I mean, we want robots to be human replacements to a large extent.
But there's also a move to make robots less than human, so for instance, robots that don't feel fear or robots that don't have hate. These robots may operate even better than humans can in certain situations, such as on a battlefield during the fog of war. It might eliminate wartime abuses and friendly-fire deaths.
CONAN: And it also might eliminate things like when you're in hot pursuit of something, and people tend to get emotional and make mistakes.
Mr. LIN: That's right. So robots are not just human replacements, but they can be more than humans. They could be more objective. They could be fair. They don't have the same kinds of biases that we do.
CONAN: And this is - I'm curious: How are these kinds of decisions different from something like a mine? A mine is absolutely objective. You step on it, and it goes boom.
Mr. LIN: Right, right. So some people would call a landmine an autonomous object. It's autonomous in that it doesn't have to check back with the human operators to do what it does.
But I don't know. My opinion is that that stretches the notion of autonomy in robots. We don't mean autonomy that way. We don't mean things such as toasters to be autonomous machines.
But autonomy involves more of a capability to think and to make decisions on their own. So a landmine, I mean, yeah, in some sense you could say that it's a robot. It can sense, think and act. So it could sense whether there's pressure on the plate or someone trips a wire. It could think. It could process information in a very mechanical way such that if something trips the wire, then it ought to explode. And it acts on the external world by doing that. It blows up.
But I think most people understand robot to mean something more than that, not just a dumb machine that can mechanically react but a machine that can actually process information in a complex way and make some decisions on its own.
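The sense-think-act distinction Lin draws here can be made concrete. Below is a minimal, purely illustrative sketch, not the logic of any real system: the landmine's "thinking" is a single hardwired if-then rule with one possible action, while even a modest autonomous robot weighs several inputs and can decline to act. All function names and thresholds are hypothetical.

```python
# Illustrative sketch of the sense-think-act distinction discussed above.
# Neither function models any real weapon; names and values are hypothetical.

def landmine_think(pressure_on_plate: bool) -> str:
    """A 'dumb' machine: one hardwired if-then rule, one possible action."""
    return "detonate" if pressure_on_plate else "wait"

def robot_think(sensed: dict) -> str:
    """A toy autonomous robot: weighs several inputs and can refuse to act."""
    if sensed.get("target_confidence", 0.0) < 0.9:
        return "hold"                        # uncertain, so do nothing
    if sensed.get("near_protected_site"):
        return "refuse"                      # a programmed inhibition
    return "request_human_confirmation"      # still defers the final call
```

The point of the contrast: the first function's behavior is exhaustively described by one rule, while the second has multiple decision paths and, notably, can override its operator.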
CONAN: And so make distinctions between friend and enemy, make distinctions between wounded enemy and non-wounded enemy.
Mr. LIN: Very, very difficult. So this is a big reason why some people think that autonomous robots in the military are a very bad idea because we just don't have the technical capability to do that anytime in the foreseeable future.
We won't have, or we don't have, the technical capability of programming a computer that can recognize a person holding a gun versus a child holding an ice cream cone, for instance, much less discriminate between a civilian and an insurgent, who might look exactly like the rest of the civilian population. Indeed, very tough.
CONAN: Human beings have problems with that, as we've seen.
Mr. LIN: Exactly, exactly.
CONAN: And so there are situations, though, that you could see - on the Korean border, for example, between North and South, I understand there are some things that would be considered automated sentries.
Mr. LIN: Right, right. So in the DMZ in South Korea and also on the borders of Israel, they have semi- and fully autonomous robots that will challenge trespassers. So if you don't punch in your PIN or give the right identification within a certain amount of time, it could, if it was set on full auto mode, fire upon you.
CONAN: And there were more primitive devices like that on the inner German border, on the East German side, and they were bitterly criticized.
Mr. LIN: Right, right. And, you know, these robots have been around for some time now, since the '70s and '80s. And a lot of the press coverage is focused on the U.S. robots. But I think it's important to recognize that over 48 countries in the world today have semi-autonomous robots, including China, Russia and even Iran.
CONAN: Now, as you think about these inhibitions that have been built into these robots, the drones, they're not autonomous robots in the full sense. They don't decide whether to fire or not, but they can say: wait a minute, I can see that. My programming says that's a mosque, and I'm not supposed to fire on mosques.
Mr. LIN: Well, I think that's the hope, that robots would be able to detect things such as graveyards, wedding parties, mosques, schools. But, you know, many people aren't convinced that that technology is quite there yet.
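The inhibition Conan and Lin describe, a drone recognizing that a target sits on a no-strike list, amounts at its simplest to a geospatial lookup. Here is a hedged sketch of what such a check might look like, assuming a pre-loaded database of protected coordinates; every name, coordinate and radius below is hypothetical, and real targeting systems are of course far more involved.

```python
import math

# Hypothetical no-strike database: (latitude, longitude, radius in meters, label)
NO_STRIKE = [
    (34.5167, 69.1833, 500.0, "school"),
    (34.5200, 69.1900, 300.0, "mosque"),
]

def _distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance via an equirectangular projection
    (adequate at the small distances that matter here)."""
    r = 6371000.0  # Earth radius in meters
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def strike_permitted(lat, lon):
    """Refuse any target inside a protected radius, and report why."""
    for slat, slon, radius, label in NO_STRIKE:
        if _distance_m(lat, lon, slat, slon) <= radius:
            return False, f"target within {radius:.0f} m of protected {label}"
    return True, "no protected site in range"
```

The sketch also illustrates why, as Lin says, many people aren't convinced the technology is there yet: a lookup like this is only as good as the database behind it, and it says nothing about recognizing a protected site the database doesn't list.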
CONAN: Let's see if we can get some callers in on the conversation. We're talking about robot ethics and the future of warfare with Patrick Lin, director of the Nanoethics Group and a member of the editorial board of the journal Nanoethics. You're listening to TALK OF THE NATION. Call us, 800-989-8255. Email us, firstname.lastname@example.org. And we'll go to Bob(ph), Bob with us from Aberdeen in South Dakota.
BOB (Caller): Hello, gentlemen. Robots are going to be controlled by, quote, "artificial intelligence." And they're going to be limited. We're always going to need the human factor because robots are only going to be able to be programmed to respond based on a simple rule concept, and that is: if this, then that. In other words, if this happens, then this is your response.
And as a dynamic situation takes place on a battlefield, their responses are going to be canned. And now, they can be plugged with millions of these responses, but it seems to me we're always going to need the human input, and we're not going to be able to make last-minute adjustments to these rules.
And I'm making a point there, and I've got a question: Is there going to be some control over the ethics of these rules? Or would they change from battlefield to battlefield?
CONAN: Patrick Lin?
Mr. LIN: Well, I think the first thing you said is true, at least in the near term: look, we just aren't able to program all the different possible responses. But also, there are robots out there that seem to operate fully autonomously, such as, you know, the Mars rovers, for instance, where communication between Earth and Mars is difficult, if not impossible, so that we need to give machines autonomy.
And there are a couple of other examples. For instance, on Navy battleships, there's a system called the CIWS, the Close-In Weapon System, which can be turned on to full auto mode. This system, it's also called the R2-D2 system, looks a little bit like R2-D2.
CONAN: It does, yeah.
Mr. LIN: It could identify, track and shoot down incoming missiles on its own. So there are limited areas in which robots have a fully auto mode, and there are good reasons. There are some strong pressures driving robots in the direction where we want to give them more autonomy, even though, you're right, the party line is that we ought to have a man in the loop at all times.
I think the reality, though, is that people are often slow to react, and given the increasing tempo of warfare, it might make sense for machines or AI to handle some things, such as locating or tracking an inbound missile that's closing rapidly. You know, that could be a split-second decision that saves hundreds of lives. So it may make sense for a robot to do that job.
CONAN: Bob, thanks very much for the call.
BOB: You're welcome, thank you.
CONAN: And it also raises a question: If more and more robots are used to replace human beings on the battlefield, whether they're remotely controlled by human operators or not, does it make warfare easier? Is the decision to go to war easier if you are risking fewer of your troops' lives?
Mr. LIN: That's a big criticism that's been leveled at military robotics. So you're right. Using robots on the battlefield means that there'll be fewer of our soldiers coming back in body bags. So this lowers the political costs of having war, and we can expect that there'll be less of a backlash.
And people have worried that, well, is this going to lead towards picking more fights or engaging in more wars? Because we like to have war as our last resort. It's terrible. It's horrible. But if the costs of going to war are lowered, does that encourage our leaders to more quickly pick fights? So yes, you're right.
CONAN: And how active are these discussions? How much money, for example, is the Defense Department spending on these kinds of programs?
Mr. LIN: I don't know how much money they're spending, but I can tell you that there are many folks in the government and in the military - I work with many of them at, say, the Naval Academy - who are interested in the ethics. So I think they're taking it seriously, but like any other field, I think we can't expect every single person to be - to wear all hats, right. So I mean, you have people who worry about issues like the kinds we're worrying about and other people who are just doing their jobs and making better technology.
CONAN: We're talking about robots in combat and how they are redefining ethics on the battlefield. We want to hear from you. What happens when you remove humans from the loop? 800-989-8255. Email us, email@example.com. Stay with us. It's the TALK OF THE NATION from NPR News.
(Soundbite of music)
CONAN: This is TALK OF THE NATION. I'm Neal Conan in Washington.
Whenever the United States goes to war, there's a debate not only about the decision to fight but about how. The discussion is taking on completely new aspects as more robots play greater roles in combat. Scientists are hard at work designing machines that can not only fight but make ethical decisions as they do so.
Patrick Lin is with us to talk about robots and ethical warfare. He directs the Ethics and Emerging Sciences Group at California Polytechnic State University. And joining us now is Joanne Mariner, director of terrorism and counterterrorism programs at Human Rights Watch, and she joins us from our New York bureau. Nice to have you with us today.
Ms. JOANNE MARINER (Director of Terrorism and Counterterrorism Programs, Human Rights Watch): Thank you.
CONAN: And I know you're just back from Berlin, where you attended a conference on this very topic.
Ms. MARINER: Where these issues were discussed at great length.
CONAN: And did it come to any conclusion?
Ms. MARINER: I think the conclusion is there's a need for the military to be grappling more explicitly with these ethical issues. And the concern is that the technology is driving developments now, and the ethical discussions haven't caught up with the technology.
And I think the fear is that the technology is going forward toward these autonomous and semi-autonomous systems, and the military hasn't really worked out the legal rules and how those rules are going to apply to these semi-autonomous and autonomous uses.
CONAN: Were there some examples of specific kinds of systems? And again, Patrick Lin tells us more than 40 countries are dealing with this. It's not just the U.S.
Ms. MARINER: Well, there are certainly examples in which armed military vehicles are being used, and there are concerns that the basic rules of the laws of war, the fundamental rules of civilian immunity and distinction, are not being properly applied.
The difficulty is that the areas where drones, and it's mostly unmanned aerial vehicles at this point, are being used are places like the border areas of Pakistan and remote areas of Yemen, where human rights advocates don't really have access to do research. So there are a lot of allegations that have been made, but the hard facts really aren't there. The empirical facts aren't there.
And so one of the main things activists have been calling on the military for is transparency, because one of the interesting things about these unmanned vehicles is that they have very developed video technology. So there are records kept of what these vehicles are doing. They have, you know, pre-targeting and post-targeting video that could be looked at to see whether civilians have been targeted or whether civilians are collateral casualties of these operations.
CONAN: There are also occasions when informants of one type or another, not just technical ones, go in and try to verify what happened.
Ms. MARINER: Yeah, I mean, the problem is these areas are so remote and so dangerous that human rights researchers really haven't had access to them. And there are a lot of allegations made, but the concern is those allegations are tainted by bias. People come out with a point of view to put forward, and it's not clear what the underlying facts are.
CONAN: There is better transparency, though, in other situations within Afghanistan and within the borders of Iraq, where they have also been used.
Ms. MARINER: Yes, they're being used by the military in Afghanistan, both the U.S. and U.K. military, and they were used in Gaza by the Israeli military. So there, human rights groups in fact did have very good access, and we released a report criticizing to some extent Israel's use of drones and the lack of compliance with fundamental rules of war.
CONAN: Let's see if we can get a caller in on the conversation. Let's go to Chris(ph), Chris calling from Oakland.
CHRIS (Caller): Hi, I'd like your guest to comment about the possibility of international treaties being made that govern the behavior of robots as they increase their role and maybe will ultimately come to dominate warfare and how that might work. I mean, it seems like it would be a terrifically complicated thing to do, and then even if some kind of treaty is passed, they'll treat it just like they do treaties now, where they violate them if it's in their interest.
CONAN: Well, the United States is not party, for example, to the landmine treaty, if you consider that a...
(Soundbite of phone hanging up)
CONAN: ...rudimentary example of what we're talking about. I think the caller just - Chris just hung up on us.
But anyway, Joanne Mariner, I think this was under discussion there in Berlin.
Ms. MARINER: Well, I think it's actually very important to underscore that unmanned ground and aerial vehicles are covered by the same rules of war as other weapons systems. So this doesn't mean we're going to see a robot tried for war crimes. The point is that the personnel, the military personnel and CIA personnel who operate those vehicles, are no less legally responsible than soldiers operating other kinds of weapons systems. So the fundamental rules still apply.
I think the complexity is, particularly when you're talking about semi-autonomous and autonomous vehicles, who is the responsible party within the human personnel operating them? Is it the programmer who, you know, didn't come up with a sophisticated enough program? Is it the military commander who put those vehicles into operation? So I think, you know, that needs to be sorted out, or that is a more complex area. But the rules themselves are clear.
CONAN: Patrick Lin, I wondered if you had thoughts on that.
Mr. LIN: Well, I believe that the conference that Joanne was talking about, that's the one run by Noel Sharkey, Jurgen Altmann, Peter Asaro, right? Is that right?
Ms. MARINER: Yes.
Mr. LIN: Okay, yeah. So I understand that they are pushing toward some international treaty. But I would agree with you, Neal, and the caller, which is that we have international treaties on the books that seem to be sidestepped, and oftentimes the U.S. just won't sign on to a treaty such as the Ottawa Treaty, which you mentioned, Neal.
CONAN: The landmine treaty, yeah.
Mr. LIN: The landmine treaty. So this is a problem. But at least, I think it would be a good step in the right direction, that there's some agreement internationally of what we should or should not do with robots.
CONAN: Is it possible to program inhibitions, and we're talking about way down the road, I would think, that would prevent robots from taking actions unless they were absolutely sure?
Mr. LIN: Gosh, that's a tough question. That's a really tough question. There's been a lot of work done recently in the area of programming ethics into robots. So for instance, Colin Allen and Wendell Wallach have a book out called "Moral Machines." And they look at various approaches to programming robots, whether top-down or bottom-up or some kind of hybrid. And it doesn't seem that there's any one really good approach to use. There are pros and cons to many different approaches.
But also, as one of your other callers pointed out, these robots are, you know, they're guided by these if-then commands. And is it even possible to anticipate all the different scenarios? Or should we allow a robot to learn as it goes, in which case we really can't predict what it'll do?
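The top-down approach Lin cites from Allen and Wallach's "Moral Machines" can be illustrated in miniature: explicit rules sit above the planner and veto any proposed action that violates them, whatever process produced the proposal. (A bottom-up system, by contrast, would learn its behavior from examples, which is exactly why Lin says we can't predict what it will do.) This is a hedged sketch under those stated assumptions; the action fields, thresholds and rules are all invented for illustration.

```python
# A toy "ethical governor" in the top-down style Allen and Wallach describe:
# explicit constraints veto proposed actions, regardless of how the planner
# chose them. All fields and thresholds below are hypothetical.

CONSTRAINTS = [
    # Never fire on a target identified as civilian.
    lambda a: not (a["type"] == "fire" and a.get("target_is_civilian")),
    # Never fire unless target identification is highly confident.
    lambda a: not (a["type"] == "fire" and a.get("confidence", 0.0) < 0.9),
]

def governor(proposed_action: dict) -> dict:
    """Run the proposed action through every constraint; any failure
    downgrades it to a harmless 'hold'."""
    if all(rule(proposed_action) for rule in CONSTRAINTS):
        return proposed_action
    return {"type": "hold", "reason": "constraint violated"}
```

The design choice worth noticing is that the governor never needs to know how the action was generated, which is what makes top-down constraints attractive even over an unpredictable learning system underneath.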
CONAN: Let's see if we can get another caller in, and let's go to Michael(ph), Michael with us from Tallahassee.
MICHAEL (Caller): Hi, Neal, big fan. Thanks a lot for having me on.
CONAN: Thanks for the kind word.
MICHAEL: I was interested by your question. At the top of the hour, you said: What happens when we remove humans from the battlefield? A couple things: One is, I think right now we're looking at what happens when you remove humans from one side of the battlefield. That's been a big issue, obviously, you all were talking about Pakistan. But also, I thought, you know, what if we really could get to a place where we had no humans on the battlefield, and we fought these sort of proxy wars, you know, with just robots, you know, as if it were just like a video game.
You know, I know that Fijian islanders, I'm pretty sure it's them, have these games where, instead of war, they play this sort of football game, almost, you know, where some people will die, but you try and keep the loss of life to a minimum. But the winner is still recognized and so forth. So I just thought that was interesting.
CONAN: It is sort of ritualized warfare and maybe have it on the dark side of the moon...
(Soundbite of laughter)
CONAN: ...so that you don't have...
MICHAEL: That would be nice, right.
CONAN: Environmental damage. Patrick Lin, it seems like we're pretty far away from that, too.
Mr. LIN: Right. We are far away from that, and also it's not clear that the war is really over, even if you're able to have this civilized agreement that we're just going to have robot-on-robot violence.
So it could be that if one side is defeated, and the losing side still doesn't want to give up, well, then what? Do the robots turn on humans? You know, do we keep fighting until the last man standing, or what?
So it's not clear that even if we replace everyone on the battlefield with a robot that all our problems can be solved, that we just have robot-on-robot war. I'm not optimistic that people are going to give up that easily.
CONAN: And I'm not sure that you would ever get anybody who's quite even on that front. But Joanne Mariner, you'd think that in a context where you had two technologically sophisticated combatants, the drive to push these kinds of developments would accelerate even past where it is now.
Ms. MARINER: Yeah, and I think that raises an issue that is on people's minds and I think on the military's mind, which is as these technologies develop, they're going to disseminate, and it'll be also non-state actors that gain access to robotic technology.
And, you know, what are the rules? I mean, I think there is a real concern that, obviously, there are non-state actors, terrorist groups, that in no way abide by the laws of war. And I think Americans are fairly sanguine, or they find it fairly easy to ignore, that the U.S. military has drones flying over Afghanistan and the CIA has drones flying over Pakistan. But the idea that, you know, more worrying groups would get access to this technology, I think, you know, that should be kept in mind as we're developing the rules that apply.
CONAN: Michael, thanks very much for the call.
MICHAEL: Thanks a lot.
CONAN: Bye-bye. Let's go next to Kate(ph), Kate with us from Fort Leavenworth in Kansas.
KATE (Caller): Hi. Thanks for taking my call.
KATE: I'm personally interested in this topic. I'm an army wife of an active-duty major, and I was an army nurse for 10 years and worked at Walter Reed Army Medical Center, so I definitely saw firsthand the effects of war on our young men and women. And I'm a big fan of anything that is serviceable and can lower our casualty count. But my question, more specifically, is: are there any robots being used now that employ non-lethal tactics or non-lethal weaponry?
CONAN: Reconnaissance is a big usage of the drones. They were first developed for reconnaissance purposes. Right, Patrick Lin?
Mr. LIN: Right, right. And the Israelis were among the first pioneers of that technology as well. So, yeah, there are robots out there that are armed and dangerous, but there are also robots out there that have more of a humanitarian mission.
So, for instance, there is a robot called Bear whose mission is just to go out onto the battlefield and retrieve wounded soldiers. The PackBots, made famous by iRobot, their job is to go out on the roads and clear the path of IEDs. So it's not that all these robots are trained to kill people. Many of them play a direct role in saving lives.
CONAN: And also, some of them - the aircraft, unmanned aircraft are used to spoof electronic systems or send out signals that confuse people.
Mr. LIN: That's right.
CONAN: All right. Kate, does that answer your question?
KATE: It does. Thanks so much. I appreciate it.
CONAN: Thanks very much for the call. We're talking with Patrick Lin, who's a member of the editorial board of the journal NanoEthics and an assistant professor of philosophy at Cal Poly in San Luis Obispo, studying the future of artificial intelligence and warfare. Also with us, Joanne Mariner, director of terrorism and counterterrorism programs at Human Rights Watch, who wrote a recent article, "When Machines Kill."
And you're listening to TALK OF THE NATION from NPR News.
And let's get Ben(ph) on the line. Ben's with us from Birmingham.
BEN (Caller): Hi. How you guys doing? Wanted to touch on something you said earlier on the program. You said, you know, by putting robots on the battlefield, you would take the emotion out of it. Now, as an officer in the military, I'm all about technology. It helps us do our job, you know, that much better. But by taking emotion off the battlefield, you took the reason for going to war away in the first place.
Warfare is one of the most emotional things that you're going to do, and by taking the humans from that, you're making it that much easier to go to war, and all the reasons that you're going to war become void at that point.
CONAN: Patrick Lin?
Mr. LIN: Well, let me think about this one. I don't know that it's so important that every link in the military supply chain must feel the urgency and emotions related to war. I mean, it seems that that's the job of the political leaders. There are many times when we don't want our soldiers to be that fired up. For instance, it may cause a loss of judgment and friendly-fire deaths or wartime abuses.
So emotions in war are a tricky thing. On one hand, war is emotional. It's horrible, and it's terrible. And by creating these robots, we seem to be sanitizing war, and it's unclear whether that's good or bad. Maybe war ought to be terrible. I think it was General Robert E. Lee during the Civil War who said that it is well that war is so terrible, otherwise we should grow too fond of it. So this is the same sentiment I think Neal and the caller just expressed.
CONAN: Joanne Mariner, I wonder if you had a thought.
Ms. MARINER: Well, I think, you know, there are negative emotions, the desire for revenge, hatred of the enemy, that can lead to wartime abuses. And we, you know, we have some trials or courts-martial going on right now that reflect that. So I think the hope is that robots are not driven by fear. They're not driven by hatred. They're simply programmed. And the question is, can they be programmed to discriminate and to follow the laws of war?
BEN: Yeah. All that makes sense, and that's where you come into training, and that's where I, as an officer, you know, train my troops so that we do the correct things. But it's the emotions and it's the ethics on the battlefield that create the objective, the reasons that you're going to war. I'm trying to find the right words to express how I feel, me and my colleagues, you know, being in the military as the ones that actually are going to prosecute the combat.
But it's what keeps you on the straight and narrow in the thick of battle. Even though you can get confused and you lose focus and, yeah, you do experience that rage, it's also the good thing that makes sure that you're on the right side of the issue. And for...
Ms. MARINER: But I think perhaps what the caller is pointing to is a fear that depersonalizing warfare and depersonalizing the enemy will allow more abuses to occur.
BEN: That's exactly what I'm trying to say.
Ms. MARINER: And I think that's a valid concern. I mean, the concern has been raised that drone operators who are thousands of miles from the battlefield can just kind of adopt a PlayStation mentality where they don't see the reality of what they're doing. I mean, I think, empirically, that hasn't been shown yet, but it's a valid concern.
CONAN: Patrick Lin?
Mr. LIN: Right. Actually, the scenario can go the other way. I've heard reports that these drone operators sitting in trailers in Las Vegas are actually more attached, more emotionally invested, because they're tracking these targets for potentially days before they fire upon them. And I've heard some reports where, after they had to make an attack decision, after they pulled the trigger, it affected them. It affected them hard.
CONAN: Ben, thanks very much for the call. Interesting question. Good luck to you.
BEN: Thank you.
CONAN: Bye-bye. We're also going to have to say thank you to Patrick Lin, who joined us from a studio in San Luis Obispo. He's director of the Nanoethics Group and works at Cal Poly there in San Luis Obispo. Appreciate your time today, sir.
Mr. LIN: It's a pleasure.
CONAN: And Joanne Mariner joined us from our bureau in New York. She's director of terrorism and counterterrorism programs at Human Rights Watch. Appreciate your joining us today, too.
Ms. MARINER: Thank you.