San Francisco considers allowing police robots to use lethal force

From sci-fi to the streets: the San Francisco Board of Supervisors considers a policy proposal on whether the San Francisco Police Department can use robots to deploy deadly force.

ARI SHAPIRO, HOST:

Lethal robots will be on the agenda tomorrow at a meeting of the San Francisco Board of Supervisors. The question is whether the city's police department can use robots to kill people.

Professor Ryan Calo studies robotics and law at the University of Washington. Welcome back to ALL THINGS CONSIDERED.

RYAN CALO: Glad to be here.

SHAPIRO: The Dallas Police Department used a robot to kill a suspect back in 2016, so this is not unheard of. What kinds of scenarios are we talking about here?

CALO: Typically, the way that the police use robots is either to gain situational awareness using a drone in the air, to investigate a suspicious object that could be a bomb, or to negotiate with a suspect in a hostage situation. There have been almost no instances of any violence at all through robots except for Dallas. But there have been multiple instances when people have shot robots. People in hostage situations have unloaded shotguns on police robots before, and robots have been shot out of the air. And so the history of violence with robots goes human to robot, and the idea that the robots would be able to fire back is disturbing.

SHAPIRO: And so do you think San Francisco having this debate is letting science fiction leach into reality, or is it a helpful setting of rules before we get to a place where people are confronted with a situation there may not be rules for?

CALO: I do think that police departments that have robots should have policies in place talking about how the robots can and can't be used. I think that's healthy. Often the way that police shootings are justified or attempted to be justified is by reference to the officer's safety. So the dream with robotics is that if nonlethal force, for example, could incapacitate a suspect without the officer ever feeling threatened, then there would be less reason for police to use force. But in actual fact, it is very difficult to gain enough understanding of a situation from a distance with a robot to know when to use force and when not to.

SHAPIRO: So you say it would be good for police departments to have standards to follow. Let's talk about what that standard should be. In San Francisco, the initial policy draft prohibited use of robots to deploy deadly force. Now, the current draft policy says, quote, "robots will only be used as a deadly force option when risk of loss of life to members of the public or officers are imminent and outweigh any other force option available to SFPD." What do you think of that as a standard?

CALO: First of all, I think it's good for the police department to set standards in advance. And second, I'm glad to see that the standard is so narrow, that you really have to be out of other options. Still, you worry about whether there'll be any situation where the public would feel comfortable with police using lethal force through a robot. You know, even if we thought that there might be some scenarios where our best option is to incapacitate a suspect through deadly force through a robot, it feels so deeply dehumanizing and militaristic. And so, you know, the very prospect of a robot being able to kill someone could be something that the people of San Francisco won't tolerate, even if, you know, there are some very narrow circumstances where it's sort of the best option from a tactical perspective.

SHAPIRO: And then there's also the question of accountability. If there's a scenario where a robot hurts or kills the wrong person, who gets put on trial for that?

CALO: Yeah. I mean, it's so interesting the way in which technology, and especially robotics, makes a kind of shell game of responsibility, right? You don't know who is responsible. Is it the officer operating the robot? Is it the people that made the robot? Sometimes it feels like it's the robot itself, right? I mean, if there were an incident where an officer hit someone with their car or shot somebody, we would expect that the very next day, there would be police cars and guns on the streets. They're just equipment. But when a robot is involved in violence, we would expect the whole robotics program to be suspended. Why is that?

And it has to do with the fact that robots feel different to us. We associate them with science fiction, and we're deeply uncomfortable with them being armed. And if you couple that with the racialized, often untrusting and uncomfortable policing environment we have today, that's not a good mix, right? I mean, technologies that we don't totally understand or trust in the hands of a police force that is still reckoning with a century of racial violence, it's not a comfortable combination.

SHAPIRO: That's Ryan Calo. He's a law and information science professor at the University of Washington. Thanks a lot.

CALO: Thank you, Ari.

MARY LOUISE KELLY, HOST:

After we taped this conversation, a spokesperson for the San Francisco Police Department wrote us to say, quote, "no policy can anticipate every conceivable situation or exceptional circumstance which officers may face. The SFPD must be prepared and have the ability to respond proportionately."
