This Tech Worker Objected To Company's Work On Military Project

Tech workers have increasingly been asking ethical questions about their industry's involvement with the military. One such worker took her company's CEO to task.

When Technology Can Be Used To Build Weapons, Some Workers Take A Stand


ARI SHAPIRO, HOST:

Even if you can advance technology, create the next great app or a robot that fights wars, should you? We're exploring that question on this month's All Tech Considered.

(SOUNDBITE OF ULRICH SCHNAUSS' "NOTHING HAPPENS IN JUNE")

SHAPIRO: Here in the U.S., there is little government oversight of the tech industry. So more and more it is the tech workers themselves who are raising ethical concerns. NPR's Jasmine Garsd reports on one company and an employee who says she'd had enough.

JASMINE GARSD, BYLINE: Earlier this year, on the night of January 16, Liz O'Sullivan hit send on a letter she'd been working on for weeks. It was directed at her boss, Matt Zeiler, the founder and CEO of Clarifai, a tech company.

LIZ O'SULLIVAN: The moment before I hit send, I mean, and then afterwards, my heart - I could just feel it racing.

GARSD: The letter asked the question, is our technology going to be used to build weapons? O'Sullivan is 34. She's from the generation that saw the birth of high-speed Internet, Facebook, Venmo, Uber. She often describes technology as magic.

O'SULLIVAN: There are companies out there doing things that really look like magic. They feel like magic.

GARSD: O'Sullivan's story begins two years ago, when she started working at Clarifai. She says one of her jobs was to explain the company's product to customers. It's visual recognition technology. It's used by websites to identify nudity and inappropriate content. Doctors use it to spot disease. It was a startup. But shortly after O'Sullivan joined, Clarifai got a big break - a government contract reportedly for millions of dollars. It was all very secretive.

At first, the people assigned to work on that government project were in a windowless room with the glass doors covered. O'Sullivan would walk by and wonder, what are they doing in there? Matt Zeiler, CEO of Clarifai, says the contract required secrecy. But everyone working directly on the project knew what it was about. Here's Zeiler.

(SOUNDBITE OF ARCHIVED RECORDING)

MATT ZEILER: We got briefed before even writing a single line of code. And I also briefed everybody I asked to participate on this project.

GARSD: NPR spoke to one employee who did work directly on the project. That person, who requested anonymity for fear of retaliation, said many of the workers in that room were not entirely clear what this was going to be used for. The technology they were putting together was the same technology they had been working on for other projects. In the months that followed, former employees say information started trickling down. They were working with the Department of Defense.

Then people working on the project got an email that outlined some details. In the text, a blink-and-you'll-miss-it reference to something called Project Maven. The Pentagon told NPR that Project Maven was created in April 2017. It's also called algorithmic warfare. Its first task was to use computer vision technology for drones in the campaign against ISIS.

BEN SHNEIDERMAN: This could be more effective than humans, who might miss something or misunderstand something, that the computer vision could be more accurate.

GARSD: That's professor Ben Shneiderman, a computer scientist at the University of Maryland, talking on Skype. He had serious ethical concerns about the project. He wasn't alone. Many people in the tech world were starting to wonder, what is this technology we're building going to be used for down the road? Liz O'Sullivan says this question began to haunt her, too. The big fear among tech activists is, will this be used toward building autonomous weapons? That's weapons that are programmed to find targets and kill people without human intervention.

The Department of Defense's current policy requires that autonomous weapons, quote, "allow commanders and operators to exercise appropriate levels of human judgment." It's a definition many find murky. And in 2018, tech workers began to ask a lot of questions. Here's professor Shneiderman again.

SHNEIDERMAN: It's a historic moment of the employees rising up in a principled way, an ethical way and saying, we won't do this.

GARSD: Microsoft employees protested their company's work with Immigration and Customs Enforcement. And several thousand employees demanded that Google stop working on Project Maven. Google did not renew its contract with the project. In June of last year, Clarifai CEO Matt Zeiler also weighed in. In a blog post, he explained why the company was working on a military project. Liz O'Sullivan read that with interest.

O'SULLIVAN: You know, the people running these companies are sort of techno utopians. And they believe that tech is going to save the world and that we really just have to build everything that we can and then figure out where the cards fall. But there are a lot of us out here saying, should we be building this at all?

GARSD: Former Clarifai employees told NPR that at the office, the mood got tense. There were plenty of people who felt comfortable working on Project Maven. Others resented that it had been so secretive. And some just found it morally troubling. As the months went by, O'Sullivan says she realized she couldn't change the direction of the company. So at the beginning of this year, she wrote that letter to CEO Matt Zeiler and sent it to the whole staff. Here she is reading an excerpt.

O'SULLIVAN: (Reading) We have serious concerns about recent events and are beginning to worry about what we're all working so hard to build.

GARSD: She goes on to ask a bunch of questions. Many of them are the same questions being asked across the tech world today. Like, are you going to let us know who we're selling our stuff to? Are you going to vet how it's used? Do we care if this is used to hurt people? A week after she sent that letter, there was a staff meeting where Zeiler spoke.

O'SULLIVAN: He did say that our technology was likely to be used for weapons - and autonomous weapons at that.

GARSD: Clarifai CEO Matt Zeiler does not deny this. In fact, he says, countries like China, they're already doing it. The U.S. needs to step it up.

(SOUNDBITE OF ARCHIVED RECORDING)

ZEILER: We're not going to be building missiles or any kind of stuff like that at Clarifai. But the technology, like I was saying, is going to be useful for those. And through partnerships with the DOD and other contractors, I do think it will make its way into autonomous weapons.

GARSD: Here's where he and O'Sullivan disagree. Should companies like Clarifai, Google and Amazon be involved in military projects? Zeiler says Clarifai's technology is going to help save American soldiers.

(SOUNDBITE OF ARCHIVED RECORDING)

ZEILER: At the end of the day, they're out there to do a mission. And if we can provide the best technology so that they can accurately do their mission, you know, in the worst case, there might be a human life at the other end that they're targeting. But in many cases, it might be a weapons cache that's not - any humans around or a bridge to slow down an enemy threat.

GARSD: And Zeiler says also it's going to help minimize civilian casualties by improving the accuracy of weapons. O'Sullivan wasn't buying that. She quit the day after the staff meeting. She describes herself as a conscientious tech objector. She went on to join a startup that advises companies on how to make trustworthy artificial intelligence. She says she still thinks tech can be really wonderful or really dangerous, like playing with magic. Jasmine Garsd, NPR News, New York.

Copyright © 2019 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.