Researchers Are Using Artificial Intelligence To Stop African Elephant Poachers

Conservationists are deploying audio recorders, neural networks and predictive analytics in a bid to save elephants.

Elephants Under Attack Have An Unlikely Ally: Artificial Intelligence

49-Minute Listen

DINA TEMPLE-RASTON (HOST): From NPR, this is I'LL BE SEEING YOU, a four-part series about the technologies that watch us. I'm Dina Temple-Raston. A few years ago, there was huge news for elephant researchers.


UNIDENTIFIED RESEARCH TEAM MEMBER #1 (RESEARCH TEAM MEMBER): A new study, the Great Elephant Census, suggests a failure to protect the world's largest land mammals, elephants.

TEMPLE-RASTON: And the census found that in just seven years, one-third of the savanna elephant population had disappeared. But there was one kind of elephant that they left out.

PETER WREGE (CORNELL UNIVERSITY): The Great Elephant Census depended very heavily on fixed-wing small aircraft because, in the savanna, you can fly over it and count herds of elephants. That is not possible to do in the rainforest because they're under the canopy.

TEMPLE-RASTON: The rainforest - where the forest elephant lives. And the forest elephant happens to be the very species that Peter Wrege from Cornell University specializes in.

WREGE: Of the two African elephants, they're very much more endangered, and this is really because their ivory is the most prized of any ivory. It's denser than savanna elephant ivory, and it has a pinkish tinge to it.

TEMPLE-RASTON: Now, Wrege knew that the forest elephants were at extraordinary risk. But what he didn't know was - how fast were they being killed? Where were they being killed? And if he didn't know how many forest elephants there actually were, how could he protect them? So Wrege and his team decided to do a kind of great forest elephant census, which is actually even harder than it sounds because the forest where they live is incredibly dense.

WREGE: Sometimes you see them, let's say, 15 meters away from you, and they move 5 meters into the forest, and you can't see them. Somehow, they just disappear.

TEMPLE-RASTON: For a long time, they felt that the best way to try to count these elephants was simply staking off a part of the forest and counting dung piles.

WREGE: There's an estimate of - how often does an elephant poop? And how long does a pile of dung last?

TEMPLE-RASTON: You have to really love elephants to do that one. And then there's also a DNA method, where you test the poop to try to identify individual elephants and count them that way. None of this was very efficient - or very accurate. So researchers from Cornell University decided to try something new - to just listen for them.

WREGE: If we know how often an elephant gives a vocalization and we can record it, then we can spread recorders over a big area and record their vocalizations and use those numbers to count them.

TEMPLE-RASTON: Wrege had 50 custom audio recorders made, divided the rainforest up into a grid and then headed to Central Africa. Every 5 square kilometers in the rainforest, they placed an audio recorder.

WREGE: We put recorders 7 to 10 meters up in a tree, hanging from a tree limb.

TEMPLE-RASTON: Thirty feet in the air happens to be just a little higher than an elephant standing on its hind legs can reach with its trunk.

WREGE: They've destroyed them once in a while. They've stuck a tusk through them.

TEMPLE-RASTON: So Wrege's team climbs these trees, straps these audio recorders in place. And then they just hit record.


TEMPLE-RASTON: This is the actual audio from one of those recorders.

WREGE: We record anything that makes an acoustic signature, including things that are not vocalizations, like the pounding on a tree buttress by chimpanzees.


TEMPLE-RASTON: So these recorders are designed to record for about three months, after which Wrege sends the teams back into the forest to climb the trees and...



WREGE: Bring the recording units back down, change batteries, change the SD card so we have the recordings and put them back up in the trees.




TEMPLE-RASTON: Wrege says that every time they get one of these recordings back, it's exciting.

WREGE: It's a combination of excitement and kind of apprehension because 50 recorders recording 24 hours a day for three months is a lot of stuff to get through.

TEMPLE-RASTON: A lot of chimps and frogs and birds, just to hear what he really wanted - recorded elephants.


WREGE: Just like - whoa, we've got it.


TEMPLE-RASTON: Wrege was incredibly excited that his giant acoustic census might just work. And then he's thinking, how can I possibly get through everything I've recorded? Wrege had a classic big-data dilemma - what's known in AI circles as the cocktail party problem.


AUDREY HEPBURN (ACTOR): (As Holly Golightly) Mike, darling, I tried reaching you all day long.

TEMPLE-RASTON: He needed to find specific things in a huge dataset, and that happens to be a perfect job for artificial intelligence. The human brain has this amazing way of focusing on a specific person's voice and then magically amplifying it.


GEORGE PEPPARD (ACTOR): (As Paul Varjak) Well, he did mention something about calling the police.

TEMPLE-RASTON: Wrege needed to find a way to have a computer do that. And it turns out that a subset of AI, something called neural networks, is really good at that. All you have to do is turn the sound you're trying to analyze into a picture.


PEPPARD: (As Paul Varjak) OK. The party's over. Out.

TEMPLE-RASTON: Now, there's something called a spectrogram, which is exactly that - a ghostly little picture of a soundwave. So the AI analyzes the spectrogram, recognizes the characteristics of the soundwave, focuses in on it, amplifies it.
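The sound-to-picture step can be sketched in a few lines of Python. This is a minimal illustration of a spectrogram, not the Elephant Listening Project's actual pipeline; the frame length, hop size and the synthetic test tone are arbitrary choices for the example.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Short-time Fourier transform magnitude: the 'picture of a soundwave'."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # One column of magnitudes per frame; rows are frequency bins.
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# A 440 Hz tone sampled at 8 kHz should light up one frequency band.
sr = 8000
t = np.arange(sr) / sr
spec = spectrogram(np.sin(2 * np.pi * 440 * t))
peak_bin = spec.mean(axis=1).argmax()  # should be near 440 / (8000/256) = 14.08
print(peak_bin)
```

A network trained on pictures like this one can then be shown spectrograms of elephant rumbles instead of test tones.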

A researcher at University of California, Santa Cruz was having a cocktail party problem of his own. He had collected huge volumes of bird songs.

MATTHEW MCKOWN (BEHAVIORAL ECOLOGIST, UNIVERSITY OF CALIFORNIA, SANTA CRUZ): You know, sitting on mountaintops with tape recorders.


TEMPLE-RASTON: That's behavioral ecologist Matthew McKown. He studies obscure birds.

MCKOWN: I was one of the few people in the world, I think, that used MiniDiscs. I was super excited about MiniDisc recorders.

TEMPLE-RASTON: And he recorded all this information and then thought to himself, now what?

MCKOWN: So I started to look for ways that we could process the data more easily.

TEMPLE-RASTON: And that's when he and a colleague started talking about neural networks. So neural networks are called neural networks because they process things like the brain does. The easiest way to think about it is in terms of layers, one on top of another. Neurons in the first layer of the network would likely recognize something simple, like pitch or modulation; something that might characterize a bird or an elephant or...


TEMPLE-RASTON: ...For the purposes of our discussion, a particular instrument...


TEMPLE-RASTON: ...Say, violins.


TEMPLE-RASTON: And to find violins in an orchestral piece of music, it might have to start to look for that modulation. The next layer of neurons might look for a range of notes that it's learned is associated with violins.


TEMPLE-RASTON: And it thinks, hm, I recognize some notes that a violin might play. And then one of these neurons lights up.


TEMPLE-RASTON: Then middle layers would build on that, focusing in on the other qualities that might be associated with the violin, like the presence of strings. It might pick out a cello or a piano for their string-like sound.


TEMPLE-RASTON: There can be hundreds and hundreds of layers, each getting progressively more refined. Some, perhaps, filtering out things that the network decides definitely are not violins, like, say, percussion.


TEMPLE-RASTON: It says to itself, this is not a violin.


TEMPLE-RASTON: And all this gets melded together with one layer after another, taking the network closer and closer to its goal - recognizing the violin...


TEMPLE-RASTON: ...Until, at the end, the network looks at all the information that it used to filter down to the violin, and it makes a statistical calculation. What's the likelihood that this pattern I've identified is a violin or an elephant or a bird - 50%; 80% - until it gets what it wants.
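The layer-by-layer filtering described above, ending in a statistical calculation, can be sketched with a toy network. The weights here are random, so the "prediction" is meaningless; the point is only the shape of the computation: stacked layers feeding a final likelihood over labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Three layers, each refining the previous one's features,
# ending in a likelihood for each label.
labels = ["violin", "elephant", "bird"]
w1, w2, w3 = (rng.normal(size=s) for s in [(129, 32), (32, 16), (16, 3)])

features = rng.normal(size=129)       # stand-in for one spectrogram column
h = relu(relu(features @ w1) @ w2)    # early + middle layers
probs = softmax(h @ w3)               # final statistical calculation
print(labels[probs.argmax()], float(probs.max()))
```

A real network would have far more layers and weights learned from labeled recordings rather than drawn at random.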




MCKOWN: This whole field is called deep learning.

TEMPLE-RASTON: That's Matthew McKown again.

MCKOWN: Because of the advent of sort of modern computing technology, you can make these neural networks that have many, many layers of neurons. And so you can start to recognize real fine-scale patterns.

TEMPLE-RASTON: McKown ended up starting a company called Conservation Metrics to help people like Peter Wrege measure various things in conservation.

MCKOWN: And specifically, what we do is turn that information into actionable information for people on the ground.

TEMPLE-RASTON: So Wrege's 50 recorders running for three months produce about 100,000 hours of audio. McKown says the neural network can rip through that in about four hours. Thank you, cloud computing. And then it finds something like this.


WREGE: That, actually, is a fantastic example of two females who are performing what we call a greeting ceremony.

TEMPLE-RASTON: Again, Peter Wrege - and he says this sequence of sounds - they're known as rumbles - happens when elephants who have been separated for some time...


TEMPLE-RASTON: ...Come together again.

WREGE: I think it's very much like if you run into a friend on the street, you know, that you haven't seen for a while - whoa, how are you? And, oh, I'm OK. What about you? Oh, it's not so good. I've lost my job. Oh, my God. You know, who knows what they're really saying, but it's that - perhaps that kind of thing.

TEMPLE-RASTON: But remember: the reason Wrege wanted to listen to the forest elephants was because he wanted to protect them. And while he was listening to months and months of rainforest audio, he also heard something rather shocking - gunshots.


TEMPLE-RASTON: Now, it's impossible to know if an elephant died from these particular gunshots. But Wrege decided that if his team was going to count elephant rumbles, they should also find a way to count the gunshots because they could be a pretty good proxy for poaching attempts. So he has McKown building another neural network that will learn how to tell the difference between the sound of a gunshot...


TEMPLE-RASTON: ...And the sound of a branch breaking.


TEMPLE-RASTON: The great acoustic forest elephant count is still going on. The neural network is still training, and getting an accurate count depends on lots of things - weather, the season, where in the forest they're listening. So there isn't a precise number yet, but the Elephant Listening Project has already started learning things that can help fight poaching. For example, Wrege's team has found that elephants don't go to some parts of the forest during specific times of the year.

WREGE: You can say, OK, we know that elephants are not using this huge part of this park for these seven months. We don't need to send any anti-poaching teams there because no poachers are going to find an elephant anyway.


TEMPLE-RASTON: And every time they get a new set of recordings and feed it into the neural network, they get closer to an accurate count. Wrege imagines the network learning to distinguish elephant calls and to be able to tell if they're in distress or danger.

WREGE: We have two gunshots here or we have AK-47 shots here. If that can be fed out of the rainforest in real time, then the anti-poaching people know where to go to intercept that poacher that just killed an elephant.

TEMPLE-RASTON: Do you think that AI is going to save the elephants?

WREGE: I actually do. It definitely is going to be our salvation.

TEMPLE-RASTON: This is I'LL BE SEEING YOU from NPR, a series about the technologies that watch us. When we come back, how artificial intelligence is helping rangers stay one step ahead of poachers. And we go to a park in Malawi where they're starting to use algorithms to predict where poachers will go before they even fire a shot. I'm Dina Temple-Raston. Stay with us.


TEMPLE-RASTON: From NPR, this is I'LL BE SEEING YOU, a show about the technologies that watch us. I'm Dina Temple-Raston. And on today's show, we've been looking at how artificial intelligence is saving the elephant. AI is helping environmentalists and researchers process huge amounts of data and mine it for information because AI doesn't get tired like humans do, and it may be better at teasing out patterns in near real time. They're testing this theory in southern Malawi. And we went to see for ourselves.


TEMPLE-RASTON: Liwonde National Park is about 200 square miles of fields and forests that run alongside the Shire River. And the night we arrived, the first thing we did was go out on the water.

CRAIG REID (MANAGER, LIWONDE NATIONAL PARK): All right, just a quick safety thing for everybody, please.

TEMPLE-RASTON: Craig Reid is the manager of Liwonde National Park.

REID: There are lots of crocodiles here in this river.

TEMPLE-RASTON: We go out on the river at sunset, and we're skimming along the water in a metal-bottom boat.

REID: And people are very much on the menu, so don't get tempted to put your hand out and run your fingers through the water or anything romantic like that - not a good idea.

TEMPLE-RASTON: And the river is glassy, and we cut through reeds close to the shore and watch all the animals coming out.

Oh, look; there are hippos there. So as the boat's going by, they're just dropping into the water, holding their breaths and then coming up and snorting all the air out.

There are some elephants on the opposite bank, splashing in the water. Liwonde has a huge number of elephants. And to get close to one, you really have to go on land and use that old-fashioned tracking method we talked about earlier in the show.

We saw the fresh dung piles along the road and followed those and found him in this clearing.

An enormous elephant is right in front of us.

You'd think an elephant is so huge that you'd hear him walking all the time. But what they say is the only time you hear an elephant is when he's coming and he breaks a branch, or when he's eating. And what we just got was an elephant eating.

Liwonde National Park is full of scenes just like that. In addition to hippos and elephants, it has a rare black rhino sanctuary, baboons of every variety, gazelles, waterbucks, warthogs. They recently reintroduced lions into the park, and they have some cheetahs. Seeing all this, it's hard to believe that just four years ago, Liwonde was a failed park. Here's park manager Craig Reid again.

REID: So I always described Liwonde as we found it as being in a state of terminal decline.

TEMPLE-RASTON: The buildings, roads, infrastructure - they were all falling apart.

REID: So effectively, what would have happened had we not intervened would be a total elimination of all wildlife over the 10-year period following.


TEMPLE-RASTON: Four years ago, a nonprofit called African Parks wanted to try to bring Liwonde back from the brink. And it so happens that around that same time, one of the founders of Microsoft, Paul Allen, was just wrapping up the Great Elephant Census in an effort to count all the elephants in Africa. Allen started a company whose goal was to find ways to use technology to save the elephants. He called the company Vulcan and hired people who love elephants, like Pawan Nrisimha.

PAWAN NRISIMHA (DIRECTOR OF PRODUCT MANAGEMENT, VULCAN): Who doesn't love elephants, right? I mean, I love elephants.

TEMPLE-RASTON: Nrisimha was a coder. Now he's the director of product management at Vulcan. He and the technologists there started building an AI solution for this elephant problem; a solution that would take all the information that park rangers, like Craig Reid, had both in their heads and in daily reports and then make it smarter. They called the program EarthRanger, Nrisimha explains.

NRISIMHA: EarthRanger was entirely customer driven. And when I say customers here, it's essentially the park rangers.

TEMPLE-RASTON: The Craig Reids - the idea was to feed all this information into a very accessible AI and machine learning platform.

NRISIMHA: So it's a real-time visualization of where all the park assets are. And when I say park assets, it's the rangers. It's the animals. It's information from various sensors.

TEMPLE-RASTON: Things like security cameras, GPS locations of vehicles, motion-activated sensors on park gates - so that was the first part; a real-time data visualization of the entire park.

NRISIMHA: The second part is providing them analytics tools to be able to manage their patrols better. So these are things like heat maps that helps them - OK, these are the areas where most of the snaring or animal traps are happening, or these are the areas where there are lots of fence breaks happening.

TEMPLE-RASTON: That allows them to respond to security problems in the park just by clicking a mouse on the EarthRanger screen and running those incidents back, kind of like playing an interactive video game. And then there is an artificial intelligence piece.
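A snare heat map of the kind Nrisimha describes can be approximated by bucketing incident coordinates into grid cells and counting. The incident list, cell size and coordinates below are invented for illustration; EarthRanger's real analytics are more sophisticated.

```python
import math
from collections import Counter

# Hypothetical incident log: (lat, lon, kind) tuples from ranger reports.
incidents = [
    (-14.841, 35.351, "snare"),
    (-14.843, 35.352, "snare"),
    (-14.852, 35.351, "snare"),
    (-14.901, 35.402, "fence break"),
]

def heat_map(points, cell=0.01):
    """Bucket GPS points into grid cells and count incidents per cell."""
    counts = Counter()
    for lat, lon, _ in points:
        counts[(math.floor(lat / cell), math.floor(lon / cell))] += 1
    return counts

# The cell with the most incidents is where a checkpoint might go.
hot_cell, n = heat_map(incidents).most_common(1)[0]
print(hot_cell, n)
```

The same counts, binned by time as well as place, are what let rangers "run incidents back" and watch patterns form.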

NRISIMHA: Gathering all this data and then we are trying to see how we can really let this artificial intelligence or predictive analytics to proactively tell the park management how to do the patrols and manage the security effort better.

TEMPLE-RASTON: So in other words, take all this information they're gathering so a computer can learn about the rhythms of Liwonde Park and eventually offer suggestions on how to keep it safe.


TEMPLE-RASTON: Liwonde National Park has been using EarthRanger for the past two years. And we went to see it in action in a little brick building behind the ranger headquarters.


UNIDENTIFIED PERSON #2: (Foreign language spoken).

TEMPLE-RASTON: If you've ever been in a command center of a small police department, this is a little like that. There are flat screens on the wall, closed circuit television monitors and, on two long tables, a series of computers analyzing and categorizing information coming into headquarters.

Liwonde's operations manager is a man named Lawrence Munro, and part of his job is to plan and schedule ranger patrols. He's been using EarthRanger to help him figure out where to deploy them.

LAWRENCE MUNRO (OPERATIONS MANAGER, LIWONDE NATIONAL PARK): If you can kill the (unintelligible) for this circle.

TEMPLE-RASTON: To get an idea of what I'm seeing, there's a real-time satellite image of the park up on a flat screen. It shows the river, clearings, forests.

MUNRO: Message (unintelligible).

TEMPLE-RASTON: And there are little elephant icons tracking GPS location signals from collared elephants and little rhino icons in a sanctuary at the center of the park.

MUNRO: Because we will use these guys for some operations that are coming up in this circle.

TEMPLE-RASTON: Munro and enforcement chief Paul Chidyera move back and forth between a whiteboard at the back of the room and the EarthRanger screen, populating it with GPS locations of snares and footprints rangers had found in the park in the past month.

MUNRO: You can have a look on the spreadsheet to just have a look what areas are worse affected (unintelligible).

TEMPLE-RASTON: Munro starts clicking a mouse through various EarthRanger menus.

MUNRO: I'd say we're going to look across the previous moon phase.

TEMPLE-RASTON: He moves through an animation of suspicious activity in the park.

MUNRO: And then if I can ask you to run the snares from the 15th of March, maybe just a tad...

TEMPLE-RASTON: You can see a concentration of snares. The little icons look like little Western lassos, and there are a bunch of them right off the main road.

MUNRO: So you can see those snares are out there (unintelligible).

TEMPLE-RASTON: Munro and Chidyera decide to set up new checkpoints right where EarthRanger recorded a huge number of snares. If nothing else, that would discourage whoever put them there from setting more of them.

MUNRO: So the old system was a map on the wall. I still have it in my office. And a similar tough process, but you wouldn't be able to see exactly where all your assets are.

TEMPLE-RASTON: Assets - men, planes, jeeps, helicopters.

MUNRO: You have to rely a lot more on memory, a lot more radio traffic. Here, you can do a lot more pre-emptive stuff because you can see the picture.

TEMPLE-RASTON: Pre-emptive stuff - what he means is anticipating where a poacher might show up in the same way that a police department might send extra patrols into a high-crime area. And while this is standard operating procedure in policing, it wasn't really a focus in conservation until recently. And just like police departments have started trying to use computer analytics to find crime patterns, rangers like Munro are doing that, too. But it turns out that poaching in Liwonde is affected by lots of different things - weather patterns, animal movements...

MUNRO: Things like religious holidays, the payday of the government sector.

TEMPLE-RASTON: Bushmeat is considered a delicacy in Malawi, so people put in their orders right after they get paid.

MUNRO: We pick up trends. We pick up patterns.

TEMPLE-RASTON: Munro said it used to take a month to start to see patterns emerging because they had to rifle through written reports. Now they can respond to threats faster.

MUNRO: We study it intently every Wednesday because we want to deploy out our guys accordingly. But you can do it daily. That's the difference. It's basically the speed at which you can strategize.


TEMPLE-RASTON: Earlier this year, the head of law enforcement of Liwonde, Paul Chidyera, got an alert on his cellphone. A sensor linked to EarthRanger detected some suspicious movement in the eastern part of the park, so he and his team set up something called a poacher cam. It's basically a motion-activated camera.


TEMPLE-RASTON: Chidyera said there had been an area of the park where rangers had found a lot of human tracks. So you can imagine if you just put up a motion-activated camera in a game preserve, you'll be getting lots of pictures of animals that go by. But these cameras have an algorithm built inside that helps a camera determine whether whatever's going past has a human shape or an animal one. If it looks human, it snaps a picture and sends it with GPS coordinates to EarthRanger and staff cellphones. The poacher cam got a photo of someone coming in, and Chidyera's team went to a nearby village and found him.
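The camera's decision logic, as described, amounts to thresholding a classifier score and attaching GPS coordinates. Here is a hypothetical sketch; the function names, threshold and alert format are all invented for the example.

```python
# A classifier on the camera scores each detection; only confident
# "human" detections become alerts sent to EarthRanger and cellphones.

def should_alert(human_score, threshold=0.8):
    """True when the on-camera model is confident the shape is human."""
    return human_score >= threshold

def make_alert(human_score, lat, lon):
    if not should_alert(human_score):
        return None  # an animal walked by; stay quiet
    return {"type": "possible_poacher", "lat": lat, "lon": lon,
            "confidence": round(human_score, 2)}

print(make_alert(0.93, -14.85, 35.35))  # alert with coordinates
print(make_alert(0.31, -14.85, 35.35))  # None: probably an animal
```

The threshold is the interesting design choice: set it too low and rangers drown in warthog photos, too high and a poacher slips past unreported.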

PAUL CHIDYERA (HEAD OF LAW ENFORCEMENT, LIWONDE NATIONAL PARK): When he was arrested, he was confronted. And he revealed what he has been doing.

TEMPLE-RASTON: As fate would have it, he was a well-known poacher in the area. In fact, as they scrolled through EarthRanger's database, they found other pictures of him coming into the park with weapons. Those pictures were submitted as evidence in his trial. Each photo was used to support a different criminal count of trespass and weapons charges. In the end, the poacher was sentenced to 27 years as a repeat offender.

To a tourist, elephants may seem magical, but it's important to understand that to people who live right next to them, they can be a nuisance or even a threat. And I got a real sense of this walking the fence line along Liwonde Park.

Workers put this up?


TEMPLE-RASTON: So they're all from villages around here?

CHIDYERA: From the villages.

TEMPLE-RASTON: I'm getting a tour of a relatively new fence at Liwonde Park when we run into this local guy.

Can we ask you, sir? Can we talk?

His name is Various Donzani.

You live close by?

VARIOUS DONZANI (LIWONDE RESIDENT): Yes, my house is 50 meters from the fence.

TEMPLE-RASTON: Fifty meters from the fence?

DONZANI: Fifty meters.

TEMPLE-RASTON: So before the fence, were there elephants that came around your house?

DONZANI: Oh, very much, especially when they come through a garden where there is food. We had problems.

TEMPLE-RASTON: Donzani is a school teacher. But like a lot of people on the edge of the park, he relies on his own garden for food, which attracts elephants. One elephant can wipe out a village garden in a couple of hours.

Do you have an experience with an elephant...


TEMPLE-RASTON: ...That you can tell us about?

DONZANI: It's fortunate that I'm not killed.

TEMPLE-RASTON: These elephant-human conflicts - that's what they're called - are really dangerous. After hippos, elephants kill more humans in Africa than any other animal. And Donzani remembers one horrible day before the fence was up that a herd of elephants came stampeding through his village.

DONZANI: I tell you, it was a disaster. Seven people were killed that day.


TEMPLE-RASTON: This is where poaching can get complicated. It often begins with locals slipping into the park just to hunt meat to eat - a Thomson's gazelle here, a waterbuck there. And then it scales up. Criminal syndicates move in and recruit locals to help them find elephants or a black rhino. And the financial incentive is hard to resist. The tusks can fetch literally hundreds of thousands of dollars apiece on the black market. The human-elephant conflict plays into this as well. Smugglers and international cartels will often find local farmers who have just had their crops destroyed and then ask them for help to kill an elephant. They'll say: we heard you just lost everything, but if you help us find an elephant, we'll give you enough money to buy food for your family. It's easy to see where the incentive might come from.


TEMPLE-RASTON: This is actual sound from a video of an elephant charging people in Liwonde. If someone tries to shoo an elephant off their fields, this is what they face.


TEMPLE-RASTON: And being killed by an elephant is a horrifying thing. An adult female elephant weighs over three tons; a bull elephant is typically over six - park manager Craig Reid.

REID: These big bulls will charge a person. And if they manage to catch them, which is not that difficult considering that an elephant can run about 60 kilometers an hour, which is...

TEMPLE-RASTON: That's about twice as fast as the fastest human being.

REID: Most people are relatively catchable. And when people are caught by an elephant, they're normally - using their trunk, knock them to the ground and then kneel on them, crushing them either with their knees or with the base of their trunk, which, at that point, is all curled up and almost like a fist.

TEMPLE-RASTON: Lawrence Munro remembers that just a few years ago, elephants were leaving the park all the time, and locals would call in sightings.

MUNRO: They would see elephants - repeat offenders, so to speak. They can identify the animals by a nick in their ear or whatever the case is. And then they would tell us - they'd say, that elephant there breaks out often. Soon as it's got a collar on it, we can watch it from here. And then we can make informed decisions regarding that elephant. We don't want elephants outside the park. They damage crops. They injure people. We want those people on our side.

TEMPLE-RASTON: Because EarthRanger is tracking the elephants, it's easier to chase them away from the fence line before they get out of the park. And that has already started changing their behavior. They used to break out of the park any time they felt like it. Now they do it at night, when the ranger staff might be smaller. And they're back in the park by daylight.


TEMPLE-RASTON: So how is EarthRanger doing? In the past two years, poaching in Liwonde has plummeted. The park hasn't lost a single high-value animal in 30 months. Those elephant trips outside the park - those are also way down. And while all this has been going on, EarthRanger's machine learning algorithm has been training. By the end of the year, Nrisimha thinks the program will have ingested enough data to start what they call pre-bang enforcement, something conservationists think will change poaching as we know it.

NRISIMHA: Post-bang is you see a poacher come in and fire a shot, and then you hear it maybe because you have some sensors there, or maybe one of your camera traps on the field saw a poacher come in.

TEMPLE-RASTON: You remember Nrisimha, the elephant lover who works for the company that created EarthRanger.

NRISIMHA: So that's post-bang. You might be successful in that case to intercept the poacher, but the animal would have been hurt.

TEMPLE-RASTON: Pre-bang is just like it sounds. It's about using behavioral analysis, artificial intelligence and predictive analytics to intercept poachers before they even get into the park. Among other things, EarthRanger's program will automatically generate the patrol routes that we heard Munro and Chidyera putting together in the control room.

And instead of just focusing on places where there's already been criminal activity, the routes will be anticipated based on patterns EarthRanger's recognized. It will allow rangers to intercept bad guys pre-bang. Poachers will go to where they believe the elephants will be, and they'll find rangers there instead.

NRISIMHA: The biggest difficulty here is really gathering up that data to train up that artificial intelligence model to know what to predict.

TEMPLE-RASTON: It takes a long time for AI to learn. Now that EarthRanger has been deployed in Liwonde for two years, it's getting close to having enough data to start making predictions.

NRISIMHA: If I had to give a number, I would say in about a year or two, we would start seeing solid results and parks actually using this data to help manage their parks or manage their patrols.

TEMPLE-RASTON: But one thing that always looms over these high-tech solutions - what if they get hacked? One of the most vulnerable technologies is this.


TEMPLE-RASTON: Don't adjust your radio. That's actually something called radio telemetry at work. The tracking collars used to keep tabs on animals often send out a signal in the form of a radio wave, and you use a receiver to zero in on that signal.

REID: Andrea is just flicking through the various frequencies. And she's now trying to get a response from one of the collars.

TEMPLE-RASTON: That's Craig Reid again. And his wife, Andrea, is in charge of special projects there, including keeping track of the animals.

ANDREA REID (PROJECT MANAGER, NATIONAL PARK): A lot of the collars actually have different beeps to them.

TEMPLE-RASTON: The receiver has a directional antenna. And then all you do is listen.

REID: So on the telemetry device here, you have a button that's called the gain. And when you turn the gain down and the beep is quite loud, it means that you're getting closer to the animal.


TEMPLE-RASTON: The problem is hacking a transponder is fairly easy. Poachers are known to buy their own antennas and receivers and then do exactly what Andrea is doing. You can imagine, if a rhino horn or elephant tusk can fetch hundreds of thousands of dollars on the black market, there's a huge incentive for crime syndicates to come up with creative ways to locate high-value animals.

REID: There's evidence of poachers now actually using the same technologies which are used by conservation officials. In a sense, we're involved in a bit of an arms race with poachers and poaching syndicates.

TEMPLE-RASTON: From NPR, this is I'LL BE SEEING YOU, a four-part series about the technologies that watch us. I'm Dina Temple-Raston. And on today's show, we're looking at how AI is helping the elephant. Coming up, what if all this technology was hacked?

DAWN SONG (PROFESSOR, UC BERKELEY): Even though deep neural networks has made tremendous advancements, it can be fooled very easily.

TEMPLE-RASTON: Stay with us.


TEMPLE-RASTON: From NPR, this is I'LL BE SEEING YOU, a four-part series about the technologies that watch us. I'm Dina Temple-Raston. And on today's show, we've been looking at how AI has been helping the elephant and the humans who live near them. And with all these high-tech solutions, people are now starting to worry about something else - hacking.



TEMPLE-RASTON: Hello. Are you there?

POPE: Hello. Is that Dina?

TEMPLE-RASTON: It is. Hi. Is that Frank Pope?

POPE: I've only just landed, after using some artificial intelligence to help elephants.

TEMPLE-RASTON: (Laughter).

Frank Pope is the CEO of an organization called Save the Elephants. You might have heard of it. It's been at the forefront of elephant conservation for decades. He was in Kenya when I called and had just finished flying over a park to check on elephants wearing special tracking collars. And I asked him whether Save the Elephants ever worried about hackers.

POPE: That was one of the prime motives for diving into this project.

TEMPLE-RASTON: He means the EarthRanger project we were talking about before. And he says the danger of hacking into tracking collars varies depending on whether you have elephants spread out over a large area or have them all bunched up.

POPE: I've just spent three hours flying around. We think we saw about 400 elephants, and there was one collar among them.

TEMPLE-RASTON: So in Kenya, where Pope is, going to the trouble of hacking into the tracking system doesn't make much sense. But if you're in South Sudan, hacking into tracking collars could help you find a huge number of the national herd. The Sudanese government keeps their elephants in close proximity to each other, so hacking a tracking collar...

POPE: That could be catastrophic. So we have to be careful in some circumstances, but we don't have to be so careful in others.

TEMPLE-RASTON: And Pope said the Save the Elephants tracking system is being run through programs like EarthRanger precisely because of the concern about hacking. To be clear, Pope said he's not just concerned about technology; he's worried about carbon units. You know them as humans. They're the biggest threat to keeping the animals safe.

POPE: The weakness of all these security systems is, of course, the users. If you've got a radio room or an operations control room, the people you have in there have to be trusted, and you have to have layers of security. It's impossible to pay rangers enough to ensure that they won't be tempted by the staggering amounts of money you could get for ratting on a rhino and selling it to them if that rhino moves close to a fence, for example. And that's not about hacking so much as making sure that the right people have access to the data.

TEMPLE-RASTON: That's something Craig Reid at Liwonde National Park in Malawi understood from the start. He's placed closed-circuit television cameras all around the ranger headquarters where the EarthRanger system is running.

REID: That's all part of the process of trying to protect the staff that are working with information and also trying to make sure that the information doesn't leave the control room.

TEMPLE-RASTON: But the dream of AI goes far beyond saving elephants. AI is everywhere. It's starting to read our X-rays and drive our cars. And researchers are just beginning to understand AI and its vulnerabilities. If that sounds familiar, it should. New technologies have always developed this way. Remember the movie "WarGames?"


UNIDENTIFIED PERSON #1 (VOICEOVER): He found the right code word to play the game.

TEMPLE-RASTON: It came out in 1983.


UNIDENTIFIED PERSON #1: But it was the wrong computer.

UNIDENTIFIED ACTOR #1 (ACTOR): (As character) Shall we play a game?

UNIDENTIFIED ACTOR #2 (ACTOR): (As character) How can it ask you that?

MATTHEW BRODERICK (ACTOR): (As David) How about global thermonuclear war?

TEMPLE-RASTON: Matthew Broderick hacks into the U.S. military's threat detection system thinking he's playing a new computer game, and he almost starts a war. Crazy? Well, President Ronald Reagan was rather famously freaked out by the movie, and one of his generals told him that something like that could actually happen. It led to America's first federal law against computer hacking, the Computer Fraud and Abuse Act.


TEMPLE-RASTON: Trying to anticipate how adversaries might hack into technology is something people at the Defense Advanced Research Projects Agency, or DARPA, think a lot about. DARPA has been a kind of high-tech incubator for the military since 1958. Stealth technology - that came out of DARPA. Tank simulators, the M16 rifle - you can thank DARPA for those, too. On the creepier side, it is said to be developing what they call a bio-enhanced soldier, or supersoldier. The thing that DARPA is really famous for, though, is artificial intelligence.


UNIDENTIFIED PERSON #2: Now, back to its very beginning, DARPA has been at the forefront of support, advocacy and leadership in artificial intelligence.

TEMPLE-RASTON: This is from one of their promotional videos. And of course, artificial intelligence led to the driverless car.


UNIDENTIFIED PERSON #3 (DARPA): The DARPA autonomous land vehicle project, which started in 1984, led to...

TEMPLE-RASTON: DARPA's long history with AI is part of the reason why researchers there have been so focused on what can go wrong with it. The agency launched a major project earlier this year to try to make AI more resilient because right now it is alarmingly easy to confuse an AI program.


TEMPLE-RASTON: So I went to DARPA to meet with Hava Siegelmann.

I'm Dina Temple-Raston.


TEMPLE-RASTON: Thank you for making the time.

She's the director of something called the GARD project. GARD stands for Guaranteeing AI Robustness Against Deception. It's a program that's trying to find ways to make artificial intelligence more hack-proof.

SIEGELMANN: So when people started with AI, they were going and asking professionals how do you do this and how do you do that and try to write it as rules.

TEMPLE-RASTON: Siegelmann is originally from Israel, which accounts for her accent, and she spends hours every day looking at how machines can be tricked into doing the wrong thing - not necessarily just going against their programming, but instead being taught something by adversaries that they're not supposed to learn.

SIEGELMANN: When you design your machine learning computer network, you always come with the best data set that you can.

TEMPLE-RASTON: The best data set that you can - so in the case of Peter Wrege, our Elephant Listening Project director, he had all those hours and hours of recordings.


TEMPLE-RASTON: Liwonde had reports rangers had been feeding into EarthRanger.

REID: The red is for checks going in, and that amber color's for checks going out.

TEMPLE-RASTON: But it turns out because AI learns in ways we don't quite understand, its decision-making process is a bit of a black box. It can be easy to fool. So earlier in the show, we talked about a subset of AI called neural networks. And we explained that it was fed millions and millions of data points to begin to understand the plucking sound of a violin.


TEMPLE-RASTON: The tone, the rhythm, the notes - and it learned enough to be able to recognize some characteristics of violins even though they're buried in this...


TEMPLE-RASTON: ...A slightly addictive, happy song.


TEMPLE-RASTON: But let's say we want the neural network to do something a little different. We want it to recognize a specific genre of music. This time, it isn't trained to find violins, but it's trained to tell us whether what we're hearing is orchestral music or disco.


TEMPLE-RASTON: The neural network went through all its training and decided on maybe a dozen qualities that, in all its analysis, help it tell orchestral music apart from disco. And maybe one of those things is an oboe. It has looked for an oboe in classical music, and it's always there - hard to find but always there. And then it listened to disco music, and it came to the conclusion that most disco music has some Barry Gibb of the Bee Gees and a host of other things but never an oboe.


TEMPLE-RASTON: The important thing to understand is that the neural network isn't taking in all the music to figure that out. It's just zeroing in on these qualities it learned to identify. And that's the rub. If you actually know what the qualities are that it's looking for, you can fool it. You can bury something in the music that the human ear can't hear but the network might see - something like an oboe carefully placed inside the disco beat. And the network is fooled. Can you hear the oboe we snuck in there?
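The trick described above can be sketched in a few lines of code. This is a deliberately toy model, not a real neural network: the classifier, the feature names and the "oboe" rule are all made-up stand-ins for whatever cues a trained network actually latches onto.

```python
# Toy sketch of a classifier that has latched onto a single telltale feature.
# In training, "oboe" only ever appeared in orchestral music, so the model
# treats that one cue as decisive - which is exactly what an attacker exploits.

def classify(features):
    """Return a genre label based solely on the one cue the model learned."""
    return "orchestral" if "oboe" in features else "disco"

disco_track = {"four_on_the_floor", "falsetto_vocals", "string_stabs"}
print(classify(disco_track))              # -> disco

# Adversarial tweak: bury an "oboe" the human ear would never notice.
# Every other feature still screams disco, but the label flips anyway.
print(classify(disco_track | {"oboe"}))   # -> orchestral
```

The point of the sketch is that the model never weighs "the totality" of the music; it checks only the handful of cues it learned, so planting one of them is enough.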


TEMPLE-RASTON: Know what the AI or neural network is focusing on, tweak that and you're done. Humans don't make that mistake because they take in the totality. AI systems are easy to nudge into making the wrong inference.


TEMPLE-RASTON: One of my favorite examples of this came from computer scientists at Carnegie Mellon University. And it's a fairly basic experiment. The CMU researchers trained a computer to use facial recognition to identify different people. The computer dutifully ingested lots of photographs of the people and identified them correctly every time. But then they put big, colorful glasses on a subject who hadn't had glasses on before, and the computer confidently misidentified him. Fooling AI can be that easy. This is the kind of thing Siegelmann thinks a lot about.

SIEGELMANN: So there are examples on images that if you put dominance patch in an image, then the network will attend to this patch and will tell you...

TEMPLE-RASTON: A dominance patch - think of it as the equivalent of a bright, shiny object that catches your attention or big, bright glasses that get the computer to focus on that instead of the bigger picture. Siegelmann says this could become a problem on the battlefield. She is with DARPA, after all.

SIEGELMANN: So perhaps I'm just imagining. People can put some patch on their clothes...

TEMPLE-RASTON: A patch on their clothes, instead of those glasses the CMU researchers added to one of their subjects - that patch could fool an AI system into thinking someone is a member of your platoon when, in fact, it's the enemy.

I'm here to see Dawn Song on the 34th floor.

So there's a pretty famous experiment about this conducted by UC Berkeley professor Dawn Song.

Wow. This is quite a view.

SONG: Yeah, the view is nice.

TEMPLE-RASTON: We met in her office. And from her conference room, you can see the entire San Francisco Bay.

SONG: You can actually see the Golden Gate from here.

TEMPLE-RASTON: Alcatraz, the Golden Gate - Song is a long way from where she grew up in Northeast China, and one of the things that brought her to this high-rise with a great view of the bay is a short video about AI that went viral, which she made with some of her colleagues at Berkeley.

SONG: Let me start playing the video.

TEMPLE-RASTON: The video doesn't have any sound, and it's less than a minute long. But it has rocked the AI community because it showed just how vulnerable AI and driverless cars can be.

SONG: So first let me explain what you will see in the video. So in the video, you'll see two frames side by side.

TEMPLE-RASTON: Think split screen.

SONG: In both frames, you'll see the vehicle is driving towards the end of the road, where there's a person holding a stop sign.

TEMPLE-RASTON: The split screens are subtitled so you can see how the AI - and specifically something called image classification - is making decisions inside the autonomous car.

SONG: You'll see the prediction given by the image classification system to try to predict what the traffic sign is.

TEMPLE-RASTON: So sort of like the car starting to think, hm, a sign is coming. I'm going to have to make a decision.

SONG: Right.

TEMPLE-RASTON: So the key part of the experiment is that Song put some strategically placed stickers on one of the stop signs. There's one sticker below the S and the other above the O in stop. The other stop sign hasn't been changed. And as the car gets closer to the sign, the subtitles on the screen are showing how the AI system is interpreting what it's seeing. The AI reads the regular sign just fine. But the one with the stickers - it thinks it says, speed limit 45 miles an hour. And the car blows right through the intersection. Two carefully placed stickers was all it took to make a self-driving car run a stop sign.

So you were expecting it to misread the sign. And then it did, and you were happy about that.

SONG: In some sense, yes, because we created the adversarial example. So it is surprising still, given that - how well it worked.

TEMPLE-RASTON: It worked so well that people who were developing driverless cars tapped the brakes, and it helped launch a movement to see how vulnerable AI really is. Now, to be fair, Song's team didn't just throw some stickers onto a sign and fool an AI system. It was a very involved and sophisticated process. They knew exactly how the AI's image classification system worked. They knew which pixels of the sign to manipulate to fool it.

So you might ask, a speed limit sign is a totally different shape than a stop sign; how could two stickers fool the AI system so completely? Song says it's because AI isn't seeing the sign the way we might see a sign with our own eyes. AI sees the sign as a mathematical equation, not a shape.

SONG: At the end of the day, the human vision system is also a function.

TEMPLE-RASTON: A function or a calculation.

SONG: But it's a much more sophisticated function. And through the evolution and through learning, the human vision system is working really, really well and has learned to learn the right things. And even though deep neural networks has made tremendous advancements - and for certain vision tasks, it actually has achieved a human level of performance. But it has these issues that it can be fooled very easily.

TEMPLE-RASTON: Song says there have been huge strides in AI, but it's still in its infancy.

SONG: We need to understand that the machine - like, the machine learning system is not - actually, it's not as powerful as what people think. We still have a lot of work to do.

TEMPLE-RASTON: Decades before there's a safe self-driving car?

SONG: We do really need new and more breakthroughs before we can really get there.

TEMPLE-RASTON: So would you ride in a driverless car?

SONG: Not today.


SONG: I mean, I would enjoy having a test drive. But...


TEMPLE-RASTON: Hava Siegelmann over at DARPA has similar reservations.

SIEGELMANN: If you can do things with, like, a stop sign - think that the person, or there's a group, is driving a tank, and they put the particular sticker on it. And because that sticker that has particular color - we think that this tank is actually an ambulance. And immediately, we open the gates to let the ambulance go in.

TEMPLE-RASTON: The reason to study all of this isn't to scare us about AI, although it does that, too. The reason Song and Siegelmann are trying to see the limits of AI is so they can fix it, kind of like old-fashioned hackers who used to call up software companies and let them know about flaws in their coding so they could send out patches.

SIEGELMANN: So the field started a few years ago when people said, wow, there is a really interesting feature of the neural network; that it's really not robust.

TEMPLE-RASTON: And you know that stop sign that launched a thousand doubts about AI? It's now hanging up at the London Museum of Science and Technology. It's part of an exhibit about our driverless future.


UNIDENTIFIED SINGERS (SINGERS): (Singing) We've got to slow down, slow down.

TEMPLE-RASTON: This is from a musical short General Motors produced for an auto expo in 1956, and it's all about the future of driving.


UNIDENTIFIED SINGER #1 (SINGER): (Singing) You can bet your high compression we're going to be late.

TEMPLE-RASTON: And in the video, the car becomes a time machine, taking the four people in the car into the future.


UNIDENTIFIED ACTOR #3 (ACTOR): (As character) Hey, I wonder what we'd hear if I turn on the switch, and we're driving along in 1976.

TEMPLE-RASTON: 1976 - modern machinery. In this case, think "The Jetsons" - a driverless car that looks a lot like a spaceship.


UNIDENTIFIED ACTOR #4 (ACTOR): (As character) Well done, Firebird 2. You're now under automatic control; hands-off steering.

TEMPLE-RASTON: This all goes on for almost nine minutes, and 1976 came and went without a driverless car. And so did 2018. Most experts will tell you that driverless cars are more than a decade away.


UNIDENTIFIED ACTOR #5 (ACTOR): (As character) Here we go on the high-speed safety lane.

TEMPLE-RASTON: The sticking point has always come down to artificial intelligence - using neural networks to identify elephant songs...


TEMPLE-RASTON: ...Or to anticipate where poachers might be...

REID: The old system was a map on the wall.

TEMPLE-RASTON: ...Allows AI to be wrong sometimes without life-threatening consequences. One of the things that researchers are looking at is whether AI can be trained to explain to us how it's making decisions. Until we understand that, researchers will tell you that AI really shouldn't be trusted with the car keys.


UNIDENTIFIED ACTOR #6 (ACTOR): (As character) Ah, this is the life - safe, cool, comfortable. Mind if I smoke a cigar?

UNIDENTIFIED ACTOR #7 (ACTOR): (As character) Oh, not with this wonderful air conditioning.

TEMPLE-RASTON: This is I'LL BE SEEING YOU from NPR. The show was written and hosted by me, Dina Temple-Raston. And our producer is Adelina Lancianese, and she scored the show too. Field production by Michael May. Our theme music is composed by Ramtin Arablouei. And special thanks to NPR investigations, the Story Lab, Kenny Malone, John Keefe of the Quartz AI Studio and Josephine Wolff at Tufts University.


UNIDENTIFIED ACTOR #5: (As character) The safe, easy way to make a turn.

TEMPLE-RASTON: And if you missed one of our previous shows, just go to or find us on NPR One. Next time, we get exclusive access to the U.S. military unit fighting the most secretive and deadly terrorist organization in the world. And their weapon is a computer keyboard.

UNIDENTIFIED PERSON #5: You could take those over. We were going to win everything. It was a house of cards.

UNIDENTIFIED PERSON #6: So within the first 60 minutes of go, I knew we were having success. And that continued, obviously, well into the night.

TEMPLE-RASTON: I'm Dina Temple-Raston, and I'll be seeing you.


UNIDENTIFIED SINGER #2 (SINGER): (Singing) Roger, Firebird, I'll swing you to the right; hands-on steering. Firebird, good night.

Copyright © 2019 NPR. All rights reserved. Visit our website terms of use and permissions pages at for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.