Here's how Israel is using artificial intelligence to find targets in Gaza. Israel's military says the system makes it more efficient and reduces collateral damage. Critics see a host of problems with the nation's use of AI, but other militaries will likely follow suit.

Israel is using an AI system to find targets in Gaza. Experts say it's just the start


STEVE INSKEEP, HOST:

Israel's military says it struck 250 targets in Gaza yesterday - 250 targets in a single day. In months of war, Israeli forces have hit tens of thousands of buildings in Gaza in their campaign to wipe out Hamas. This is what President Biden called indiscriminate bombing the other day, although Israeli officials insist they are carefully choosing targets, often using artificial intelligence. NPR's Geoff Brumfiel has been looking into that part of the story. Geoff, good morning.

GEOFF BRUMFIEL, BYLINE: Good morning.

INSKEEP: How does the Israeli system work?

BRUMFIEL: The system is called the Gospel. And basically, it takes an enormous quantity of surveillance data, crunches it all together and makes recommendations about where the military should strike. I spoke to Tal Mimran, a lecturer at Hebrew University who has worked in the Israeli military as a legal advisor on targeting, and here's what he said about it.

TAL MIMRAN: So basically, Gospel imitates what a group of intelligence officers used to do in the past.

BRUMFIEL: Now, there are still people in the loop. The Israeli military says all recommendations are reviewed by human analysts. But when it comes to generating targets, the system seems to be quite speedy. They used Gospel in a similar conflict with Hamas in 2021, and it produced 200 targets on relatively short notice. That's something Mimran says human analysts would have struggled to do.

INSKEEP: When you talk about an enormous quantity of surveillance data, you're making me imagine you could monitor people's cellphone locations and conversations. You might have human sources on the ground. You might have information from past incursions into Gaza. Am I getting this about right?

BRUMFIEL: Yeah, that sounds about right. The AI probably works on all this stuff, including drone footage as well as satellite images.

INSKEEP: Oh, OK. Does it work?

BRUMFIEL: Well, this is, you know, the tough question to answer. Researchers agree that AI is good at sorting through all this data, but there's disagreement about whether it can really deliver targeting results. That comes down to training, and we know Israel's AI has had training problems in the past. For example, intelligence services kept records of everything they classified as a target, but they didn't keep records of things that weren't targets. Now, you'd want to train your AI so it could learn from both. Heidy Khlaaf is an AI expert at a company called Trail of Bits. She's very critical of the idea of putting AI in charge of targeting, in part because she thinks the training data just won't be good enough to tell a Hamas fighter from a civilian.

HEIDY KHLAAF: You're ultimately looking at imprecise and biased target automation that's really not far from indiscriminate targeting.

INSKEEP: Oh, and she just used that word, indiscriminate, which is the same word that President Biden is using this week. Why is there so much damage, I guess, is a fair question. Why would there be so much damage in Gaza if AI is that precise?

BRUMFIEL: Yeah, there are sort of two possible answers in my mind. I mean, either the military's AI targeting isn't working very well, or the Israeli military has decided it's OK to do more damage and kill more civilians to reach its goals. And, you know, this brings up another really interesting issue with using AI in this way. These machine-learning algorithms train themselves. And Khlaaf says that kind of dilutes the blame if things go wrong.

KHLAAF: It then becomes impossible to trace decisions to specific design points that can hold any individuals or military accountable for an AI's actions.

BRUMFIEL: Now, others I spoke to say the legal landscape doesn't change. Commanders are still responsible, but we really don't know how this would play out if there were, say, a war crimes trial over a target an AI had chosen.

INSKEEP: OK. NPR's Geoff Brumfiel, thanks for the insights.

BRUMFIEL: Thank you.

Copyright © 2023 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.