Adversarial AI: Fooling Artificial Intelligence (Short Wave)

Artificial intelligence might not be as smart as we think. University and military researchers are studying how attackers could hack into AI systems by exploiting how these systems learn. It's known as "adversarial AI." In this encore episode, Dina Temple-Raston tells us that some of these experiments use seemingly simple techniques.

For more, check out Dina's special series, I'll Be Seeing You.

Email the show at shortwave@npr.org.

How Hackers Could Fool Artificial Intelligence

[Image: Internet Security. Andriy Onufriyenko/Getty Images]


This episode was produced by Rebecca Ramirez and edited by Viet Le.