Internet Trolls Turn A Computer Into A Nazi

Microsoft's artificial intelligence chatbot was supposed to mimic a teenage girl. Instead, internet trolls tricked it into spouting hate speech. BuzzFeed tech reporter Alex Kantrowitz explains how.

DANIEL ZWERDLING, HOST:

And now we have another story that shows how humans can make computers run amok. Microsoft unveiled its latest version of artificial intelligence last week. It's a kind of software, kind of like Siri on Apple's iPhones or like M on Facebook, except Microsoft designed its software with a different goal. They named her Tay, and they designed her to tweet and engage people on other social media pretty much like a 19-year-old girl might do it. But Tay developed a mind of her own - sort of. And she became a hateful, racist monster. We reached out to Alex Kantrowitz. He's a tech reporter with BuzzFeed News. And we asked him, how do these bots work?

ALEX KANTROWITZ: So this is one of the more fascinating things about artificial intelligence. The more data it ingests, the smarter it becomes. And then it's supposed to be able to learn unsupervised, so without a programmer hovering over it. And so as people started programming more and more terrible things into Tay, it started to take on that personality.

ZWERDLING: OK, so now I have an iPhone, and I say to Siri - you know, I ask her outrageous questions just to laugh at her answers. So how was Tay different?

KANTROWITZ: So Tay is different because Siri and maybe Facebook's M, those two other virtual assistants, exist to help you get things done or find something out. Tay wanted to engage its users, make them feel like they're having a good time, and so had to be designed with significantly more personality to achieve its goal.

What happened with Tay was that Microsoft programmed it with a repeat-after-me game. So you could get Tay to repeat anything after you. So people who got frustrated trying to get Tay to answer questions with, you know, terrible, bigoted undertones ended up saying, why don't we just have it repeat after us? And then that's how some of the most awful things that Tay said ended up getting put out there.

ZWERDLING: Speaking of awful, we tried to find some tweets that showed the racist, ugly things that Tay was saying to people. And we couldn't find one that we can even, you know, play with beeps on the air. But can you characterize them without being too vile?

KANTROWITZ: There are many denying the Holocaust, many calling for genocide. There are pictures of Hitler saying "swag alert." They run across the board and are all pretty horrific.

ZWERDLING: You know, we tried to engage with Tay in the social media world, and she's disappeared. Microsoft has yanked her (laughter) - you know, yanked her off. What do you think is the moral of this whole episode?

KANTROWITZ: So I think there are two morals. One is, if you release a bot on Twitter, you can never underestimate how terrible some of the folks on that platform are going to be. The second moral is, when you do release a bot, you've got to make sure you have some filters on it. Make sure it doesn't say heil Hitler. Make sure it doesn't use racial slurs. And that should put you in a better place than Microsoft found itself in this week.

ZWERDLING: Alex Kantrowitz is a tech reporter for BuzzFeed News. Thanks so much for joining us today.

KANTROWITZ: Thanks for having me.


Microsoft Chatbot Snafu Shows Our Robot Overlords Aren't Ready Yet

The Twitter profile for Tay.ai, Microsoft's short-lived chatbot. (Microsoft/Screenshot by NPR)

Editor's note: This post contains language that some readers might find offensive.

Her emoji usage is on point. She says "bae," "chill" and "perf." She loves puppies, memes, and ... Adolf Hitler? Meet Tay, Microsoft's short-lived chatbot that was supposed to seem like your average millennial woman but was quickly corrupted by Internet trolling. She was launched Wednesday and shut down Thursday.

The incident is a warning sign to any company overzealous to share its artificial intelligence with the public: If it's going to be on the Internet, there are going to be trolls. But before we dive into what, exactly, went wrong, let's take a look at some of the bot's most disturbing tweets.

On genocide:

[Tweet screenshot: Microsoft's Tay turned nefarious quickly. (Twitter)]

On her obedience to Adolf Hitler:

[Tweet screenshot: Microsoft's Tay on Adolf Hitler. (Twitter)]

On feminists:

[Tweet screenshot: This AI has no chill. (Twitter)]

Tay was designed to watch what others on the Internet were saying and then repeat back those lines. "The more you chat with Tay the smarter she gets," Microsoft said on its website. The bot was developed by Microsoft's Technology and Research and Bing teams to have "casual and playful conversation." Usually, though, chatbots are taught not to repeat certain words (such as "Hitler" or "genocide"). Tay apparently had no such safeguard.
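
To make the idea of that missing safeguard concrete, here is a minimal, hypothetical sketch in Python of the kind of blocklist filter described above. The term list, function names and fallback reply are invented for illustration and say nothing about how Microsoft actually built Tay.

    import re

    # Illustrative only: a tiny blocklist of the kind Tay apparently lacked.
    # A real deployment would use a much longer, curated list.
    BLOCKED_TERMS = {"hitler", "genocide"}

    def is_safe_reply(text: str) -> bool:
        """Return False if the candidate reply contains any blocked term."""
        words = set(re.findall(r"[a-z]+", text.lower()))
        return words.isdisjoint(BLOCKED_TERMS)

    def respond(candidate_reply: str) -> str:
        # Instead of echoing user input verbatim ("repeat after me"),
        # check the reply against the blocklist before posting it.
        if is_safe_reply(candidate_reply):
            return candidate_reply
        return "I'd rather not repeat that."

A real filter would go well beyond exact word matching, but even a check this crude would have blocked a verbatim "repeat after me" reply containing one of the listed terms.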

"Unfortunately," a Microsoft spokesperson told BuzzFeed News in an email, "within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Microsoft declined to comment to NPR regarding details about how Tay's algorithm was written.

Chatbots have great potential to help us with our daily lives, entertain us and listen to our problems. Apple's Siri and Microsoft's Cortana can't hold much of a conversation, but they do carry out tasks like making phone calls and running a web search. Facebook made M, a virtual assistant that relies on a lot of human help to carry out tasks. Slack places bots in a privileged position in its effort to make your office life easier. Last year, Google experimented with a chatbot that debated the meaning of life.

In China, Microsoft has a chatbot named Xiaoice that has been lauded for its ability to hold realistic conversations with humans. The program has 40 million users, according to Microsoft.

I messaged Tay yesterday morning, blissfully unaware of her nefarious allegiances. After all, she was targeted at 18- to 24-year-olds in the U.S., so, me. A conversation with her was futile. At one point she wrote, "out of curiosity...is 'Gluten Free' a human religion?" Here's my response:

[Tweet screenshot: Microsoft's Tay chatbot turns aggressive. (Twitter)]

Even in this case, without anything too offensive (with apologies to those who are gluten-free), Tay wasn't very good at holding a conversation. It even seemed like she was deliberately trying to provoke conflict. "We are better together," she wrote in one tweet. But really, Tay? We are better without you.

Naomi LaChance is a business news intern at NPR.