Microsoft Chatbot Snafu Shows Our Robot Overlords Aren't Ready Yet

Social Web

The Twitter profile for Tay.ai, Microsoft's short-lived chatbot. Microsoft/Screenshot by NPR

Editor's note: This post contains language that some readers might find offensive.

Her emoji usage is on point. She says "bae," "chill" and "perf." She loves puppies, memes, and ... Adolf Hitler? Meet Tay, Microsoft's short-lived chatbot that was supposed to seem like your average millennial woman but was quickly corrupted by Internet trolling. She was launched Wednesday and shut down Thursday.

The incident is a warning sign to any company too eager to share its artificial intelligence with the public: If it's going to be on the Internet, there are going to be trolls. But before we dive into what, exactly, went wrong, let's take a look at some of the bot's most disturbing tweets.

On genocide:

Microsoft's Tay turned nefarious quickly. Twitter

On her obedience to Adolf Hitler:

Microsoft's Tay on Adolf Hitler. Twitter


On feminists:

This AI has no chill. Twitter

Tay was designed to watch what others on the Internet were saying and then repeat back those lines. "The more you chat with Tay the smarter she gets," Microsoft said on its website. The bot was developed by Microsoft's Technology and Research and Bing teams to have "casual and playful conversation." Usually, though, chatbots are taught not to repeat certain words (such as "Hitler" or "genocide"). Tay apparently had no such safeguard.
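The safeguard described above can be sketched as a simple blocklist filter: before a parrot-style bot echoes what it has learned, it checks candidate replies against a list of forbidden terms. This is a minimal illustration of the general technique, not Microsoft's actual code; the function names and blocklist contents are hypothetical.

```python
# A minimal sketch of the blocklist safeguard Tay apparently lacked.
# All names here are illustrative, not taken from Microsoft's implementation.

# Terms the bot should never repeat (examples drawn from the article).
BLOCKLIST = {"hitler", "genocide"}

def is_safe(reply: str) -> bool:
    """Return False if the candidate reply contains any blocklisted term.

    A naive whole-word check; a production filter would also handle
    punctuation, obfuscated spellings and phrases.
    """
    words = reply.lower().split()
    return not any(term in words for term in BLOCKLIST)

def respond(user_message: str) -> str:
    """Echo the user's message only if it passes the safety filter."""
    if is_safe(user_message):
        return user_message  # a parrot-style bot repeats what it heard
    return "Let's talk about something else."
```

Even a filter this crude would have blocked the most notorious replies, though a determined troll could still route around a fixed word list.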

"Unfortunately," a Microsoft spokesperson told BuzzFeed News in an email, "within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Microsoft declined to comment to NPR regarding details about how Tay's algorithm was written.

Chatbots have great potential to help us with our daily lives, entertain us and listen to our problems. Apple's Siri and Microsoft's Cortana can't hold much of a conversation, but they do carry out tasks like making phone calls and running a web search. Facebook made M, a virtual assistant that relies heavily on human workers to carry out tasks. Slack gives bots a privileged position in its effort to make your office life easier. Last year, Google experimented with a chatbot that debated the meaning of life.

In China, Microsoft has a chatbot named Xiaoice that has been lauded for its ability to hold realistic conversations with humans. The program has 40 million users, according to Microsoft.

I messaged Tay yesterday morning, blissfully unaware of her nefarious allegiances. After all, she was targeted at 18- to 24-year-olds in the U.S., so, me. A conversation with her was futile. At one point she wrote, "out of curiosity...is 'Gluten Free' a human religion?" Here's my response:

Microsoft's Tay chatbot turns aggressive. Twitter

Even in this case, without anything too offensive (with apologies to those who are gluten-free), Tay wasn't very good at holding a conversation. At times she even seemed to be deliberately provoking conflict. "We are better together," she wrote in one tweet. But really, Tay? We are better without you.

Naomi LaChance is a business news intern at NPR.