Is A Threat On Facebook Real? Supreme Court Will Weigh In : All Tech Considered The Supreme Court has agreed to hear a case involving perceived death threats on Facebook. The court and the company could have starkly different approaches to identifying credible threats.

Is A Threat On Facebook Real? Supreme Court Will Weigh In



LINDA WERTHEIMER, HOST: This is MORNING EDITION from NPR News. I'm Linda Wertheimer.


RENEE MONTAGNE, HOST: And I'm Renee Montagne. The Supreme Court, this week, agreed to hear a case about a man who threatened, on Facebook, to kill his wife. He was arrested and tried. His lawyers say that he never intended to do it. He was just venting after a bad breakup. As the nation's top court considers his actions, Facebook executives are also puzzling over how to deal with threatening speech on the social media platform. NPR's Aarti Shahani has more.

AARTI SHAHANI, BYLINE: Anthony Elonis got dumped and lost his job.

D'OVIDIO: He took to Facebook as a form of - as, what he says - a form of therapy.

SHAHANI: Criminologist Rob D'Ovidio at Drexel University is following the case. And by therapy, he means threats, which Elonis made repeatedly on Facebook to his ex, to law enforcement and, as life kept falling apart, to an unspecified elementary school.

D'OVIDIO: Elonis was sentenced to 44 months in prison and three years of supervised community release.

SHAHANI: Elonis's wife said she felt scared. His defense says the graphic language was a joke. Take this one post, which D'Ovidio reads out loud.

D'OVIDIO: (Reading) Did you know that it's illegal for me to say I want to kill my wife?

SHAHANI: Elonis claims he lifted the lines from a comedy called "The Whitest Kids U' Know."


TREVOR MOORE: Hi, I'm Trevor Moore. Did you know that it's illegal to say I want to kill the President of the United States of America?

SHAHANI: The Supreme Court will consider whether Elonis's language was a true threat, which the lower court defined as speech so clearly objectionable that any objective listener could be scared. Meanwhile, the company, Facebook, has already decided that keywords are not an effective way to look for threats on the site.

ARTURO BEJAR: Especially the things that get more reported for the more intense reasons, are things that - you look at the text, and it's like, I had no idea from looking at this text that this was going on.

SHAHANI: Arturo Bejar is director of engineering at Facebook. While the platform has hard and fast rules against porn, it does not forbid specific violent words. While algorithms crawl through the site in search of our deepest consumer demands, there's no algorithm looking for credible threats. That's because, Bejar says...

BEJAR: Intent and perception really matter.

SHAHANI: Bejar's little-known section of the Facebook machine works on conflict resolution. He's gone to leading universities and recruited experts in linguistics and compassion research. Together, they field users' complaints about posts at a massive scale.

BEJAR: We facilitate approximately four million conversations a week.

SHAHANI: By conversation, he really does mean getting people to communicate directly with each other - not just complain anonymously. It's couples therapy light for the social media age, and it turns out, a button that says report is a real conversation killer.

BEJAR: We were talking to teenagers, and it turns out that they weren't clicking on report because they were worried that they would get a friend in trouble.

SHAHANI: When his team changed it to a softer phrase - this post is a problem - complaints shot up. They also revamped the automated form so that the person complaining names the recipient and the emotion that got triggered.

BEJAR: Hey Aarti, this photo that you shared of me is embarrassing to me.

SHAHANI: More people started to complain. And according to the data, the word embarrassing really works.

BEJAR: There's an 83 to 85 percent likelihood that the person receiving the message is going to reply back or take down the photo.

SHAHANI: Facebook has several hundred employees around the world who can step in when the automated tools fail, and threat detection is clearly a work in progress. Consider two cases. In the first, Facebook user Sarah Lebsack complained about a picture that a friend posted of his naked butt.

SARAH LEBSACK: It wasn't the most attractive rear end I've ever seen, but also just not what I wanted to see as I browse Facebook.

SHAHANI: And how long did it take to - for them to take the picture down?

LEBSACK: Oh, not long at all, it was maybe a couple of hours.

SHAHANI: User Francesca Sam-Sin complained about a post that she says put her safety at risk. She recently had flowers delivered to her mom after a surgery.

FRANCESCA SAM-SIN: So she posted a picture of the flowers, and the card had my full name, my address and my cell phone number on it. And it was open to the public. It wasn't just limited to her friends.

SHAHANI: Sam-Sin says her mom wouldn't delete the post because she wanted to show off the bouquet, and Facebook wouldn't get involved in family matters. Aarti Shahani, NPR News.

Copyright © 2014 NPR. All rights reserved. Visit our website terms of use and permissions pages for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.