All Tech Considered

Murder Video Again Raises Questions About How Facebook Handles Content

A murder video uploaded to Facebook brought attention to how the company monitors upsetting content. Artificial intelligence may be a solution, but experts say it could bring unwanted results.


ROBERT SIEGEL, HOST:

There was outrage on social media and elsewhere this week after a video of a murder in Cleveland remained on Facebook for two hours before the company took it down. As NPR's Laura Sydell reports, it's an example of an ongoing problem for the world's largest social network.

LAURA SYDELL, BYLINE: Facebook CEO Mark Zuckerberg made a contrite statement about the Cleveland murder of Robert Godwin Sr. earlier this week at the company's developers conference.

(SOUNDBITE OF ARCHIVED RECORDING)

MARK ZUCKERBERG: Our hearts go out to the family and friends of Robert Godwin Sr. And we have a lot of work. And we will keep doing all we can to prevent tragedies like this from happening.

SYDELL: But what more can Facebook do? It's in a complicated position. On the one hand, its users want to be free to express themselves. And yet, they do want some protection. Daphne Keller is a law professor at Stanford University.

DAPHNE KELLER: Half the time it's oh no, Facebook didn't take something down and we think that's terrible. They should have taken it down. And the other half of the time is oh no, Facebook took something down and we wish they hadn't.

SYDELL: For example, last year, there was outrage when Facebook took down a post of an iconic Vietnam War photo of a naked girl running from a napalm attack. And Keller says Facebook isn't actually under any legal obligation to take down a video of a crime.

Keller says society isn't sure yet whether Facebook should be like the phone company, which isn't responsible for what people say, or if it should be like a traditional broadcaster, where there are strict regulations on what can be put on the air.

KELLER: And I think Facebook isn't really exactly like either of those two things. And that makes it hard as a society to figure out what it is we do want them to do.

SYDELL: At this point, nearly 2 billion people use Facebook. And more than a hundred million hours of video are watched on the platform every day. Media outlets, including NPR, upload videos and Facebook pays for those. There may simply be limits on what the company can do to monitor so much video. Facebook has three ways of monitoring content.

There are the users, like the ones who flagged the murder video from Cleveland. Facebook does have human editors who evaluate flagged content. And there's artificial intelligence, which can monitor enormous amounts of content. But it has its limits, says Nick Feamster, a professor of computer science at Princeton University. Take that iconic naked girl photo from Vietnam.

NICK FEAMSTER: Can we detect a nude image? That's something that an algorithm is pretty good at. Does the algorithm know context and history? That's a much more difficult problem.
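A minimal sketch makes that asymmetry concrete. The classifier below is a generic, untrained binary head standing in for a real nudity detector; the model, labels, and preprocessing are illustrative assumptions, not Facebook's actual pipeline. Whatever score it produces, nothing in the computation encodes a photo's history or news value.

```python
# Sketch: scoring pixels is a standard classification task, but
# "context and history" never enter the computation. Model, labels,
# and preprocessing here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Stand-in for a model fine-tuned on [not_nude, nude] labels.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def nudity_probability(path: str) -> float:
    """Return the classifier's probability that the image contains nudity."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return torch.softmax(logits, dim=1)[0, 1].item()

# The score is the same kind of number for a policy violation and for
# the Pulitzer-winning napalm photo; why an image matters is invisible.
```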

SYDELL: Feamster says it's not a problem that's likely to be solved anytime soon. However, Feamster, who spoke over Skype, says artificial intelligence might be able to notice signs of a troublesome account. It's sort of like the way a bank assesses credit ratings.

FEAMSTER: Over time, you might learn a so-called prior probability that suggests that maybe this user is more likely to be bad or more likely to be posting inappropriate or unwanted content.
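In code, the idea Feamster describes maps onto a simple Bayesian running estimate per account, much like a credit score updated with each repayment. Here is a minimal sketch assuming a Beta-Binomial model; the prior counts and the review signal are illustrative assumptions, not anything Facebook has described.

```python
# Sketch of a learned "prior probability" per account: each reviewed
# post updates a Beta-Binomial estimate that the account's next post
# will violate policy. Prior counts here are assumptions.
from dataclasses import dataclass

@dataclass
class AccountRisk:
    alpha: float = 1.0   # pseudo-count of violating posts (Beta prior)
    beta: float = 20.0   # pseudo-count of acceptable posts

    def observe(self, violation_upheld: bool) -> None:
        """Update the estimate after human review of one flagged post."""
        if violation_upheld:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def risk(self) -> float:
        """Posterior mean probability the next post violates policy."""
        return self.alpha / (self.alpha + self.beta)

account = AccountRisk()
for upheld in (True, True, False, True):   # three of four flags upheld
    account.observe(upheld)
print(f"estimated risk: {account.risk:.2f}")   # 0.16 with this prior
```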

SYDELL: So Facebook would keep a closer eye on that account. Between artificial intelligence and more human monitoring, it might be possible to stop the posting of criminal videos and hate speech. But Stanford's Keller wonders if that's really what we want.

KELLER: Do we want one of our key platforms for communication with each other to have built-in surveillance and monitoring for illegal activity and somebody deciding when what we said is inappropriate and cutting it off? That's kind of a dystopian policy direction as far as I'm concerned.

SYDELL: Keller is willing to make a prediction. Very soon, someone else will upload another video that forces the country to ask these same questions again. Laura Sydell, NPR News.

Copyright © 2017 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.