From Hate Speech To Fake News: The Facebook Content Crisis Facing Mark Zuckerberg

The Facebook chief has an army of subcontractors making editorial judgments about millions of pieces of content — like a media company. But the rules they operate by are complex and contradictory.


STEVE INSKEEP, HOST:

We have some inside information on how Facebook works - specifically, it's information about how Facebook regulates what you see, everything from hate speech to fake news stories and more. It's all in the news because of concerns that fake news stories may have influenced voters in last week's election. But it's been an issue for Facebook for quite some time. NPR's Aarti Shahani learned how Facebook does what it does, so we got her on the line for what is not a fake news story.

AARTI SHAHANI, BYLINE: Hi. How are you?

INSKEEP: So how big is this? How much bigger is this than just fake news?

SHAHANI: It's way bigger than fake news, and it's something that we're seeing over and over and over again with a different controversy of the moment. So for example, over the summer, you'll recall, there were these three high-profile shootings, right, against police officers and against two black men, one of whom bled to death on Facebook Live. At that point during the summer, employees in the company told me that users were flagging each other's posts left and right. That is, users can alert Facebook to a post and say, hey, this violates your rules; I want this taken down.

INSKEEP: The rules having to do with what's hate speech and what's not, what's decent and what's not, that sort of thing.

SHAHANI: Yeah, and not just hate speech but, for example, what's nudity, what's sexist. It's actually quite strict.

INSKEEP: And then the question you sought to answer was, how do they actually enforce those rules, given the complexity of all the different kinds of things that are put online? What did you find?

SHAHANI: So Facebook's head of global policy, a woman named Monika Bickert, said that any time a post is flagged, her staff comes together and puts a lot of thought into the decisions about what stays up and what comes down, and that they deeply consider the context of each post. She used that word over and over again to suggest, we are really thoughtful, and we stand by our decisions.

INSKEEP: So it sounds pretty good, but what did you find about what they actually do?

SHAHANI: The truth is, Facebook actually has an entire army of subcontractors out in Warsaw and in Manila - the Philippines. And because of privacy laws as well as technical glitches, these subcontractors can't even see the entire post that they're looking at. And they're pressured to work at an extremely fast rate - about one post every 10 seconds.

INSKEEP: So the people who are supposed to be figuring this out and monitoring hate speech and other kinds of offensive speech have to decide in 10 seconds, based on hardly any information?

SHAHANI: They have to decide quickly, and they are in a work environment that encourages them to go at lightning speed. Let's just say they have a regular shift at the rate of one post every 10 seconds. That means each person is clearing about 3,000 posts a day. That's very different from thoughtful, slow and precise.
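
[A rough back-of-the-envelope check of that "about 3,000 posts a day" figure, assuming a standard eight-hour shift; the interview does not specify shift length, so the shift duration here is an assumption.]

```python
# Back-of-the-envelope check of the figure cited in the interview.
SECONDS_PER_POST = 10          # rate described for Facebook's subcontractors
SHIFT_HOURS = 8                # assumption: a regular eight-hour shift

shift_seconds = SHIFT_HOURS * 60 * 60          # 28,800 seconds in the shift
posts_per_shift = shift_seconds / SECONDS_PER_POST

print(posts_per_shift)         # 2880.0, i.e. roughly 3,000 posts per day
```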

INSKEEP: OK, so how well is it working?

SHAHANI: Well, you know, we ran a little experiment, right? We ended up flagging about 200 posts that could be considered hate speech against blacks and against whites. And what we found is that Facebook makes a whole lot of mistakes. For example, we flagged a post that calls for the killing of cops, and they totally missed that. It's a call to violence, and they said it was permissible speech. This happened in dozens of instances, and Facebook had to reverse its decision. So lots of mistakes, you know, not a whole lot of confidence in the system.

INSKEEP: Does Facebook not really want to be in this business, not really want to be making editorial choices for their consumers?

SHAHANI: Well, you know, I think they're ambivalent. Basically, Mark Zuckerberg at age 19 starts this company. He describes it as a technology company just connecting people. Then he makes all these very strategic moves to make Facebook the thing through which you consume the news, the thing through which you have public discourse. And it also has to be a safe space where people don't feel threatened. So it's getting very, very messy. And he is clearly now the CEO of a media company, and it's not clear that he has the core competency for it.

INSKEEP: Aarti, thanks for your reporting.

SHAHANI: Thank you.

INSKEEP: That's NPR's Aarti Shahani. Now, a Facebook spokesperson got back to us, telling NPR that its subcontractors are put through rigorous quality controls. And as for that calculation of less than 10 seconds to examine each flagged post, Facebook disputes it, saying its employees' numbers are off, though the company did not offer its own numbers.

Copyright © 2016 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.