STEVE INSKEEP, HOST:
Almost by definition, many tech workers are among those who can work from home. Many are already in front of screens, already working on the internet. But one tech job at companies like Facebook, YouTube and Twitter is hard to do at home - moderating harmful content. Workers must do that while preserving privacy and security, and maybe not in front of their families in the living room. NPR tech correspondent Shannon Bond reports on a solution - artificial intelligence.
SHANNON BOND, BYLINE: The tech companies have been saying for years that they want computers to take on more content moderation. For one thing, they're faster than human reviewers, and they won't be traumatized by graphic violence or disturbing content. The pandemic has accelerated that transition. Graham Brookie is director of the Atlantic Council's Digital Forensic Research Lab, which tracks online disinformation.
GRAHAM BROOKIE: We're seeing that play out in real time at a scale that I think a lot of the companies probably didn't expect at all.
BOND: So what does this mean for what's showing up in your social media feeds? The companies themselves are warning there could be mistakes. Here's Facebook CEO Mark Zuckerberg on a recent call with reporters.
(SOUNDBITE OF ARCHIVED RECORDING)
MARK ZUCKERBERG: We may be a little less effective in the near term while we're adjusting to this.
BOND: That means some posts or videos might be incorrectly removed and others that should come down may be left up. At Facebook, humans are still reviewing the most difficult material, like posts about suicide and self-harm, terrorism and child exploitation. Many moderators are contractors, not full-time employees. But Facebook is shifting that work to employees so contractors can stay home. The platforms are grappling with how to get the critical work of moderation done as the volume of posts they have to review is skyrocketing. Graham Brookie says that's creating pressure.
BROOKIE: They are dealing with more information with less staff, which is why you've seen these decisions to move to more automated systems because, frankly, there's not enough people to look at the amount of information that's ongoing.
BOND: That includes false information about the pandemic, including bogus cures and harmful fake treatments. The World Health Organization calls the situation an infodemic, where too much information, both true and false, makes it hard for people to find sources they can trust. Brookie says that makes the platforms' decisions about what people are allowed to say even more important right now.
BROOKIE: I think that we should all rely on more moderation rather than less moderation in order to make sure that the vast majority of people are connecting with objective, science-based facts.
BOND: Some Facebook users are raising alarms that automated review is already causing problems. When they tried to post links to mainstream news sites like The Atlantic and BuzzFeed, they got notifications that the posts were spam. Facebook said that was an error because of a glitch in its automated spam filter. Zuckerberg said it was unrelated to the change in content moderation. Shannon Bond, NPR News, San Francisco.