DAVID GREENE, HOST:
After the attacks in San Bernardino and Paris, social media platforms have come under pressure. Lawmakers want them to do more to take down messages and videos intended to recruit militants or that serve as propaganda. NPR's Brian Naylor reports.
BRIAN NAYLOR, BYLINE: Videos seeking to glorify groups like ISIS abound on the Internet. The Middle East Media Research Institute's TV Monitor Project has a collection. Here's one in English.
(SOUNDBITE OF VIDEO)
UNIDENTIFIED MAN: We are men honored with Islam who climbed its peaks to perform jihad, answering the call to unite under one flag.
NAYLOR: Messages threatening or promoting terrorism violate the usage rules of most social media platforms. The platforms rely on their users to help flag inappropriate content. Nicole Wong is a former executive with Twitter and Google who also served as President Obama's deputy chief technology officer.
NICOLE WONG: When you're a service that has, as YouTube does, more than 300 hours of video uploaded every minute, or, as Facebook does, more than 250,000 photos uploaded every minute, it's really hard to be able to make the proper decisions behind taking stuff down.
NAYLOR: Some content - say, a video showing a beheading - is obviously offensive and an easy call to remove. But Wong says things like military training videos on YouTube are more difficult.
WONG: Some of the videos taken by our servicemen in Afghanistan look surprisingly similar to videos taken by the PKK, which is a designated terrorist organization in Turkey.
NAYLOR: Some have suggested social media sites weed out terrorist posts with the same kinds of sophisticated programs that help them identify images of child pornography - by comparing them to a national database. But videos are constantly changing. A measure in the Senate would require Internet companies to report knowledge of terrorist activities to the government. Emma Llanso of the Center for Democracy and Technology says the proposal is pretty vague.
EMMA LLANSO: The bill does not define what terrorist activity is, but it does create this obligation for companies that, you know, would carry some risk if they failed to comply.
NAYLOR: Llanso says the legislation could force social media platforms to send all sorts of personal information about their users to the government. Denise Zheng of the Center for Strategic and International Studies says she believes social media could be more proactive when it comes to taking down problem posts. But she says legislation could dry up important tips for law enforcement.
DENISE ZHENG: A lot of these individuals are actually identified using, you know, intelligence collection capabilities that monitor online behavior. So there are certainly intelligence interests, and we wouldn't want to hamper our efforts to identify ISIS militants and to take action against them.
NAYLOR: And there's a risk that demands for companies here to comply with the government could legitimize online censorship in places like China. Nicole Wong, the former White House and social media official, says it's a dilemma.
WONG: We have designed and thrived on an open Internet. And we need to figure out ways to keep those communications channels open, even as more people who we disagree with are getting on board and using these same platforms.
NAYLOR: But with each attack, the pressure on social media companies to do more ratchets up. Brian Naylor, NPR News, Washington.
NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.