Facebook Increasingly Reliant on A.I. To Predict Suicide Risk

Ten times a day, on average, Facebook's AI-driven self-harm detection system alerts authorities to people who may be about to hurt themselves.

LAKSHMI SINGH, HOST:

For the last year, Facebook has been running a new system that automatically scans people's accounts for signs of suicide risk and alerts the police. As NPR's Martin Kaste reports, it raises new questions about social media companies intervening in the real-world lives of their customers.

MARTIN KASTE, BYLINE: Facebook's using artificial intelligence to find cases of people who seem about to harm themselves. The AI is learning which kinds of online chatter it should take seriously. For instance, if a person is streaming a live video, and the replies to that video start to sound ominous...

ANTIGONE DAVIS: Maybe, like, please don't do this. We really care about you. There are different types of signals like that that will give us a strong sense that someone may be posting self-harm content.

KASTE: That's Antigone Davis, Facebook's global head of safety. When the software flags someone, she says Facebook staffers decide whether to call the local police. And AI comes into play there, too.

DAVIS: We also are able to use AI to coordinate a bunch of information on location to try to identify the location of that individual so that we can reach out to the right emergency response team.

KASTE: In this first year of the system's operation, that's happened now about 3,500 times, Facebook says. In other words, about 10 times a day, Facebook is calling police or first responders somewhere in the world to check on someone based on an initial alert produced by the monitoring software. This is a Facebook promotional video with testimonials from police in upstate New York talking about getting one of those alerts.

(SOUNDBITE OF ARCHIVED RECORDING)

JAMES GRICE: We did find her. She admitted to taking medication, and we were able to get her to a local hospital.

JOSEPH A. GERACE: There's no doubt in my mind that this saved her life.

KASTE: The new system has been welcomed by suicide prevention advocates, especially given the rising suicide numbers of recent years. But Mason Marks is more cautious.

MASON MARKS: I don't know if Facebook should be doing this.

KASTE: Marks studies the intersection of medicine, privacy and artificial intelligence. He says he gets why Facebook is doing this. The company has been under pressure, especially after some people used live video on the platform to broadcast suicides and self-harm. But he wonders whether using an AI to flag cases for police attention is the right solution.

MARKS: It needs to be done very methodically, very cautiously, transparently and really looking at the evidence.

KASTE: Marks doesn't like the fact that Facebook is holding back some key details. For instance, how accurate is this? How many of those 3,500 calls actually turned out to be real emergencies? He says outsiders have to be able to evaluate this system and its potential side effects.

MARKS: People may also learn that if they do talk about suicide openly, they might fear a visit from police, so they might pull back and not engage in an open, honest dialogue. And I'm not sure that's a good thing.

KASTE: And this kind of AI-based monitoring may soon go beyond suicide prevention. Again, Facebook's Antigone Davis.

DAVIS: I think more and more we will see AI used in the context of safety and in the context of potentially preventing harm.

KASTE: For instance, using AI to detect inappropriate interactions online between adults and minors. She says that's also something Facebook is experimenting with. Law professor Ryan Calo says this is the typical pattern for how a new monitoring technology expands into law enforcement use.

RYAN CALO: The way it would happen would be we would take something that everybody agrees is terrible. It would be something like suicide, which is epidemic, something like child pornography, something like terrorism - so these early things. And then, if they showed promise in those sectors, we broaden them to more and more things. And that - you know, that's a concern.

KASTE: Calo is co-director of the Tech Policy Lab at the University of Washington, and he specializes in technology and privacy. He says we need to think about the possibility that this kind of AI will be used more broadly - say, to monitor social media chatter for signs of impending violence between people. Would that be desirable?

CALO: If you can truly get an up or down, yes or no, and that's reliable, if intervention is not likely to cause additional harm, and if this is something that we think is important enough to prevent, then this is justified. And so that's a difficult calculus, and it's one that I think we're going to be making more and more.

KASTE: Especially if tech companies continue to show a willingness to call the police because of something an AI spotted online.

Martin Kaste, NPR News.

Copyright © 2018 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.