Scientists Tap Into Brain Signals To Synthesize Speech : Shots - Health News Scientists have found a way to transform electrical signals in the brain into intelligible speech. The advance may help people paralyzed by a stroke or disease, but the technology is experimental.

Decoded Brain Signals Could Give Voiceless People A Way To Talk


Scientists have found a way to transform brain signals into spoken words and sentences.

The approach could someday help people who have lost the ability to speak or gesture, a team from the University of California, San Francisco reported Wednesday in the journal Nature.

"Finding a way to restore speech is one of the great challenges in neurosciences," says Dr. Leigh Hochberg, a professor of engineering at Brown University who wasn't associated with the study. "This is a really exciting new contribution to the field."

Right now, people who are paralyzed and can't speak or gesture often rely on eye movements or a brain-controlled computer cursor to communicate. These methods allow them to spell out words one letter at a time.

But spelling out letters "is not the most efficient way to communicate," says Dr. Edward Chang, a neurosurgeon at UCSF and an author of the study. That approach allows a person to type fewer than 10 words a minute, compared with speaking about 150 words per minute with natural speech.

So Chang and a team of scientists have been looking for a way to let paralyzed patients produce entire words and sentences as if they were talking.

"The main goal that we had was really trying to figure out if we could actually decode brain activity into audible speech," Chang says.

The team studied five volunteers with severe epilepsy. As part of their treatment, these patients had electrodes temporarily placed on the surface of their brains.

The electrodes allowed doctors to locate brain areas causing seizures. And the electrodes also gave Chang's team a way to study the brain activity associated with speaking.

The volunteers read hundreds of sentences out loud while the scientists recorded signals from the brain's speech centers, which control muscles in the tongue, lips, jaw and larynx.

Next, a computer learned how to decode those signals and use them to synthesize speech.

Chang was "shocked" at how intelligible and natural the simulated speech was.

And in a test, volunteers who listened to the synthesized speech could understand what the computer was saying most of the time.

The technology doesn't try to decode a person's thoughts. Instead it decodes the brain signals produced when a person actually tries to speak.

A similar approach has allowed people who are paralyzed to control a robotic arm by pretending they are moving their own arm.

"Instead of moving a robotic arm, this is really more focused on thinking about how to control a robotic vocal tract," Chang says.

Chang hopes to try the approach soon in patients who have lost the ability to speak. The National Institutes of Health's BRAIN Initiative was the primary funder for the research.

In the meantime, the study represents a major advance in turning brain activity into speech, scientists say.

The experiments provide "compelling proof-of-concept demonstrations," wrote Chethan Pandarinath and Yahia H. Ali of Emory University and Georgia Tech in a commentary accompanying the study.

Previous efforts to transform brain signals into speech have been less intelligible and required much more computing power, the commentary said.

Hochberg, who also has affiliations with Massachusetts General Hospital and the Providence VA Medical Center, was especially impressed by the quality of recordings of synthesized speech from Chang's lab.

"I pressed play, I listened to it with my eyes closed and what I heard was something that was recognizable as speech," he says.

As a doctor, Hochberg often encounters patients who would benefit from a device based on this technology.

"I see people who may yesterday have been walking and talking and today as a result of a brain stem stroke are suddenly unable to move and unable to speak," he says.

People with ALS, also known as Lou Gehrig's disease, also lose those abilities, he says. And some people born with severe cerebral palsy have great difficulty speaking.

The study adds to the evidence that restoring fluent speech to these people will be possible someday.

"We're not there yet," Hochberg says. "There's still a lot of research, and clinical research in particular, that needs to happen.

Even so, the field is advancing with remarkable speed, Hochberg says.

Just a few years ago, he says, scientists expected it would take decades to turn brain signals into intelligible speech. "Now that interval can be measured in years," he says.

Correction April 25, 2019

In a previous Web version of this story, Chethan Pandarinath's surname was misspelled as Pandarinth and Yahia H. Ali's first name was misspelled as Yahio. Additionally, we said Pandarinath was affiliated with Georgia Tech and Ali with Emory. In fact, they each are affiliated with both institutions.