New brain implant shows promise to translate speech from thoughts

New York, Nov 7:  A team of scientists has developed a novel prosthetic that has the potential to decode signals from the brain's speech centre to predict what a person is trying to say.

The new technology, detailed in the journal Nature Communications, might one day help people unable to talk due to neurological disorders regain the ability to communicate through a brain-computer interface.

"There are many patients who suffer from debilitating motor disorders, like ALS (amyotrophic lateral sclerosis) or locked-in syndrome, that can impair their ability to speak," said lead researcher Gregory Cogan, a professor of neurology at Duke University's School of Medicine.

"But the current tools available to allow them to communicate are generally very slow and cumbersome," Cogan said.

The new device, a postage-stamp-sized piece of flexible, medical-grade plastic, contains 256 microscopic brain sensors.

Neurons just a grain of sand apart can have wildly different activity patterns when coordinating speech, so it's necessary to distinguish signals from neighbouring brain cells to help make accurate predictions about intended speech.

The experiment required the researchers to place the device temporarily in four patients who were undergoing brain surgery for some other condition, such as Parkinson's disease treatment or tumour removal.

The task was a simple listen-and-repeat activity. Participants heard a series of nonsense words, like "ava," "kug," or "vip," and then spoke each one aloud. The device recorded activity from each patient's speech motor cortex as it coordinated nearly 100 muscles that move the lips, tongue, jaw, and larynx.

Afterwards, the researchers took the neural and speech data from the surgery suite and fed it into a machine learning algorithm to see how accurately it could predict what sound was being made, based only on the brain activity recordings.
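The article does not specify which machine learning method the team used. As a rough illustration of the general idea, the following is a minimal sketch of decoding a phoneme from a neural feature vector using a simple nearest-centroid classifier; all sensor values and labels here are invented for illustration and do not come from the study.

```python
# Hypothetical sketch: classify a neural activity pattern as a phoneme
# by finding the closest per-phoneme average ("centroid") pattern.
# All numbers and labels are made up; the study's actual model and
# feature extraction are not described in the article.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: list of (feature_vector, phoneme) pairs -> phoneme centroids."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def predict(model, features):
    """Return the phoneme whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Toy training data: 3-sensor activity patterns per phoneme (invented).
training = [
    ([0.9, 0.1, 0.2], "g"),
    ([0.8, 0.2, 0.1], "g"),
    ([0.1, 0.9, 0.3], "p"),
    ([0.2, 0.8, 0.2], "p"),
]
model = train(training)
print(predict(model, [0.85, 0.15, 0.2]))  # a "g"-like activity pattern
```

A real decoder would work on hundreds of sensor channels over time and a far more powerful model, but the input-output shape is the same: brain activity in, predicted sound out.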

For some sounds and participants, like /g/ in the word "gak," the decoder got it right 84 per cent of the time when it was the first sound in a string of three that made up a given nonsense word.

Accuracy dropped, though, as the decoder parsed out sounds in the middle or at the end of a nonsense word. It also struggled if two sounds were similar, like /p/ and /b/.

Overall, the decoder was accurate 40 per cent of the time. That may seem like a humble test score, but it was quite impressive given that similar brain-to-speech technical feats require hours' or days' worth of data to draw from.

The speech decoding algorithm, however, was working with only 90 seconds of spoken data from the 15-minute test.

While the work is encouraging, the speech prosthetic is still a long way from hitting the shelves, the team said.

(The content of this article is sourced from a news agency and has not been edited by the ap7am team.)
