
Researchers Develop Revolutionary Brain-Computer Interface Allowing Paralyzed Woman to Speak


Breakthrough in Brain-Computer Interface (BCI) Enables Speech for Paralyzed Woman

Researchers from UC San Francisco and UC Berkeley have made a significant breakthrough in the development of a brain-computer interface (BCI) that allows a woman with severe paralysis to communicate through a digital avatar. This pioneering technology, featured in Nature on August 23, 2023, is the first to synthesize speech and facial expressions directly from brain signals. It also surpasses existing commercially available systems by decoding these signals into text at an impressive rate of 80 words per minute.

Advancements Toward Restoring Natural Communication

Dr. Edward Chang, chair of neurological surgery at UCSF and a leading expert in BCI, aims to obtain FDA approval for a speech-enabled BCI system in the near future. He envisions a future where patients can communicate in a fully embodied manner, which closely mimics natural conversation. This recent breakthrough brings us a step closer to achieving this goal.

Understanding the Brain-Computer Interface

In a previous study, Chang and his team successfully decoded brain signals into text for a man who had previously suffered a brainstem stroke. However, this new research goes even further by decoding brain signals into both speech and the accompanying movements of the face during conversation.

To achieve this, a thin rectangle containing 253 electrodes was implanted on the woman’s brain surface, specifically on areas critical to speech. These electrodes intercepted the brain signals that would normally control the muscles associated with speech and facial expressions. Connected to a bank of computers via a cable, this system allowed the researchers to capture and analyze these signals.

The participant then underwent intensive training with the research team to teach the artificial intelligence algorithms to recognize her unique brain signals for speech. This involved repeatedly attempting to say different phrases drawn from a conversational vocabulary of 1,024 words. By recognizing the specific patterns of brain activity associated with these sounds, the computer could accurately decode the signals.
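As a rough illustration of the training step above, here is a toy sketch (not the study's actual model) that classifies short windows of multi-electrode activity into phoneme labels using a simple nearest-centroid rule. The 253-channel shape mirrors the implant described earlier, but the signals, phoneme set, and classifier are hypothetical stand-ins for the real cortical recordings and deep-learning decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELECTRODES = 253                 # channels on the implanted array
PHONEMES = ["AH", "B", "K"]        # tiny stand-in for the full phoneme set

# Simulated recordings: each phoneme gets a distinct mean activity pattern,
# and each attempt adds noise on top of that pattern.
true_pattern = {p: rng.normal(size=N_ELECTRODES) for p in PHONEMES}

def record_attempt(phoneme):
    """Return one noisy 253-channel window for an attempted phoneme."""
    return true_pattern[phoneme] + 0.1 * rng.normal(size=N_ELECTRODES)

# "Training": average the windows recorded for each attempted phoneme.
training = [(p, record_attempt(p)) for p in PHONEMES for _ in range(50)]
centroids = {
    p: np.mean([x for q, x in training if q == p], axis=0) for p in PHONEMES
}

def classify(window):
    """Assign the window to the phoneme with the nearest centroid."""
    return min(centroids, key=lambda p: np.linalg.norm(window - centroids[p]))

print(classify(record_attempt("B")))
```

The real system learns far richer temporal features from neural data, but the core idea is the same: repeated attempts at each sound give the decoder a reference pattern to match new activity against.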

Decoding Words from Sub-Units of Speech

Unlike previous approaches that focused on recognizing whole words, the researchers developed a system that decodes words from phonemes, the sub-units of speech that combine into words much as letters do in written language. Because the computer only needed to learn 39 phonemes rather than every possible word, the system achieved higher accuracy and decoded speech three times faster.
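To make the phoneme-to-word idea concrete, here is a simplified sketch (not the study's decoder) that assembles words from a stream of predicted phonemes using a small pronunciation dictionary. The phoneme labels follow the ARPAbet convention used in the 39-symbol set; the vocabulary and greedy longest-match lookup are hypothetical simplifications of the real language model.

```python
# Hypothetical mini pronunciation dictionary: phoneme tuple -> word.
PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def decode_phonemes(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words = []
    i = 0
    while i < len(phonemes):
        match = None
        # Try the longest candidate first so "HH AH L OW" beats "HH AW".
        for length in range(len(phonemes) - i, 0, -1):
            candidate = tuple(phonemes[i:i + length])
            if candidate in PRONUNCIATIONS:
                match = (PRONUNCIATIONS[candidate], length)
                break
        if match is None:
            i += 1  # skip an unrecognized phoneme
            continue
        words.append(match[0])
        i += match[1]
    return " ".join(words)

print(decode_phonemes(["HH", "AH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]))
```

Decoding at the phoneme level means adding a new word to the vocabulary only requires a new dictionary entry, not retraining the decoder on that word's neural signature.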

Sean Metzger and Alex Silva, graduate students in the joint Bioengineering Program at UC Berkeley and UCSF, worked on developing the text decoder. They highlight the importance of accuracy, speed, and vocabulary for enabling fast and natural conversations using this technology.

Synthesizing Speech and Facial Animation

To create the digital avatar’s voice, the team implemented an algorithm that personalized the synthesized speech to resemble the woman’s voice before her injury. They used a recording of her speaking during her wedding to achieve this customization. In collaboration with Speech Graphics, an AI-driven facial animation company, the researchers simulated muscle movements of the face using custom machine learning processes. This allowed the avatar’s face to mimic speech and express emotions such as happiness, sadness, and surprise.

“We’re compensating for the disrupted connection between the brain and vocal tract caused by the stroke,” said Kaylo Littlejohn, a graduate student involved in the project. “When the subject first used this system to speak and move the avatar’s face simultaneously, I knew it would have a profound impact.”

Towards a Wireless Solution

The team’s next objective is to create a wireless version of the BCI, eliminating the need for physical connections. This advancement would grant individuals the freedom to independently control their computers and phones, transforming their level of independence and enhancing social interactions.

“Our ultimate goal is to provide people with the ability to control their devices using this technology, which would have a transformative effect on their daily lives,” said co-first author Dr. David Moses, an adjunct professor in neurological surgery.

