
AI system restores speech for paralyzed patients using their own voices


Researchers in California have achieved a significant breakthrough: an AI-powered system that restores natural speech to paralyzed individuals in real time, using their own voices. The technology was demonstrated in a clinical trial participant who is severely paralyzed and cannot speak. 

This innovative technology, developed by teams at UC Berkeley and UC San Francisco, combines brain-computer interfaces (BCI) with advanced artificial intelligence to decode neural activity into audible speech. 

It marks a major advance over other recent attempts to synthesize speech from brain signals.


AI-powered system (Kaylo Littlejohn, Cheol Jun Cho, et al. Nature Neuroscience 2025)

How it works

The system works with a range of recording devices: high-density electrode arrays that record neural activity directly from the brain’s surface, microelectrodes that penetrate the brain’s surface, and non-invasive surface electromyography sensors placed on the face that measure muscle activity. The AI then learns to transform those signals into the sounds of the patient’s voice. 

The neuroprosthesis samples neural data from the brain’s motor cortex, the area that controls speech production, and AI decodes that data into speech. According to study co-lead author Cheol Jun Cho, the neuroprosthesis intercepts the signals at the point where thought is translated into articulation, in the middle of that motor-control process.
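
To give a feel for the moving parts, here is a minimal, hypothetical sketch in Python of the kind of decoder such a system relies on. This is not the study’s code; the model shape, channel count and feature choices are assumptions for illustration only.

```python
# Hypothetical sketch of a neural-to-speech decoder, NOT the study's code.
# A recurrent network maps windows of motor-cortex activity to acoustic
# features (mel-spectrogram frames), which a separate vocoder would then
# render as audible speech in the patient's voice.
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    def __init__(self, n_channels=253, n_mels=80, hidden=512):
        # n_channels: number of recording electrodes (assumed value)
        # n_mels: mel-spectrogram bins describing the output audio
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.to_mels = nn.Linear(hidden, n_mels)

    def forward(self, neural_window, state=None):
        # neural_window: (batch, time_steps, n_channels) of recorded activity
        features, state = self.rnn(neural_window, state)
        # Returning the state lets the caller decode a continuous stream
        return self.to_mels(features), state
```

The key design choice this illustrates is that the decoder emits acoustic features rather than text, so the sound and rhythm of the voice can be preserved; a vocoder then turns those features into a waveform.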



Key advancements

  • Real-time speech synthesis: The AI-based model streams intelligible speech from the brain in near-real time, addressing the long-standing latency problem in speech neuroprostheses. This “streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” according to Gopala Anumanchipalli, co-principal investigator of the study. The model decodes neural data in 80-millisecond increments, enabling uninterrupted use of the decoder and further increasing speed (see the sketch after this list).
  • Naturalistic speech: The technology aims to restore naturalistic speech, allowing for more fluent and expressive communication.
  • Personalized voice: The AI is trained on recordings of the patient’s own voice from before their injury, so the generated audio sounds like them. In cases where patients have no residual vocalization, a pre-trained text-to-speech model fills in the missing details, as described under “Overcoming challenges” below.
  • Speed and accuracy: The system can begin decoding brain signals and outputting speech within a second of the patient attempting to speak, a significant improvement over the eight-second delay in a previous study from 2023.
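
To make the 80-millisecond streaming idea concrete, here is a minimal sketch that builds on the hypothetical decoder above. The feature rate, and the `neural_stream`, `vocoder` and `play_audio` helpers, are stand-ins, not a real API.

```python
# Hypothetical streaming loop, NOT the study's code: decode neural data
# in 80 ms chunks and hand each chunk's audio to the speakers right away,
# so speech begins within about a second of attempted speech.
import torch

FEATURE_RATE_HZ = 200                      # assumed neural feature rate
CHUNK_MS = 80                              # increment reported in the study
STEPS_PER_CHUNK = FEATURE_RATE_HZ * CHUNK_MS // 1000   # 16 time steps

decoder = NeuralSpeechDecoder()
state = None                               # carried across chunks, so no gaps
with torch.no_grad():
    for chunk in neural_stream(STEPS_PER_CHUNK):   # placeholder generator
        x = torch.as_tensor(chunk, dtype=torch.float32).unsqueeze(0)
        mels, state = decoder(x, state)    # decode just this 80 ms window
        waveform = vocoder(mels)           # placeholder vocoder call
        play_audio(waveform)               # placeholder audio output
```

Carrying the recurrent state across chunks is what lets the decoder run uninterrupted instead of restarting on every window.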




Overcoming challenges

One of the key challenges was mapping neural data to speech output when the patient had no residual vocalization. The researchers overcame this by using a pre-trained text-to-speech model and the patient’s pre-injury voice to fill in the missing details.
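
One plausible way to picture that workaround, as a hedged sketch: the prompt text from each attempted-speech trial is synthesized with a pre-trained text-to-speech model conditioned on pre-injury recordings, and the result serves as the acoustic target for training the decoder. The helper names below are placeholders, not the researchers’ actual tooling.

```python
# Hypothetical sketch, NOT the study's pipeline: building (neural, audio)
# training pairs when the patient cannot vocalize.
training_pairs = []
for prompt_text, neural_recording in attempted_speech_trials:  # placeholder data
    # Pre-trained TTS, conditioned on pre-injury voice clips, "fills in"
    # what the prompt would have sounded like in the patient's own voice.
    reference_audio = pretrained_tts(prompt_text, voice_reference="pre_injury_clips/")
    target_features = audio_to_mels(reference_audio)   # placeholder feature extractor
    training_pairs.append((neural_recording, target_features))
# The decoder is then trained to map neural_recording -> target_features.
```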




Impact and future directions

This technology has the potential to significantly improve the quality of life for people with paralysis and conditions like ALS. It allows them to communicate their needs, express complex thoughts and connect with loved ones more naturally.

“It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future,” UCSF neurosurgeon Edward Chang said.

The next steps include speeding up the AI’s processing and making the output voice more expressive. Researchers also aim to decode paralinguistic features from brain activity so the synthesized speech reflects changes in tone, pitch and loudness.


Kurt’s key takeaways

What’s truly amazing about this AI is that it doesn’t just translate brain signals into any kind of speech. It’s aiming for natural speech, using the patient’s own voice. It’s like giving them their voice back, which is a game changer. It gives new hope for effective communication and renewed connections for many individuals.

What role do you think government and regulatory bodies should play in overseeing the development and use of brain-computer interfaces? Let us know by writing us at Cyberguy.com/Contact.




Copyright 2025 CyberGuy.com. All rights reserved.


