Breakthrough tech translates thoughts to speech—and docs say it could be game-changing for neurodegenerative disease

By Alpana Mohta, MD, DNB, FEADV, FIADVL, IFAAD | Fact-checked by Barbara Bekiesz | Published September 12, 2025


Industry Buzz

For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally.

—Erin Kunz, PhD, postdoctoral researcher at Stanford University

For patients facing paralysis, the opportunity to reclaim independence gives these procedures profound value despite the risks.

—Scott Meek, PhD, head of R&D at Subsense Inc.

Brain–computer interfaces (BCIs) have moved from decoding attempted speech, where patients try to articulate, to decoding inner speech, rendering imagined words into real-time communication.

This innovation offers hope for patients with amyotrophic lateral sclerosis (ALS) or brainstem stroke, who retain cognitive-linguistic planning but lose the ability to speak.

Stanford-based researchers studying the neural representation of inner speech recently reported their findings in Cell.

Study overview 

The study involved four participants (three with ALS, one with brainstem stroke), all with pre-implanted microelectrode arrays in the motor cortex.

As the participants took part in tasks involving attempted or inner speech, machine learning models processed their neural activity to decode phoneme sequences into sentences in real time.
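To make that pipeline concrete, here is a minimal, purely illustrative sketch of how such a decoding loop can be structured: windows of multichannel neural activity are reduced to features, a trained model assigns phoneme probabilities, and the likeliest phonemes are accumulated for a language model to resolve into words. The channel counts, feature choices, and function names below are assumptions for illustration, not the study's published architecture.

```python
# Purely illustrative decoding loop. N_CHANNELS, WINDOW_BINS, the feature
# choice, and the linear "model" are assumptions, not the study's pipeline.
import numpy as np

N_CHANNELS = 256   # recording channels (assumed)
WINDOW_BINS = 20   # samples per decoding window (assumed)
PHONEMES = ["AA", "IY", "K", "T", "S", "SIL"]  # toy subset of English phonemes

def extract_features(raw_window: np.ndarray) -> np.ndarray:
    """Toy feature: log power per channel over the window."""
    return np.log1p((raw_window ** 2).mean(axis=1))

def phoneme_probs(features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Stand-in for a trained network: linear layer plus softmax."""
    logits = weights @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def run_decoder(stream, weights):
    """Collect the likeliest phoneme per window; a real system would pass
    the full probability sequence to a language model to form sentences."""
    return [PHONEMES[int(np.argmax(phoneme_probs(extract_features(w), weights)))]
            for w in stream]

# Demo on synthetic data only
rng = np.random.default_rng(0)
stream = [rng.normal(size=(N_CHANNELS, WINDOW_BINS)) for _ in range(5)]
weights = rng.normal(size=(len(PHONEMES), N_CHANNELS))
print(run_decoder(stream, weights))
```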

The researchers found that attempted and inner speech activated overlapping cortical regions, though inner speech produced weaker signals, demanding sensitive processing to distinguish and decode them.

The BCI decoded imagined sentences from a vocabulary of up to 125,000 words with accuracy as high as 74%. It even captured unprompted thoughts, such as counting, underscoring both its sensitivity and the potential risk of unintended decoding.

Related: Gene therapy breakthrough in ALS and frontotemporal dementia

Privacy safeguards and clinical implications

To prevent accidental decoding of private, unintended thoughts, the researchers built in a passphrase system.

Decoding begins only when a user internally repeats a keyword (e.g., “Chitty Chitty Bang Bang”). For one participant in the study, the keyword was recognized with 98.75% reliability.
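A minimal sketch of how such a gate can behave, assuming a simple two-state design (the study's actual implementation details are not described here): decoded text is suppressed until the imagined passphrase is recognized above a confidence threshold.

```python
# Minimal sketch of a passphrase gate, assuming a two-state design;
# the study's actual implementation details are not described here.
from enum import Enum

class State(Enum):
    LOCKED = 0     # decoder output is suppressed
    DECODING = 1   # passphrase detected; decoded text may be emitted

PASSPHRASE = "chitty chitty bang bang"
CONFIDENCE_THRESHOLD = 0.95  # assumed value, tuned to balance misses vs. leaks

def gate(state, decoded_text, confidence):
    """Emit decoded text only after the imagined passphrase is recognized."""
    if state is State.LOCKED:
        if decoded_text.strip().lower() == PASSPHRASE and confidence >= CONFIDENCE_THRESHOLD:
            return State.DECODING, None  # unlock, but never emit the passphrase itself
        return State.LOCKED, None        # stay locked; nothing leaves the device
    return State.DECODING, decoded_text  # unlocked: pass decoded text through

# Example: private inner speech before the keyword is never emitted.
state = State.LOCKED
for text, conf in [("seven eight nine", 0.80),
                   ("chitty chitty bang bang", 0.99),
                   ("i would like some water", 0.74)]:
    state, out = gate(state, text, conf)
    print(state.name, "->", out)
```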

“For people with severe speech and motor impairments, BCIs capable of decoding inner speech could help them communicate much more easily and more naturally,” said Erin Kunz, PhD, postdoctoral researcher at Stanford and lead author of the Cell research.

Frank Willett, PhD, assistant professor of neurosurgery at Stanford, noted the potential for more fluent, natural conversation: “Future systems could restore fluent, rapid and comfortable speech via inner speech alone,” Dr. Willett said.

Scott Meek, PhD, a materials scientist and neurotechnology researcher with over 20 years of experience in medical device innovation and head of R&D at Subsense Inc., points out some potential hazards.

“Implanting electrodes in the motor cortex exposes patients to the usual risks of brain surgery, such as bleeding, infection, and unintended tissue damage. But for patients facing paralysis, the opportunity to reclaim independence gives these procedures profound value despite the risks,” he tells MDLinx.

He continues: “Electrode movement is clinically significant because even minor shifts can degrade signal quality and reduce benefit. Non-surgical strategies, where there’s no rigid implant in the brain at all, may sidestep this issue entirely.”

These systems target a core disabling feature of ALS: linguistic planning remains intact while motor execution of speech fails.

Because users need not physically attempt to articulate, inner-speech decoding also avoids much of the effort and fatigue associated with attempted-speech BCIs.

Related: 'Grey's Anatomy' star Eric Dane's neuro battle: The shocking speed of Dr. McSteamy's health decline

Technical constraints

Although 74% accuracy is promising, errors remain high, especially for spontaneous inner speech. Current systems rely on rigid, wired arrays that are neither fully implantable nor scalable.

Work is underway on wireless, high-channel devices to improve fidelity and patient comfort.

Parallel research is investigating non-invasive EEG frameworks combined with large language models (LLMs) for post-stroke aphasia and ALS rehabilitation.

These hybrid BCI-LLM systems adapt language delivery and reduce cognitive fatigue, though they still lack precision and immediacy for fluent speech.
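One way to picture the hybrid idea, as a hedged sketch: the BCI decoder emits several noisy candidate sentences with confidence scores, and a language model rescores them for plausibility. The scoring function below is a toy stand-in for a real LLM query, not any published system's method.

```python
# Hedged sketch of LLM-assisted rescoring for a noisy BCI decoder.
# toy_lm_score is a stand-in; a real hybrid would query an actual LLM.

def toy_lm_score(sentence):
    """Stand-in for an LLM log-probability: favors common words here."""
    common = {"i", "want", "to", "drink", "water", "please"}
    words = sentence.lower().split()
    return sum(w in common for w in words) - 0.1 * len(words)

def rescore(candidates, lm_weight=1.0):
    """Combine decoder confidence with language-model plausibility."""
    return max(candidates,
               key=lambda c: c[1] + lm_weight * toy_lm_score(c[0]))[0]

# The decoder emits sound-alike confusions; rescoring recovers the intent.
candidates = [("i want to drink water", 0.40),
              ("eye wand two drink water", 0.45),
              ("i want to drain waiter", 0.42)]
print(rescore(candidates))  # -> "i want to drink water"
```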

Sustaining accuracy over time is another hurdle. Prior intracortical BCI work showed that self-recalibration using language model–generated pseudo-labels can maintain over 93% accuracy across 400+ days in handwriting BCIs.
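The underlying idea can be sketched in a few lines, with the caveat that the cited work's actual method differs in architecture and detail: the language model corrects the decoder's noisy output, and that corrected text becomes the training target for a gradient update, so the decoder tracks neural signal drift without human relabeling.

```python
# Sketch of self-recalibration from pseudo-labels (assumed mechanics; the
# cited work's actual method differs). An LM-style correction of the
# decoder's noisy output becomes the training target for a weight update.
import numpy as np

VOCAB = ["water", "please", "help", "yes", "no"]

def correct_with_lm(noisy_word):
    """Toy stand-in for LM correction: snap to the closest vocabulary word."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))
    return min(VOCAB, key=lambda w: dist(noisy_word, w))

def recalibrate(weights, features, target_idx, lr=0.05):
    """One softmax cross-entropy gradient step toward the pseudo-label."""
    logits = weights @ features
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    target = np.eye(len(VOCAB))[target_idx]
    return weights - lr * np.outer(probs - target, features)

rng = np.random.default_rng(1)
weights = rng.normal(size=(len(VOCAB), 32))
features = rng.normal(size=32)
pseudo_label = correct_with_lm("watre")  # -> "water", no human in the loop
weights = recalibrate(weights, features, VOCAB.index(pseudo_label))
print(pseudo_label)
```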

Similar strategies may be essential before speech systems move from research trials to clinical reality.

Read Next: New neuro tech lets patients type, talk, and game with only their thoughts—early trial results and ethical questions
