
Researchers draw speech directly from the brain – TechCrunch


In a feat that could eventually restore the possibility of speech to people with serious medical conditions, scientists have successfully re-created the speech of healthy subjects by tapping directly into their brains. The technique is a long, long way from practical application, but the science is real and the promise is there.

Edward Chang, a neurosurgeon at UC San Francisco and co-author of the paper published today in Nature, explained the impact of the team’s work in a press release: “For the first time, this study demonstrates that we can generate entire spoken sentences based on an individual’s brain activity. This is an exhilarating proof of principle that with technology that is already within reach, we should be able to build a device that is clinically viable in patients with speech loss.”

To be clear, this isn’t a magic machine you sit in that translates your thoughts into words. It’s a complex and invasive process that decodes not exactly what the subject thinks, but what they actually speak.

The research, led by speech scientist Gopala Anumanchipalli, involved subjects who already had large electrode arrays implanted in their brains for a different medical procedure. The researchers had these lucky people read several hundred sentences aloud while recording the signals the electrodes detected.

See, it happens that the researchers know what a certain pattern of brain activity looks like after you think of and arrange words (in cortical areas like Wernicke’s and Broca’s) and before the final signals are sent from the motor cortex to your tongue and mouth muscles. There’s a sort of intermediate signal between those that Anumanchipalli and his co-author, graduate student Josh Chartier, had previously characterized, and which they thought might work for reconstructing speech.

Analyzing the sound directly let the team determine which muscles and movements would be involved when (this is pretty well-established science), and from that they built a kind of virtual model of the person’s vocal tract.

They then mapped the brain activity detected during the session onto that virtual model using a machine learning system, essentially allowing a recording of a brain to control a recording of a mouth. It’s important to understand that this doesn’t turn abstract thoughts into words; it interprets the brain’s concrete instructions to the muscles of the face, and determines from those which words the movements would be forming. It’s brain reading, but it isn’t mind reading.
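To make the two-stage idea concrete, here is a toy sketch, not the paper’s actual model: stage 1 maps recorded neural features to articulatory kinematics, and stage 2 maps those kinematics to acoustic features. The real system used recurrent neural networks trained on electrode recordings; in this illustration, synthetic data and plain least-squares regression stand in for both stages, and all array sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a recording session (all dimensions hypothetical).
n_samples, n_electrodes, n_articulators, n_acoustic = 500, 16, 6, 8
neural = rng.normal(size=(n_samples, n_electrodes))   # ECoG-like features
true_A = rng.normal(size=(n_electrodes, n_articulators))
kinematics = neural @ true_A                          # vocal-tract movements
true_B = rng.normal(size=(n_articulators, n_acoustic))
acoustics = kinematics @ true_B                       # speech sound features

# Stage 1: brain activity -> articulatory movements.
A_hat, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)
# Stage 2: articulatory movements -> acoustic features.
B_hat, *_ = np.linalg.lstsq(kinematics, acoustics, rcond=None)

# Decoding new brain activity passes through the intermediate
# articulatory stage rather than jumping straight to sound.
new_neural = rng.normal(size=(10, n_electrodes))
decoded_acoustics = new_neural @ A_hat @ B_hat
print(decoded_acoustics.shape)  # (10, 8)
```

The design point the sketch illustrates is the intermediate representation: rather than learning one opaque brain-to-sound mapping, the decoder routes everything through the movements of a virtual mouth, which is why the system reads motor instructions rather than abstract thoughts.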

The resulting synthetic speech, while not exactly crystal clear, is certainly intelligible. And done right, it could be capable of producing 150 words per minute from a person who otherwise can’t speak at all.

“We still have a ways to go to perfectly mimic spoken language,” Chartier said. “Still, the levels of accuracy we produced here would be an amazing improvement in real-time communication compared to what’s currently available.”

For comparison, someone stricken by, say, a degenerative muscular disease often has to speak by spelling out words one letter at a time with their gaze. Picture 5-10 words per minute, with other methods for more disabled people going even slower. It’s a miracle of sorts that they can communicate at all, but this time-consuming and less-than-natural method is a far cry from the speed and expressiveness of real speech.

A person using this method would be far closer to ordinary speech, though perhaps at the cost of perfect accuracy. But it’s not a magic bullet.

The problem with the method is that it requires a great deal of carefully collected data from a healthy speech system, from brain to the tip of the tongue. For many people that data is no longer possible to collect, and for others the invasive method of collection will make it impossible for a doctor to recommend. And conditions that have prevented a person from ever speaking would prevent this method from working as well.

The good news is that it’s a start, and there are plenty of conditions it would work for, theoretically. And collecting that critical brain and speech recording data could be done pre-emptively in cases where a stroke or degeneration is considered a risk.

Published by
Faela