The results are not error-free. Although the system captures the clear sound of a person's voice and the output is often easy to understand, the synthesizer sometimes produces distorted words. It is still far better than previous approaches, which made no attempt at this kind of voice reproduction. The researchers are also testing denser electrode arrays for the brain interface and more sophisticated machine-learning models, both of which could improve overall accuracy. Ideally, the system would also work for people who cannot train it with their own speech before it is used in practice.
That effort could take a while, and there is no fixed roadmap at this stage. The goal, at least, is clear: the researchers want to restore the voices of people with ALS, Parkinson's, and other conditions in which speech loss is normally irreversible. If they succeed, it could dramatically improve communication for these patients, who today may have to rely on much slower methods, and help them feel more connected to their communities.