Following this, Suseendrakumar Duraivel, a graduate student in the biomedical engineering lab and first author of the study, fed brain signals from each patient into a machine learning algorithm and compared the sound predictions it made with the actual speech data.
Duraivel found that the algorithm predicted the first sound of the three-letter words (e.g., /g/ in ‘gak’) with 84 per cent accuracy, but was less accurate in predicting the second and third sounds (e.g., /g/ in ‘kug’). It also struggled to differentiate between similar sounds (e.g., /p/ and /b/). The algorithm had an overall accuracy of 40 per cent, a remarkable technical feat considering that this was achieved with just 90 seconds of spoken data from the 15-minute test, whereas standard tools require hours or days of data to reach similar accuracy.
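To make that comparison concrete, the short sketch below shows one way accuracy for each sound position (first, second, third) could be tallied from decoded versus actually spoken words. It is purely illustrative: the function name, data shapes, and example words are assumptions for demonstration, not the study’s actual decoding pipeline.

```python
# Illustrative sketch only, NOT the study's code: tally how often the decoder
# gets each sound position (first, second, third) of a spoken word right.
from collections import defaultdict

def per_position_accuracy(predicted_words, actual_words):
    """Compare predicted vs. spoken three-sound words phoneme by phoneme
    and return the accuracy for each position."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, actual in zip(predicted_words, actual_words):
        for pos, (p, a) in enumerate(zip(pred, actual)):
            total[pos] += 1
            if p == a:
                correct[pos] += 1
    return {pos: correct[pos] / total[pos] for pos in total}

# Hypothetical example: decoded words vs. the words the patient actually spoke.
predicted = [("g", "a", "k"), ("k", "u", "g"), ("p", "a", "k")]
actual    = [("g", "a", "k"), ("k", "u", "k"), ("b", "a", "k")]
print(per_position_accuracy(predicted, actual))
# -> {0: 0.67, 1: 1.0, 2: 0.67} (approximately)
```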
The promising findings have attracted US$2.4 million in funding from the National Institutes of Health, and the research team plans to use this money to develop the implants into cordless devices.
“We’re now developing the same kind of recording devices, but without any wires,” enthused Cogan. “You’d be able to move around, and you wouldn’t have to be tied to an electrical outlet, which is really exciting.” Alongside this, bringing the decoding algorithms up to speed will be the team’s primary focus before they can chart a ‘bench-to-bedside’ translation pathway for the implants.
Adapted by Sruthi Jagannathan from Duke Scientists Create Brain Implant That May Enable Communication From Thoughts Alone.