Representation of internal speech by single neurons in human supramarginal gyrus
Nature Human Behaviour (IF 29.9), Pub Date: 2024-05-13, DOI: 10.1038/s41562-024-01867-y
Sarah K. Wandelt, David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Liu, Richard A. Andersen

Speech brain–machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak due to disease or injury. While important advances in decoding vocalized, attempted and mimed speech have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, implanted with microelectrode arrays located in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population level in the SMG. From the recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy across multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.
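To make the reported figures concrete: with six words and two pseudowords there are eight classes, so chance-level accuracy is 1/8 = 12.5%, the baseline against which the 55%/24% offline and 79%/23% online accuracies are compared. The abstract does not specify the decoding algorithm, so the sketch below is only a generic illustration (not the authors' pipeline) of how cross-validated offline decoding accuracy on firing-rate features can be measured against that chance level; the classifier choice, trial counts and simulated data are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical setup: 8 classes (6 words + 2 pseudowords) -> chance = 1/8 = 12.5%.
n_classes, n_trials_per_class, n_units = 8, 16, 40
chance_level = 1.0 / n_classes  # 0.125

rng = np.random.default_rng(0)
# Simulated firing-rate features (trials x units); real features would come from
# SMG microelectrode recordings during the internal-speech phase of each trial.
X = rng.poisson(lam=5.0, size=(n_classes * n_trials_per_class, n_units)).astype(float)
y = np.repeat(np.arange(n_classes), n_trials_per_class)

# Cross-validated classification accuracy; with random features it should hover
# near the 12.5% chance level, whereas informative features would exceed it.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=8)
print(f"mean accuracy: {scores.mean():.1%} (chance {chance_level:.1%})")
```

With the random features simulated here, accuracy stays near 12.5%; the point is only to show how chance level and cross-validated decoding accuracy, as quoted in the abstract, relate.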




Updated: 2024-05-13