Auditory Processing of Speech and Nonspeech in People Who Stutter.
Journal of Speech, Language, and Hearing Research (IF 2.2), Pub Date: 2024-07-26, DOI: 10.1044/2024_jslhr-24-00107
Matthew C. Phillips, Emily B. Myers

PURPOSE: We investigated speech and nonspeech auditory processing of temporal and spectral cues in people who do and do not stutter. We also asked whether self-reported stuttering severity was predicted by performance on the auditory processing measures.

METHOD: People who stutter (n = 23) and people who do not stutter (n = 28) completed a series of four auditory processing tasks online. The tasks used speech and nonspeech stimuli differing in spectral or temporal cues. We then used independent-samples t-tests to assess group differences in phonetic categorization slopes, and linear mixed-effects models to test both group differences in nonspeech auditory processing and stuttering severity as a function of performance on all auditory processing tasks.

RESULTS: We found statistically significant differences between people who do and do not stutter in phonetic categorization of a continuum differing in a temporal cue, and in discrimination of nonspeech stimuli differing in a spectral cue. A significant proportion of variance in self-reported stuttering severity was predicted by performance on the auditory processing measures.

CONCLUSIONS: Taken together, these results suggest that people who stutter process both speech and nonspeech auditory information differently than people who do not stutter, and may point to subtle differences in auditory processing that could contribute to stuttering. We also note that these patterns could be a consequence of listening to one's own speech rather than a cause of production differences.
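For readers who want a concrete picture of the analysis pipeline, the sketch below shows in Python roughly the statistics the abstract describes. It is a minimal illustration under stated assumptions, not the authors' actual code: the file name, column names, task labels, and model formulas are all hypothetical stand-ins, since the abstract does not give the data layout or model specifications.

```python
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per participant per task, with
# columns: participant, group ("stutter"/"control"), task, slope, accuracy,
# and severity. All names here are illustrative assumptions.
df = pd.read_csv("auditory_tasks.csv")  # placeholder file name

# 1) Independent-samples t-test on phonetic categorization slopes for the
#    temporal-cue speech continuum (one slope per participant).
vot = df[df.task == "speech_temporal"]
t, p = stats.ttest_ind(vot.loc[vot.group == "stutter", "slope"],
                       vot.loc[vot.group == "control", "slope"])

# 2) Linear mixed-effects model of nonspeech discrimination performance,
#    with a random intercept per participant.
nonspeech = df[df.task.isin(["nonspeech_spectral", "nonspeech_temporal"])]
m1 = smf.mixedlm("accuracy ~ group * task", nonspeech,
                 groups=nonspeech["participant"]).fit()

# 3) Self-reported severity as a function of performance on all four tasks.
#    (The paper reports mixed-effects models; with one row per participant
#    after pivoting, ordinary least squares is shown here for simplicity.)
wide = (df[df.group == "stutter"]
        .pivot_table(index=["participant", "severity"],
                     columns="task", values="accuracy")
        .reset_index())
m2 = smf.ols("severity ~ speech_spectral + speech_temporal + "
             "nonspeech_spectral + nonspeech_temporal", data=wide).fit()

print(f"t = {t:.2f}, p = {p:.3f}")
print(m1.summary())
print(m2.summary())
```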
