Human-algorithm interactions help explain the spread of misinformation
Current Opinion in Psychology (IF 6.3) | Pub Date: 2023-11-23 | DOI: 10.1016/j.copsyc.2023.101770 | Killian L. McLoughlin, William J. Brady
Human attention biases toward moral and emotional information are as prevalent online as they are offline. When these biases interact with content algorithms that curate social media users' news feeds to maximize attentional capture, moral and emotional information is privileged in the online information ecosystem. We review evidence for these human-algorithm interactions and argue that misinformation exploits this process to spread online. This framework suggests that interventions aimed at combating misinformation require a dual-pronged approach, combining person-centered and design-centered interventions, to be most effective. We suggest several avenues for research in the psychological study of misinformation sharing under a framework of human-algorithm interaction.
Last updated: 2023-11-23