Large language models (LLMs) and the institutionalization of misinformation
Trends in Cognitive Sciences (IF 16.7) Pub Date: 2024-10-10, DOI: 10.1016/j.tics.2024.08.007
Maryanne Garry, Way Ming Chan, Jeffrey Foster, Linda A. Henkel

Large language models (LLMs), such as ChatGPT, flood the Internet with true and false information, crafted and delivered with techniques that psychological science suggests will encourage people to think that information is true. What’s more, as people feed this misinformation back into the Internet, emerging LLMs will adopt it and feed it back in other models. Such a scenario means we could lose access to information that helps us tell what is real from unreal – to do ‘reality monitoring.’ If that happens, misinformation will be the new foundation we use to plan, to make decisions, and to vote. We will lose trust in our institutions and each other.

Last updated: 2024-10-10