Bending the Automation Bias Curve: A Study of Human and AI-Based Decision Making in National Security Contexts
International Studies Quarterly (IF 2.799), Pub Date: 2024-04-01, DOI: 10.1093/isq/sqae020
Michael C. Horowitz, Lauren Kahn
Uses of artificial intelligence (AI) are growing around the world. What will influence AI adoption in the international security realm? Research on automation bias suggests that humans can often be overconfident in AI, whereas research on algorithm aversion shows that, as the stakes of a decision rise, humans become more cautious about trusting algorithms. We theorize about the relationship between background knowledge about AI and trust in AI, and how these interact with other factors to influence the probability of automation bias in the international security context. We test these propositions in a preregistered task identification experiment across a representative sample of 9,000 adults in nine countries with varying levels of AI industries. The results strongly support the theory, especially concerning AI background knowledge. A version of the Dunning–Kruger effect appears to be at play: those with the lowest level of experience with AI are slightly more likely to be algorithm-averse, automation bias then emerges at low levels of knowledge, and the effect levels off as a respondent's AI background reaches the highest levels. Additional results show effects from the task's difficulty, overall AI trust, and whether a human or AI decision aid is described as highly competent or less competent.

Updated: 2024-04-01