Prejudiced against the Machine? Implicit Associations and the Transience of Algorithm Aversion
MIS Quarterly (IF 7.0) | Pub Date: 2023-12-01 | DOI: 10.25300/misq/2022/17961 | Ofir Turel, Shivam Kalhan
Algorithm aversion is an important and persistent issue that prevents harvesting the benefits of advancements in artificial intelligence. The literature thus far has provided explanations that primarily focus on conscious reflective processes. Here, we supplement this view by taking an unconscious perspective that can be highly informative. Building on theories of implicit prejudice, in a preregistered study, we suggest that people develop an implicit bias (i.e., prejudice) against artificial intelligence (AI) systems, as a different and threatening “species,” the behavior of which is unknown. Like in other contexts of prejudice, we expected people to be guided by this implicit bias but try to override it. This leads to some willingness to rely on algorithmic advice (appreciation), which is reduced as a function of people’s implicit prejudice against the machine. Next, building on the somatic marker hypothesis and the accessibility-diagnosticity perspective, we provide an explanation as to why aversion is ephemeral. As people learn about the performance of an algorithm, they depend less on primal implicit biases when deciding whether to rely on the AI’s advice. Two studies (n1 = 675, n2 = 317) that use the implicit association test consistently support this view. Two additional studies (n3 = 255, n4 = 332) rule out alternative explanations and provide stronger support for our assertions. The findings ultimately suggest that moving the needle between aversion and appreciation depends initially on one’s general unconscious bias against AI because there is insufficient information to override it. They further suggest that in later use stages, this shift depends on accessibility to diagnostic information about the AI’s performance, which reduces the weight given to unconscious prejudice.
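The abstract does not specify the authors' scoring procedure for the implicit association test (IAT), so the following is only an illustrative sketch of the standard improved D-score algorithm commonly used with IAT data: the difference in mean response latency between the incompatible and compatible blocks, divided by the inclusive standard deviation of all latencies from both blocks. The trial latencies below are hypothetical.

```python
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified IAT D-score.

    A larger positive D means responses were faster in the compatible
    block, indicating a stronger implicit association (in this paper's
    context, a stronger implicit bias against AI).
    """
    # Inclusive SD: computed over all trials from both blocks combined.
    pooled_sd = stdev(list(compatible_ms) + list(incompatible_ms))
    return (mean(incompatible_ms) - mean(compatible_ms)) / pooled_sd

# Hypothetical response latencies in milliseconds:
compatible = [620, 580, 700, 640, 610]
incompatible = [820, 790, 900, 760, 840]
print(round(iat_d_score(compatible, incompatible), 2))
```

Because D is standardized by the respondent's own latency variability, scores are comparable across participants with different baseline response speeds, which is why the D metric is preferred over a raw millisecond difference.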
Updated: 2023-11-30