Social Preferences Toward Humans and Machines: A Systematic Experiment on the Role of Machine Payoffs.
Perspectives on Psychological Science (IF 10.5). Pub Date: 2023-09-26. DOI: 10.1177/17456916231194949. Alicia von Schenk 1,2, Victor Klockmann 1,2, Nils Köbis 1
There is growing interest in the field of cooperative artificial intelligence (AI), that is, settings in which humans and machines cooperate. By now, more than 160 studies from various disciplines have reported on how people cooperate with machines in behavioral experiments. Our systematic review of the experimental instructions reveals that the implementation of machine payoffs, and the information participants receive about them, differ drastically across these studies. In an online experiment (N = 1,198), we compare how these different payoff implementations shape people's revealed social preferences toward machines. When matched with machine partners, people reveal substantially stronger social preferences and reciprocity when they know that a human beneficiary receives the machine payoffs than when they know that no such "human behind the machine" exists. When participants are not informed about machine payoffs, we find weak social preferences toward machines. Comparing survey answers with those from a follow-up study (N = 150), we conclude that people form their beliefs about machine payoffs in a self-serving way. Thus, our results suggest that the extent to which humans cooperate with machines depends on the implementation of, and the information about, the machine's earnings.
Updated: 2023-09-26