The Moral Psychology of Artificial Intelligence
Annual Review of Psychology (IF 23.6) | Pub Date: 2023-09-19 | DOI: 10.1146/annurev-psych-030123-113559
Jean-François Bonnefon, Iyad Rahwan, Azim Shariff
Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.
