When the bot walks the talk: Investigating the foundations of trust in an artificial intelligence (AI) chatbot.
Journal of Experimental Psychology: General (IF 3.7), Pub Date: 2024-12-05, DOI: 10.1037/xge0001696
Fanny Lalot, Anna-Marie Bertram

The concept of trust in artificial intelligence (AI) has been gaining increasing relevance for understanding and shaping human interaction with AI systems. Despite a growing literature, there are disputes as to whether the processes of trust in AI are similar to those of interpersonal trust (i.e., trust in fellow humans). The aim of the present article is twofold. First, we provide a systematic test of an integrative model of trust inspired by interpersonal trust research, encompassing trust, its antecedents (trustworthiness and trust propensity), and its consequences (intentions to use the AI and willingness to disclose personal information). Second, we investigate the role of AI personalization in trust and trustworthiness, considering both their mean levels and their dynamic relationships. In two pilot studies (N = 313) and one main study (N = 1,001) focusing on AI chatbots, we find that the integrative model of trust is suitable for the study of trust in virtual AI. Perceived trustworthiness of the AI, and more specifically its ability and integrity dimensions, is a significant antecedent of trust, as are anthropomorphism and propensity to trust smart technology. Trust, in turn, leads to greater intentions to use the AI and greater willingness to disclose information to it. The personalized AI chatbot was perceived as more able and benevolent than the impersonal chatbot. It was also more anthropomorphized and led to greater usage intentions, but not to greater trust. Anthropomorphism, not trust, explained the greater intentions to use personalized AI. We discuss implications for research on trust in humans and in automation. (PsycInfo Database Record (c) 2024 APA, all rights reserved).

Updated: 2024-12-05