Trust it or not: Understanding users’ motivations and strategies for assessing the credibility of AI-generated information
New Media & Society (IF 4.5) Pub Date: 2024-11-08, DOI: 10.1177/14614448241293154
Mengxue Ou, Han Zheng, Yueliang Zeng, Preben Hansen

The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate AI-generated information’s credibility using both internal (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). Theoretical and practical implications are discussed in the context of AI-generated content assessment.

Updated: 2024-11-08