AI depictions of psychiatric diagnoses: a preliminary study of generative image outputs in Midjourney V.6 and DALL-E 3.
BMJ Mental Health (IF 6.6), Pub Date: 2024-12-04, DOI: 10.1136/bmjment-2024-301298
Matthew Flathers, Griffin Smith, Ellen Wagner, Carl Erik Fisher, John Torous

OBJECTIVE: This paper investigates how state-of-the-art generative artificial intelligence (AI) image models represent common psychiatric diagnoses. We offer key lessons derived from these representations to inform clinicians, researchers, generative AI companies, policymakers and the public about the potential impacts of AI-generated imagery on mental health discourse.

METHODS: We prompted two generative AI image models, Midjourney V.6 and DALL-E 3, with isolated diagnostic terms for common mental health conditions. The resulting images were compiled and presented as examples of current AI behaviour when interpreting psychiatric terminology.

FINDINGS: The AI models generated image outputs for most psychiatric diagnosis prompts. These images frequently reflected cultural stereotypes and historical visual tropes, including gender biases and stigmatising portrayals of certain mental health conditions.

DISCUSSION: These findings illustrate three key points. First, generative AI models reflect cultural perceptions of mental disorders rather than evidence-based clinical ones. Second, AI image outputs resurface historical biases and visual archetypes. Third, the dynamic nature of these models necessitates ongoing monitoring and proactive engagement to manage evolving biases. Addressing these challenges requires a collaborative effort among clinicians, AI developers and policymakers to ensure the responsible use of these technologies in mental health contexts.

CLINICAL IMPLICATIONS: As these technologies become increasingly accessible, it is crucial for mental health professionals to understand AI's capabilities, limitations and potential impacts. Future research should focus on quantifying these biases, assessing their effects on public perception and developing strategies to mitigate potential harm while leveraging the insights these models provide into collective understandings of mental illness.
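The abstract does not specify how the prompts were submitted; Midjourney V.6 is typically driven through its Discord interface, while DALL-E 3 can be reached through ChatGPT or the OpenAI API. As a minimal sketch, assuming the OpenAI Python SDK and a hypothetical term list (neither confirmed by the paper), the code below shows one way the DALL-E 3 arm of such a study could be reproduced by passing isolated diagnostic terms as bare prompts and saving the outputs for qualitative review.

# Illustrative sketch only: prompting DALL-E 3 with isolated diagnostic terms.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
# The term list and file naming are hypothetical, not the authors' protocol.
from pathlib import Path
import urllib.request

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Isolated diagnostic terms used as bare prompts (example list, not from the paper).
DIAGNOSTIC_TERMS = [
    "major depressive disorder",
    "generalized anxiety disorder",
    "schizophrenia",
    "bipolar disorder",
]

OUTPUT_DIR = Path("dalle3_outputs")
OUTPUT_DIR.mkdir(exist_ok=True)

for term in DIAGNOSTIC_TERMS:
    # Submit the diagnostic term as the entire prompt, with no framing text.
    response = client.images.generate(
        model="dall-e-3",
        prompt=term,
        size="1024x1024",
        n=1,
    )
    image_url = response.data[0].url
    # Download each generated image for later qualitative review.
    filename = OUTPUT_DIR / f"{term.replace(' ', '_')}.png"
    urllib.request.urlretrieve(image_url, str(filename))
    print(f"Saved {filename} for prompt: {term!r}")

A loop like this could also record the revised prompt that the DALL-E 3 API returns alongside each image, which is relevant when auditing how a bare diagnostic term is reinterpreted before generation.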

Updated: 2024-12-04