Artificial intelligence and increasing misinformation
The British Journal of Psychiatry (IF 8.7), Pub Date: 2023-10-26, DOI: 10.1192/bjp.2023.136
Scott Monteith, Tasha Glenn, John R Geddes, Peter C Whybrow, Eric Achtyes, Michael Bauer

With the recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video information based on training data. Commercial use of generative AI is expanding rapidly and the public will routinely receive messages created by generative AI. However, generative AI models may be unreliable, routinely make errors and widely spread misinformation. Misinformation created by generative AI about mental illness may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including about medicine and psychiatry.



