Simulated misuse of large language models and clinical credit systems
npj Digital Medicine (IF 12.4) | Pub Date: 2024-11-11 | DOI: 10.1038/s41746-024-01306-2 | James T. Anibal, Hannah B. Huth, Jasmine Gunkel, Susan K. Gregurick, Bradford J. Wood
In the future, large language models (LLMs) may enhance the delivery of healthcare, but there are risks of misuse. These methods may be trained to allocate resources via unjust criteria involving multimodal data, including financial transactions, internet activity, social behaviors, and healthcare information. This study shows that LLMs may be biased in favor of collective/systemic benefit over the protection of individual rights and could facilitate AI-driven social credit systems.