Toward a framework for risk mitigation of potential misuse of artificial intelligence in biomedical research
Nature Machine Intelligence (IF 18.8), Pub Date: 2024-11-26, DOI: 10.1038/s42256-024-00926-3
Artem A. Trotsyuk, Quinn Waeiss, Raina Talwar Bhatia, Brandon J. Aponte, Isabella M. L. Heffernan, Devika Madgavkar, Ryan Marshall Felder, Lisa Soleymani Lehmann, Megan J. Palmer, Hank Greely, Russell Wald, Lea Goetz, Markus Trengove, Robert Vandersluis, Herbert Lin, Mildred K. Cho, Russ B. Altman, Drew Endy, David A. Relman, Margaret Levi, Debra Satz, David Magnus
The rapid advancement of artificial intelligence (AI) in biomedical research presents considerable potential for misuse, including authoritarian surveillance, data misuse, bioweapon development, increased inequity and abuses of privacy. We propose a multi-pronged framework for researchers to mitigate these risks, looking first to existing ethical frameworks and regulatory measures that researchers can adapt to their own work, next to off-the-shelf AI solutions, and then to design-specific solutions that researchers can build into their AI to mitigate misuse. When researchers remain unable to address the potential for harmful misuse, and the risks outweigh the potential benefits, we recommend that researchers consider a different approach to answering their research question, or a new research question altogether if the risks remain too great. We apply this framework to three domains of AI research where misuse is likely to be problematic: (1) AI for drug and chemical discovery; (2) generative models for synthetic data; and (3) ambient intelligence.