Health AI requires meaningful human involvement: lessons from warfare
Nature Medicine (IF 58.7). Pub Date: 2024-10-16. DOI: 10.1038/s41591-024-03311-0. James M. Hillis, Kenneth Payne
Artificial intelligence (AI) continues to show its potential for innovation in healthcare. As this transformation occurs, robust methods are needed to assess when and how AI should function autonomously. The scalability of AI use in healthcare will often conflict with the maintenance of human oversight, and an appropriate balance must be sought. We draw on ongoing debates in the military domain to explain why the key to effective AI lies in ‘meaningful human involvement’, and to outline key considerations in attaining it.
The goals of combat contrast sharply with the goals of healthcare. There are nonetheless salient parallels that support shared approaches to weighing autonomy. The fundamental question in both domains is: when, how and by whom should decisions be made? Both arenas demand judgement of acceptable risk to life; the careful delineation of responsibility and accountability; and, ultimately, a decision about how much autonomy should be afforded to nonhuman systems.