An Ethically Supported Framework for Determining Patient Notification and Informed Consent Practices When Using Artificial Intelligence in Health Care
Chest (IF 9.5) Pub Date: 2024-05-22, DOI: 10.1016/j.chest.2024.04.014
Susannah L. Rose, Devora Shapiro
Artificial intelligence (AI) is increasingly being used in health care. Without an ethically supportable, standard approach to knowing when patients should be informed about AI, hospital systems and clinicians run the risk of fostering mistrust among their patients and the public. Therefore, hospital leaders need guidance on when to tell patients about the use of AI in their care. In this article, we provide such guidance. To determine which AI technologies fall into each of the identified categories (no notification or no informed consent [IC], notification only, and formal IC), we propose that AI use-cases should be evaluated using the following criteria: (1) AI model autonomy, (2) departure from standards of practice, (3) whether the AI model is patient facing, (4) clinical risk introduced by the model, and (5) administrative burdens. We take each of these in turn, using a case example of AI in health care to illustrate our proposed framework. As AI becomes more commonplace in health care, our proposal may serve as a starting point for creating consensus on standards for notification and IC for the use of AI in patient care.
