Artificial intelligence in acute medicine: a call to action
Critical Care (IF 8.8) Pub Date: 2024-07-29, DOI: 10.1186/s13054-024-05034-7
Maurizio Cecconi 1,2, Massimiliano Greco 1,2, Benjamin Shickel 3, Jean-Louis Vincent 4, Azra Bihorac 3

On November 30, 2022, OpenAI released ChatGPT, the first chatbot and virtual assistant powered by large language models (LLMs). ChatGPT attracted over 1 million users in just five days and reached 200 million monthly active users worldwide within fifteen months. This sudden surge of interest has pushed artificial intelligence (AI) from a niche concept into a mainstream obsession.

AI and machine learning were already making strides in medicine and healthcare, but with the advent of prescriptive and generative AI, new opportunities emerged to redefine how healthcare professionals diagnose, treat, and monitor patients [1]. AI has the potential to enhance diagnostic precision and provide personalized care by bridging the gap between digitalized medical data, clinical decisions, and optimized healthcare delivery.

The term “Augmented Intelligence” may be more fitting than “Artificial Intelligence,” as it emphasizes AI’s role as a collaborator that enhances human intelligence rather than replacing it. As large language models become more advanced, it is important to address the technical, ethical, social, and practical challenges they present.

AI’s role is evolving from a mere tool to an assistant and potentially to a colleague. Just as human colleagues are expected to adhere to strict ethical and professional guidelines, AI systems must also be designed with similar standards in mind to support healthcare professionals and maintain integrity and trust in clinical settings.

Establishing clear guidelines and regulations for augmented intelligence will be vital for integrating AI into healthcare teams [2]. This ensures that AI enhances care delivery in a safe, reliable, and trustworthy manner without compromising patient safety and autonomy and that it benefits all communities, including those in low-resource settings and minority groups.

This insight is derived from the collaborative perspectives of 22 experts from a 3-day international AI roundtable at the ISICEM conference in Brussels in March 2024. It sheds light on the current situation and challenges regarding AI in acute medicine and urges stakeholders to work together to leverage AI-enabled care and expand acute medicine’s reach.

Several papers have demonstrated that predictive models could recognize patterns or identify early warning signs of critical conditions, potentially leading to more timely interventions and improved patient outcomes [3].
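
To make this concrete, the minimal sketch below trains a logistic-regression early-warning model on synthetic vital-sign data. The chosen features (heart rate, respiratory rate, mean arterial pressure, lactate), the simulated outcomes, and the 0.5 alert threshold are illustrative assumptions, not drawn from the studies cited above.

```python
# Minimal sketch: a logistic-regression early-warning model trained on
# synthetic vital-sign data. Features, labels, and threshold are hypothetical;
# a real model would be trained and validated on curated ICU records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical features: heart rate, respiratory rate, mean arterial pressure, lactate.
X = np.column_stack([
    rng.normal(85, 15, n),    # heart rate (bpm)
    rng.normal(18, 5, n),     # respiratory rate (breaths/min)
    rng.normal(75, 12, n),    # mean arterial pressure (mmHg)
    rng.normal(1.8, 1.0, n),  # lactate (mmol/L)
])
# Synthetic label: "deterioration within 24 h", loosely tied to the vitals.
logits = (0.03 * (X[:, 0] - 85) + 0.1 * (X[:, 1] - 18)
          - 0.05 * (X[:, 2] - 75) + 0.8 * (X[:, 3] - 1.8))
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new (hypothetical) patient and raise a flag above a chosen threshold.
new_patient = np.array([[110, 28, 62, 3.5]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted 24-h deterioration risk: {risk:.2f}")
if risk > 0.5:
    print("Early-warning flag raised for clinician review.")
```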

AI systems can combine data from diverse sources such as imaging, electronic health records, and wearable devices to offer a holistic view of a patient’s condition. They can also help extract usable information from the current data overload that everyone in healthcare is exposed to. AI systems can then help make clinical decisions that align with real-world complexities and patient-specific needs, providing healthcare professionals with a comprehensive understanding that improves care delivery.
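
As a simple illustration of this kind of integration, the sketch below merges hypothetical EHR, imaging, and wearable entries into one time-ordered patient timeline; all field names and values are invented for the example.

```python
# Minimal sketch: merging heterogeneous sources (EHR labs, imaging reports,
# wearable readings) into a single time-ordered view of one patient.
# Field names and values are hypothetical placeholders.
import pandas as pd

ehr = pd.DataFrame({
    "time": pd.to_datetime(["2024-05-01 08:00", "2024-05-01 14:00"]),
    "source": "ehr",
    "event": ["lactate 2.1 mmol/L", "creatinine 1.4 mg/dL"],
})
imaging = pd.DataFrame({
    "time": pd.to_datetime(["2024-05-01 10:30"]),
    "source": "imaging",
    "event": ["chest X-ray: bilateral infiltrates"],
})
wearable = pd.DataFrame({
    "time": pd.to_datetime(["2024-05-01 09:00", "2024-05-01 09:05"]),
    "source": "wearable",
    "event": ["heart rate 112 bpm", "SpO2 91%"],
})

# One chronological timeline a downstream model (or clinician) can consume.
timeline = pd.concat([ehr, imaging, wearable]).sort_values("time").reset_index(drop=True)
print(timeline)
```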

AI systems could also streamline note-taking, documentation, and correspondence between healthcare providers and patients [4]. Additionally, AI could help in research. Current research in acute settings has several limitations. Populations of acutely ill patients are highly heterogeneous, and diseases are exceptionally dynamic. Unsurprisingly, randomized controlled trials have often failed to show positive results. AI could significantly improve trial design and execution by offering new ways to address these challenges. AI could identify precise patient phenotypes for accurate inclusion criteria, ensuring that trials enroll the most suitable participants [5]. It can also assist in real-time monitoring of trial participants, providing early signals of efficacy or adverse effects. Additionally, predictive models can help adapt trial designs dynamically, allowing investigators to adjust interventions based on emerging data.
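
The article does not prescribe a particular phenotyping method; as one illustrative possibility, the sketch below clusters synthetic patient features with k-means and treats one cluster as a hypothetical trial inclusion criterion. The features, cluster count, and data are assumptions made for the example.

```python
# Minimal sketch: unsupervised phenotyping with k-means on synthetic patient
# features, then using one cluster as a hypothetical trial inclusion criterion.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical features per patient: age, lactate, norepinephrine dose, PaO2/FiO2.
patients = rng.normal(
    loc=[65, 2.0, 0.1, 250],
    scale=[15, 1.0, 0.08, 80],
    size=(300, 4),
)

scaled = StandardScaler().fit_transform(patients)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

# Suppose cluster 2 matches the phenotype a trial wants to target.
eligible = patients[labels == 2]
print(f"Candidate participants in target phenotype: {len(eligible)}")
```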

Creating digital twins of patients and healthcare systems will enable researchers and clinicians to simulate potential outcomes, optimize resource allocation, and effectively guide care delivery. By developing accurate, data-driven digital twins, healthcare professionals can conduct controlled experiments and identify the best strategies to deliver truly personalized precision medicine. This digital “dry run” could reduce the risks and costs of testing novel treatments in vulnerable patient populations. Before jumping to these implementations, however, further research must prove how AI models can truly discriminate association from causality or how they can help investigators reduce uncertainty in their models, make trial design more efficient, and, ultimately, improve clinical outcomes [6].
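
A digital twin of a healthcare system can be as elaborate as the data allow. Purely as a toy illustration of the "dry run" idea, the sketch below runs a crude Monte-Carlo model of ICU bed occupancy under two hypothetical capacity scenarios; every parameter (arrival rate, length of stay, bed counts) is invented.

```python
# Toy sketch: a Monte-Carlo "digital twin" of ICU bed occupancy, comparing two
# hypothetical capacity scenarios in silico before changing the real unit.
import random

def simulate(beds: int, arrival_rate: float, mean_los_days: float, days: int = 90) -> float:
    """Return the fraction of arriving patients turned away for lack of a bed."""
    random.seed(1)  # reseed so both scenarios see the same random stream
    occupied = []   # remaining length of stay (days) for each occupied bed
    refused = arrivals = 0
    for _ in range(days):
        # Discharge patients whose stay has finished.
        occupied = [los - 1 for los in occupied if los - 1 > 0]
        # Crude daily arrivals, mean roughly equal to arrival_rate.
        for _ in range(random.randint(0, int(2 * arrival_rate))):
            arrivals += 1
            if len(occupied) < beds:
                occupied.append(max(1, int(random.expovariate(1 / mean_los_days))))
            else:
                refused += 1
    return refused / max(arrivals, 1)

# "Dry run" two scenarios before committing resources.
print("Refusal rate, 10 beds:", round(simulate(beds=10, arrival_rate=3, mean_los_days=4), 3))
print("Refusal rate, 12 beds:", round(simulate(beds=12, arrival_rate=3, mean_los_days=4), 3))
```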

Research has shown that AI can predict clinical trajectories in research settings, but moving towards actionable AI or AI-enabled care, where insights directly guide clinical decisions in real time, remains a significant challenge.

Establishing standardized data frameworks and promoting their adoption is vital to facilitating the seamless exchange of healthcare data across systems. Data fragmentation obstructs the development of robust AI models and hinders their smooth integration into clinical workflows.

In ICUs, for instance, patients often present a wide range of conditions, making it challenging to classify their medical phenotypes without detailed patient data. Real-time data gathering and analysis are crucial to effectively identify individual patient phenotypes, a practice that is not commonly implemented. The establishment of collaborative real-time data networks is essential, as no single ICU can independently gather all necessary information.

AI-based clinical decision support systems often lack situational awareness because their training rarely replicates real-world clinical decision-making processes. This gap can prevent AI systems from understanding clinical context and providing valuable input for clinical decision-making.

Privacy, data security, and transparency remain serious concerns for patients, families, healthcare organizations, and governments.

Clinician acceptance is hindered by the "black box" problem, where models are not easily interpretable. Deep learning models often lack transparency, leading to skepticism among clinicians who cannot see how a system arrives at its conclusions.

Overcoming these challenges necessitates a comprehensive framework prioritizing the following core elements:

  1. Social Contract for AI: Develop a social contract with input from clinicians, data experts, policymakers, patients, and families to ensure that AI tools respect patient rights and autonomy while upholding ethical standards.

  2. Human-Centric AI Development: Empower rather than replace healthcare professionals. Systems must be designed to enhance clinical decision-making while maintaining the clinician-patient relationship [7]. This should be done as inclusively as possible, driving improvements for everyone.

  3. Data Standardization and Infrastructure: Establish unified standards and infrastructure to enable seamless data sharing and foster collaboration. For instance, OMOP, FHIR, and i2b2 can play pivotal roles in creating robust data structures that support AI integration (a minimal mapping sketch follows this list).

  4. Federated Real-Time Networks: Create real-time clinical research networks to enhance collaboration and enable data aggregation to study rare events. This will also improve phenotyping and allow clinicians to tailor treatments based on precise patient subtypes, moving toward a personalized and actionable AI model.

  5. Education and Training: Provide healthcare professionals with the training to utilize AI tools effectively, understand their strengths and limitations, understand and accept uncertainty, and interpret probabilistic information. At the same time, build a "learn while doing" culture, with AI-augmented human systems continuously improving their models as they analyze new data and adapt to changing clinical landscapes.

  6. Collaborative Research and Development: Encourage partnerships between the public and private sectors to drive research that addresses critical needs in acute and critical care. This should be inclusive, with a specific focus on not leaving behind minorities and low-resource settings.
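
As referenced in item 3 above, here is a minimal sketch of what standardization can look like in practice: a locally formatted vital-sign record mapped into a simplified FHIR-style Observation resource. The structure is deliberately trimmed for illustration; a production pipeline would validate against the full FHIR specification or load an OMOP CDM table, and the LOINC and UCUM codes shown are the ones commonly used for heart rate.

```python
# Minimal sketch: mapping a flat, locally formatted vital-sign record into a
# simplified FHIR-style Observation resource. Trimmed for illustration only.
import json

local_record = {"patient_id": "12345", "measure": "heart_rate",
                "value": 112, "taken_at": "2024-05-01T09:00:00Z"}

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}]
    },
    "subject": {"reference": f"Patient/{local_record['patient_id']}"},
    "effectiveDateTime": local_record["taken_at"],
    "valueQuantity": {"value": local_record["value"], "unit": "beats/minute",
                      "system": "http://unitsofmeasure.org", "code": "/min"},
}

print(json.dumps(observation, indent=2))
```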

Integrating AI in medicine can transform healthcare delivery, but achieving this vision requires a concerted effort (Fig. 1). Stakeholders must embrace a unified approach to AI integration, advocating for robust data infrastructures, ethical frameworks, and collaborative networks that can harness the full potential of AI. By focusing on data standardization, real-time ICU networks, and education, and by working toward a new "social contract for AI" among all stakeholders, we can move toward a future where AI-enabled care brings acute medicine where and when it is needed, ultimately improving patient outcomes and enhancing the clinician-patient relationship.

Fig. 1 DALL-E interpretation of the viewpoint (image generated with DALL-E, OpenAI, 2024).

Collaborators of the ISICEM AI roundtable: Maurizio Cecconi, Massimiliano Greco, Benjamin Shickel, Derek Angus, Heatherlee Bailey, Elena Bignami, Thierry Calandra, Leo Anthony Celi, Sharon Einav, Paul Elbers, Ali Ercole, Hernando Gomez, Michelle NG Gong, Matthieu Komorowski, Vincent Liu, Soojin Park, Aarti Sarwal, Christopher Seymour, Fernando Zampieri, Fabio Silvio Taccone, Jean-Louis Vincent, Azra Bihorac.

Availability of data and materials

No datasets were generated or analysed during the current study.

  1. The AI doctor will see you… eventually. The Economist; 2024.

  2. Pinsky MR, Bedoya A, Bihorac A, Celi L, Churpek M, Economou-Zavlanos NJ, Elbers P, Saria S, Liu V, Lyons PG, et al. Use of artificial intelligence in critical care: opportunities and obstacles. Crit Care. 2024;28(1):113.


  3. Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA. The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care. Nat Med. 2018;24(11):1716–20.


  4. Komorowski M, del Pilar Arias López M, Chang AC. How could ChatGPT impact my practice as an intensivist? An overview of potential applications, risks and limitations. Intensive Care Med. 2023;49(7):844–7.


  5. Angus DC. Randomized clinical trials of Artificial Intelligence. JAMA. 2020;323(11):1043–5.


  6. Messeri L, Crockett MJ. Artificial intelligence and illusions of understanding in scientific research. Nature. 2024;627(8002):49–58.


  7. Cecconi M. Reflections of an intensivist in 2050: three decades of clinical practice, research, and human connection. Crit Care. 2023;27(1):391.



Funding

The authors did not receive support from any organization for the submitted work.

Authors and Affiliations

  1. Humanitas University, Milan, Italy

    Maurizio Cecconi & Massimiliano Greco

  2. IRCCS Humanitas Research Hospital, Milan, Italy

    Maurizio Cecconi & Massimiliano Greco

  3. University of Florida, Gainesville, USA

    Benjamin Shickel & Azra Bihorac

  4. Erasme University Hospital, HUB, Université Libre de Bruxelles, Brussels, Belgium

    Jean-Louis Vincent


Contributions

Every author participated in the roundtable discussion in Brussels and contributed to the writing of the manuscript.

Corresponding author

Correspondence to Maurizio Cecconi.

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


Cite this article

Cecconi, M., Greco, M., Shickel, B. et al. Artificial intelligence in acute medicine: a call to action. Crit Care 28, 258 (2024). https://doi.org/10.1186/s13054-024-05034-7



