npj Digital Medicine (IF 12.4) | Pub Date: 2024-07-22 | DOI: 10.1038/s41746-024-01190-w
Aurélie Pahud de Mortanges 1, Haozhe Luo 1, Shelley Zixin Shu 1, Amith Kamath 1, Yannick Suter 1,2, Mohamed Shelan 2, Alexander Pöllinger 3, Mauricio Reyes 1,2
Explainable artificial intelligence (XAI) has experienced a vast increase in recognition over the last few years. While the technical developments are manifold, less focus has been placed on the clinical applicability and usability of systems. Moreover, not much attention has been given to XAI systems that can handle multimodal and longitudinal data, which we postulate are important features in many clinical workflows. In this study, we review, from a clinical perspective, the current state of XAI for multimodal and longitudinal datasets and highlight the challenges thereof. Additionally, we propose the XAI orchestrator, an instance that aims to help clinicians with the synopsis of multimodal and longitudinal data, the resulting AI predictions, and the corresponding explainability output. We propose several desirable properties of the XAI orchestrator, such as being adaptive, hierarchical, interactive, and uncertainty-aware.
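The abstract does not describe an implementation, but the orchestrator's proposed properties (hierarchical grouping of multimodal and longitudinal explanations, uncertainty awareness, and clinician-adjustable interaction) can be loosely illustrated in code. The sketch below is purely hypothetical: the `Explanation` and `XAIOrchestrator` names, fields, and the uncertainty-threshold mechanism are illustrative assumptions, not part of the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Explanation:
    # Hypothetical container for one explainability output.
    modality: str       # e.g. "CT", "MRI", "clinical_notes"
    timepoint: int      # index in the longitudinal series
    summary: str        # human-readable explanation text
    uncertainty: float  # 0.0 (confident) to 1.0 (highly uncertain)

@dataclass
class XAIOrchestrator:
    """Illustrative orchestrator: collects per-modality, per-timepoint
    explanations and builds a hierarchical, uncertainty-aware synopsis."""
    explanations: list = field(default_factory=list)

    def add(self, exp: Explanation) -> None:
        self.explanations.append(exp)

    def synopsis(self, max_uncertainty: float = 0.5) -> dict:
        """Group explanations by modality (hierarchical view), ordered by
        timepoint (longitudinal view), flagging entries whose uncertainty
        exceeds a clinician-set threshold (interactive, uncertainty-aware)."""
        grouped: dict = {}
        for exp in sorted(self.explanations, key=lambda e: e.timepoint):
            grouped.setdefault(exp.modality, []).append({
                "timepoint": exp.timepoint,
                "summary": exp.summary,
                "flagged_uncertain": exp.uncertainty > max_uncertainty,
            })
        return grouped

if __name__ == "__main__":
    orch = XAIOrchestrator()
    orch.add(Explanation("MRI", 1, "Lesion growth drives prediction", 0.7))
    orch.add(Explanation("MRI", 0, "Baseline lesion salient", 0.2))
    orch.add(Explanation("CT", 0, "Calcification region salient", 0.1))
    print(orch.synopsis(max_uncertainty=0.5))
```

Lowering `max_uncertainty` would surface more low-confidence explanations for clinician review, which is one plausible way the "interactive" and "uncertainty-aware" properties could combine in practice.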
Orchestrating explainable artificial intelligence for multimodal and longitudinal data in medical imaging