Interpretability of Clinical Decision Support Systems Based on Artificial Intelligence from Technological and Medical Perspective: A Systematic Review
Journal of Healthcare Engineering Pub Date: 2023-02-03, DOI: 10.1155/2023/9919269
Qian Xu 1, 2, 3, 4, 5 , Wenzhao Xie 4 , Bolin Liao 3 , Chao Hu 6 , Lu Qin 2 , Zhengzijin Yang 2 , Huan Xiong 2 , Yi Lyu 2 , Yue Zhou 2 , Aijing Luo 1, 4, 5
Background. Artificial intelligence (AI) has developed rapidly, and its applications now extend to clinical decision support systems (CDSS) for improving healthcare quality. However, the limited interpretability of AI-driven CDSS poses a significant challenge to widespread adoption.

Objective. This study reviews the knowledge-based and data-based CDSS literature on interpretability in health care. It highlights the relevance of interpretability for CDSS and the areas for improvement from technological and medical perspectives.

Methods. A systematic search was conducted for interpretability-related literature published from 2011 to 2020 and indexed in five databases: Web of Science, PubMed, ScienceDirect, Cochrane, and Scopus. Journal articles focusing on the interpretability of CDSS were included for analysis. Experienced researchers also manually reviewed the selected articles for inclusion/exclusion and categorization.

Results. Based on the inclusion and exclusion criteria, 20 articles from 16 journals were selected for this review. Interpretability, meaning a transparent model structure, a clear relationship between input and output, and the explainability of AI algorithms, is essential for CDSS application in healthcare settings. Methods for improving the interpretability of CDSS include ante-hoc methods applied to white-box models, such as fuzzy logic, decision rules, logistic regression, and decision trees for knowledge-based AI, and post-hoc methods applied to black-box models, such as feature importance, sensitivity analysis, visualization, and activation maximization. A number of factors, such as data type, biomarkers, human-AI interaction, and the needs of clinicians and patients, can affect the interpretability of CDSS.

Conclusions. The review explores the meaning of interpretability for CDSS and summarizes current methods for improving interpretability from technological and medical perspectives. The results contribute to the understanding of the interpretability of AI-based CDSS in health care. Future studies should focus on establishing a formalism for defining interpretability, identifying the properties of interpretability, and developing appropriate, objective metrics for interpretability; in addition, users' demand for interpretability and how to express and provide explanations are also directions for future research.
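As an illustration of the ante-hoc, white-box approach named in the Results, the sketch below trains a shallow decision tree and prints its learned rules as human-readable if/then statements, i.e., a transparent mapping from inputs to output. This is a minimal example, not drawn from the reviewed studies: scikit-learn's bundled breast-cancer dataset stands in for clinical data, and the depth cap is an illustrative choice.

```python
# Minimal sketch of an ante-hoc interpretable model: a shallow decision tree
# whose learned rules a clinician can read and audit directly.
# Dataset and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Depth is capped so the rule set stays small enough to inspect.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

print(f"held-out accuracy: {tree.score(X_test, y_test):.3f}")
# export_text renders the fitted model as if/then rules over the features.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The interpretability here comes from the model family itself: the printed rules are the model, so no separate explanation step is needed.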

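By contrast, a post-hoc method explains a model that is not transparent by construction. The sketch below applies one technique from the feature-importance family mentioned in the Results, permutation importance, to a black-box ensemble; the dataset and model choice are again illustrative assumptions rather than methods taken from the reviewed articles.

```python
# Minimal sketch of a post-hoc explanation for a black-box model:
# permutation feature importance, a model-agnostic technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# The "black box": an ensemble whose internals are hard to inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance measures how much held-out accuracy drops when each
# feature is shuffled, ranking features by their effect on the prediction.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: "
          f"{result.importances_mean[idx]:.4f} +/- {result.importances_std[idx]:.4f}")
```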
Updated: 2023-02-03