Towards equitable AI in oncology
Nature Reviews Clinical Oncology (IF 81.1) | Pub Date: 2024-06-07 | DOI: 10.1038/s41571-024-00909-8 | Vidya Sankar Viswanathan, Vani Parmar, Anant Madabhushi
Artificial intelligence (AI) stands at the threshold of revolutionizing clinical oncology, with considerable potential to improve early cancer detection and risk assessment, and to enable more accurate, personalized treatment recommendations. However, a notable imbalance exists in the distribution of the benefits of AI, which disproportionately favour patients in specific geographical regions and specific populations. In this Perspective, we discuss the need to foster the development of equitable AI tools that are both accurate in and accessible to a diverse range of patient populations, including those in low-income to middle-income countries. We also discuss some of the challenges and potential solutions in attaining equitable AI, including addressing the historically limited representation of diverse populations in existing clinical datasets and the use of inadequate clinical validation methods. Additionally, we focus on extant sources of inequity, including the type of model approach (such as deep learning versus feature-engineering-based methods), the implications of dataset curation strategies, the need for rigorous validation across a variety of populations and settings, and the risk of introducing contextual bias that comes with developing tools predominantly in high-income countries.