npj Digital Medicine (IF 12.4), Pub Date: 2024-10-03, DOI: 10.1038/s41746-024-01270-x
Vijaytha Muralidharan, Boluwatife Adeleye Adewale, Caroline J. Huang, Mfon Thelma Nta, Peter Oluwaduyilemi Ademiju, Pirunthan Pathmarajah, Man Kien Hang, Oluwafolajimi Adesanya, Ridwanullah Olamide Abdullateef, Abdulhammed Opeyemi Babatunde, Abdulquddus Ajibade, Sonia Onyeka, Zhou Ran Cai, Roxana Daneshjou, Tobi Olatunji
Artificial intelligence and machine learning (AI/ML) models in healthcare may exacerbate existing health biases. Regulatory oversight is critical for evaluating the safety and effectiveness of AI/ML devices in clinical settings. We conducted a scoping review of the 692 FDA-approved AI/ML-enabled medical devices cleared between 1995 and 2023 to examine transparency, safety reporting, and sociodemographic representation. Only 3.6% of approvals reported race/ethnicity, and 99.1% provided no socioeconomic data; 81.6% did not report the age of study subjects. Only 46.1% provided comprehensive, detailed results of performance studies, and only 1.9% included a link to a scientific publication with safety and efficacy data. Only 9.0% contained a prospective study for post-market surveillance. Despite the growing number of market-approved medical devices, our findings show that FDA reporting remains inconsistent. Demographic and socioeconomic characteristics are underreported, exacerbating the risk of algorithmic bias and health disparities.
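To put the reported percentages in perspective, the following sketch back-calculates the approximate number of devices each figure corresponds to out of the 692 reviewed. These counts are rough estimates derived only from the rounded percentages in the abstract, not figures taken from the paper itself:

```python
# Approximate device counts implied by the abstract's percentages.
# N = 692 FDA-approved AI/ML-enabled devices (1995-2023).
# Counts are back-calculated estimates from rounded percentages.
N = 692

reported = {
    "race/ethnicity reported": 3.6,
    "no socioeconomic data": 99.1,
    "age of subjects not reported": 81.6,
    "detailed performance-study results": 46.1,
    "linked scientific publication": 1.9,
    "prospective post-market study": 9.0,
}

for label, pct in reported.items():
    print(f"{label}: ~{round(N * pct / 100)} of {N} devices")
```

For example, 3.6% of 692 corresponds to roughly 25 device approvals reporting race/ethnicity, and 1.9% to roughly 13 linking a scientific publication, which conveys the scale of the reporting gap more concretely than percentages alone.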
Title: A scoping review of reporting gaps in FDA-approved AI medical devices