Laboratory Data as a Potential Source of Bias in Healthcare Artificial Intelligence and Machine Learning Models.
Annals of Laboratory Medicine (IF 4.0), Pub Date: 2024-10-24, DOI: 10.3343/alm.2024.0323
Hung S Luu

Artificial intelligence (AI) and machine learning (ML) are anticipated to transform the practice of medicine. As one of the largest sources of digital data in healthcare, laboratory results can strongly influence AI and ML algorithms that require large sets of healthcare data for training. Bias embedded in AI and ML models not only has disastrous consequences for the quality of care but may also perpetuate and exacerbate health disparities. A lack of test harmonization, defined as the inability to produce comparable results and the same interpretation irrespective of the method or instrument platform used to produce the result, may introduce aggregation bias into algorithms, with potentially adverse outcomes for patients. Limited interoperability of laboratory results at the technical, syntactic, semantic, and organizational levels is a source of embedded bias that limits the accuracy and generalizability of algorithmic models. Population-specific issues, such as inadequate representation in clinical trials and inaccurate race attribution, not only affect the interpretation of laboratory results but may also perpetuate erroneous conclusions drawn from AI and ML models in the healthcare literature.
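The aggregation bias described above can be sketched with a minimal, hypothetical simulation: two labs measure the same analyte in comparable patient populations, but one instrument platform reports values with a systematic calibration offset. The cutoff, offset, and distributions below are illustrative assumptions, not values from the article; the point is only that pooling unharmonized results makes a naive threshold-based model flag the two sites' patients at very different rates.

```python
import random

random.seed(42)

# Assumed clinical decision threshold (illustrative units).
ABNORMAL_CUTOFF = 1.2

def draw_true_values(n):
    """Simulate patients' true analyte concentrations (same distribution at both sites)."""
    return [random.gauss(1.0, 0.2) for _ in range(n)]

# Each record is (true_value, reported_value).
lab_a = [(v, v) for v in draw_true_values(1000)]        # platform reports the true value
lab_b = [(v, v + 0.3) for v in draw_true_values(1000)]  # platform adds a systematic +0.3 offset

def flag_rate(results):
    """Fraction of results a naive model would flag as abnormal, based on reported values."""
    return sum(1 for _, reported in results if reported > ABNORMAL_CUTOFF) / len(results)

# Identical underlying populations, very different flag rates once the
# unharmonized reported values are used: the offset, not patient physiology,
# drives the difference a pooled training set would learn.
print(f"Lab A flag rate: {flag_rate(lab_a):.2f}")
print(f"Lab B flag rate: {flag_rate(lab_b):.2f}")
```

In a pooled training set, a model would attribute Lab B's inflated flag rate to its patients rather than to its instrument calibration, which is the aggregation bias harmonization is meant to prevent.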
