Achieve fairness without demographics for dermatological disease diagnosis
Medical Image Analysis (IF 10.7), Pub Date: 2024-05-03, DOI: 10.1016/j.media.2024.103188
Ching-Hao Chiu , Yu-Jen Chen , Yawen Wu , Yiyu Shi , Tsung-Yi Ho

In medical image diagnosis, fairness has become increasingly crucial. Without bias mitigation, deploying unfair AI would harm the interests of underprivileged populations and could deepen social divisions. Recent research addresses prediction biases in deep learning models concerning demographic groups (e.g., gender, age, and race) by utilizing demographic (sensitive attribute) information during training. However, many sensitive attributes naturally exist in dermatological disease images. If the trained model only targets fairness for a specific attribute, it can remain unfair with respect to other attributes. Moreover, training a model that accommodates multiple sensitive attributes is impractical due to privacy concerns. To overcome this, we propose a method that enables fair predictions for sensitive attributes during the testing phase without using such information during training. Inspired by prior work highlighting the impact of feature entanglement on fairness, we enhance the model features by capturing the features related to the sensitive and target attributes and regularizing the feature entanglement between the corresponding classes. This ensures that the model classifies based only on features related to the target attribute, without relying on features associated with sensitive attributes, thereby improving both fairness and accuracy. Additionally, we use disease masks from the Segment Anything Model (SAM) to enhance the quality of the learned features. Experimental results demonstrate that the proposed method improves classification fairness compared to state-of-the-art methods on two dermatological disease datasets.
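The abstract does not specify the exact form of the entanglement regularizer, but the idea of discouraging the model from mixing target-related and sensitive-related features can be illustrated with a simple decorrelation penalty. The sketch below is a hypothetical minimal example, not the paper's method: it assumes the backbone's features have been split into a target-related group and a sensitive-related group, and penalizes their cross-correlation so a classifier head can rely on the target group alone.

```python
import numpy as np

def entanglement_penalty(target_feats, sensitive_feats, eps=1e-8):
    """Mean absolute cross-correlation between two feature groups.

    target_feats, sensitive_feats: (batch, dim) arrays of features that a
    backbone has (hypothetically) separated into target-related and
    sensitive-related channels. A value near 0 means the two groups are
    close to decorrelated, i.e. weakly entangled.
    """
    # Center and L2-normalize each feature channel over the batch.
    t = target_feats - target_feats.mean(axis=0, keepdims=True)
    s = sensitive_feats - sensitive_feats.mean(axis=0, keepdims=True)
    t = t / (np.linalg.norm(t, axis=0, keepdims=True) + eps)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + eps)
    corr = t.T @ s                       # (dim_t, dim_s) cross-correlations
    return float(np.abs(corr).mean())    # scalar penalty to add to the loss

rng = np.random.default_rng(0)
t = rng.normal(size=(64, 8))
s_independent = rng.normal(size=(64, 8))          # unrelated to t
s_entangled = t + 0.01 * rng.normal(size=(64, 8)) # nearly a copy of t

# The penalty is small for independent groups and large for entangled ones.
low = entanglement_penalty(t, s_independent)
high = entanglement_penalty(t, s_entangled)
```

In training, a term like this would be added to the classification loss with a weight, pushing the network toward representations where sensitive-attribute information does not leak into the channels the disease classifier uses.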
