SFPL: Sample-specific fine-grained prototype learning for imbalanced medical image classification
Medical Image Analysis ( IF 10.7 ) Pub Date : 2024-07-25 , DOI: 10.1016/j.media.2024.103281
Yongbei Zhu 1 , Shuo Wang 1 , He Yu 2 , Weimin Li 2 , Jie Tian 1
Imbalanced classification is a common and difficult task in many medical image analysis applications. Most existing approaches focus on balancing feature distributions and classifier weights between classes, while ignoring intra-class heterogeneity and the individuality of each sample. In this paper, we propose a sample-specific fine-grained prototype learning (SFPL) method that learns a fine-grained representation of the majority class and a cosine classifier tailored to each sample, so that the classification model is highly tuned to each individual's characteristics. SFPL first builds multiple prototypes to represent the majority class, and then updates these prototypes through a mixture weighting strategy. Moreover, we propose a uniform loss based on set representations that encourages the fine-grained prototypes to distribute uniformly. To establish associations between the fine-grained prototypes and the cosine classifier, we propose a selective attention aggregation module that selects the effective fine-grained prototypes for the final classification. Extensive experiments on three different tasks demonstrate that SFPL outperforms state-of-the-art (SOTA) methods. Importantly, as the imbalance ratio increases from 10 to 100, the improvement of SFPL over SOTA methods grows from 2.2% to 2.4%; as the training data decreases from 800 to 100 samples, the improvement of SFPL over SOTA methods grows from 2.2% to 3.8%.
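The core idea of the abstract — representing the majority class with multiple fine-grained prototypes, weighting them per sample via attention, and scoring with a cosine classifier — can be illustrated with a minimal sketch. This is not the authors' implementation: the softmax temperature `tau`, the single minority prototype, and the function names are illustrative assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between each row of `a` and each row of `b`."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def classify_with_prototypes(x, majority_prototypes, minority_prototype, tau=10.0):
    """Score one sample against K fine-grained majority prototypes and a
    minority prototype.  Softmax attention over the per-prototype cosine
    similarities produces sample-specific weights, so each sample is
    effectively compared with its own aggregated majority prototype.

    x                    : (d,)   feature vector of the sample
    majority_prototypes  : (K, d) fine-grained prototypes of the majority class
    minority_prototype   : (d,)   prototype of the minority class
    tau                  : attention temperature (illustrative choice)
    """
    sims = cosine_sim(x[None, :], majority_prototypes)[0]        # (K,)
    logits = tau * sims
    attn = np.exp(logits - logits.max())                          # stable softmax
    attn = attn / attn.sum()                                      # sample-specific weights
    agg = attn @ majority_prototypes                              # aggregated majority prototype
    score_major = cosine_sim(x[None, :], agg[None, :])[0, 0]
    score_minor = cosine_sim(x[None, :], minority_prototype[None, :])[0, 0]
    return np.array([score_major, score_minor])
```

A sample lying close to any one of the fine-grained prototypes receives a high majority-class score, even if it is far from the other prototypes — which is the motivation for modeling intra-class heterogeneity with several prototypes instead of a single class mean.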
