Diversity matters: Cross-head mutual mean-teaching for semi-supervised medical image segmentation
Medical Image Analysis (IF 10.7). Pub Date: 2024-08-10. DOI: 10.1016/j.media.2024.103302. Wei Li, Ruifeng Bian, Wenyi Zhao, Weijin Xu, Huihua Yang
Semi-supervised medical image segmentation (SSMIS) has witnessed substantial advancements by leveraging limited labeled data and abundant unlabeled data. Nevertheless, existing state-of-the-art (SOTA) methods struggle to predict accurate labels for the unlabeled data, introducing disruptive noise during training and making the model susceptible to overfitting erroneous information. Moreover, applying perturbations to inaccurate predictions further impedes consistent learning. To address these concerns, we propose a novel cross-head mutual mean-teaching network (CMMT-Net) incorporating weak-strong data augmentation, thereby benefiting both co-training and consistency learning. More concretely, CMMT-Net extends the cross-head co-training paradigm by introducing two auxiliary mean-teacher models, which yield more accurate predictions and provide supplementary supervision. Predictions made by one mean teacher on weakly augmented samples guide the training of the other student on strongly augmented samples. Furthermore, two distinct yet synergistic data perturbations are introduced at the pixel and region levels. We propose mutual virtual adversarial training (MVAT) to smooth the decision boundary and enhance feature representations, and a cross-set CutMix strategy to generate more diverse training samples that capture inherent structural information in the data. Notably, CMMT-Net simultaneously applies data, feature, and network perturbations, amplifying model diversity and generalization performance. Experimental results on three publicly available datasets indicate that our approach yields remarkable improvements over previous SOTA methods across various semi-supervised scenarios. The code is available at https://github.com/Leesoon1984/CMMT-Net.
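Two of the building blocks named in the abstract, the mean-teacher EMA weight update and cross-set CutMix mixing of labeled and unlabeled images, can be sketched minimally as follows. This is an illustrative sketch only: the function names, the 2-D single-channel images, and the fixed patch ratio are assumptions, not the authors' implementation.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean-teacher update: teacher weights are an exponential
    moving average (EMA) of the student's weights."""
    return {k: alpha * teacher_w[k] + (1 - alpha) * student_w[k]
            for k in teacher_w}

def cross_set_cutmix(labeled_img, unlabeled_img, rng, ratio=0.5):
    """Cross-set CutMix (sketch): paste a rectangular region cut from a
    labeled image into an unlabeled image to diversify training samples."""
    h, w = labeled_img.shape
    ch, cw = int(h * ratio), int(w * ratio)   # patch size
    y = rng.integers(0, h - ch + 1)           # random top-left corner
    x = rng.integers(0, w - cw + 1)
    mixed = unlabeled_img.copy()
    mixed[y:y + ch, x:x + cw] = labeled_img[y:y + ch, x:x + cw]
    return mixed

# Usage: after each student optimizer step, refresh the teacher, and
# build mixed samples for the consistency loss.
teacher = ema_update({"w": 1.0}, {"w": 0.0}, alpha=0.9)
rng = np.random.default_rng(0)
mixed = cross_set_cutmix(np.zeros((8, 8)), np.ones((8, 8)), rng)
```

In the full method the same mixing would also be applied to the teacher's pseudo-labels so that the mixed image and its supervision stay aligned.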
Updated: 2024-08-10