Cross-view discrepancy-dependency network for volumetric medical image segmentation
Medical Image Analysis (IF 10.7) | Pub Date: 2024-08-30 | DOI: 10.1016/j.media.2024.103329 | Shengzhou Zhong, Wenxu Wang, Qianjin Feng, Yu Zhang, Zhenyuan Ning
Limited data poses a crucial challenge for deep learning-based volumetric medical image segmentation, and many methods try to represent a volume by its sub-volumes (i.e., multi-view slices) to alleviate this issue. However, such methods generally sacrifice inter-slice spatial continuity. A promising avenue is to incorporate multi-view information into the network to enhance volume representation learning, but most existing studies overlook the discrepancy and dependency across different views, ultimately limiting the potential of multi-view representations. To this end, we propose a cross-view discrepancy-dependency network (CvDd-Net) for volumetric medical image segmentation, which exploits multi-view slice priors to assist volume representation learning and explores view discrepancy and view dependency to improve performance. Specifically, we develop a discrepancy-aware morphology reinforcement (DaMR) module that effectively learns view-specific representations by mining morphological information (i.e., the boundary and position of the object). In addition, we design a dependency-aware information aggregation (DaIA) module that adequately harnesses the multi-view slice prior, enhancing the individual view representations of the volume and integrating them based on cross-view dependency. Extensive experiments on four medical image datasets (i.e., Thyroid, Cervix, Pancreas, and Glioma) demonstrate the efficacy of the proposed method on both fully-supervised and semi-supervised tasks.
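The abstract's core idea of decomposing a volume into multi-view slices and then re-integrating per-view representations with cross-view weights can be sketched as follows. This is a minimal NumPy illustration of the general multi-view decomposition/fusion pattern, not the authors' CvDd-Net implementation; the function names and the softmax-weighted fusion are illustrative assumptions.

```python
import numpy as np

def multi_view_slices(volume):
    """Decompose a 3D volume into axial, coronal, and sagittal slice stacks."""
    axial = [volume[i, :, :] for i in range(volume.shape[0])]
    coronal = [volume[:, j, :] for j in range(volume.shape[1])]
    sagittal = [volume[:, :, k] for k in range(volume.shape[2])]
    return axial, coronal, sagittal

def fuse_views(view_feats, weights):
    """Fuse per-view feature maps with softmax-normalized cross-view weights.

    In the paper this role is played by the learned DaIA module; here the
    weights are simply given scalars, one per view.
    """
    w = np.exp(weights - np.max(weights))  # numerically stable softmax
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, view_feats))

# A toy 4x5x6 volume yields 4 axial, 5 coronal, and 6 sagittal slices.
vol = np.zeros((4, 5, 6))
axial, coronal, sagittal = multi_view_slices(vol)
print(len(axial), len(coronal), len(sagittal))  # 4 5 6
```

With equal weights the fusion reduces to a plain average of the per-view features; a learned weighting lets the model emphasize the view whose in-plane resolution best captures the object.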
Updated: 2024-08-30