Cross-view discrepancy-dependency network for volumetric medical image segmentation
Medical Image Analysis (IF 10.7), Pub Date: 2024-08-30, DOI: 10.1016/j.media.2024.103329
Shengzhou Zhong, Wenxu Wang, Qianjin Feng, Yu Zhang, Zhenyuan Ning

Limited data poses a crucial challenge for deep learning-based volumetric medical image segmentation, and many methods attempt to represent a volume by its sub-volumes (e.g., multi-view slices) to alleviate this issue. However, such methods generally sacrifice inter-slice spatial continuity. A promising avenue is to incorporate multi-view information into the network to enhance volume representation learning, but most existing studies overlook the discrepancy and dependency across different views, ultimately limiting the potential of multi-view representations. To this end, we propose a cross-view discrepancy-dependency network (CvDd-Net) for volumetric medical image segmentation, which exploits the multi-view slice prior to assist volume representation learning and explores view discrepancy and view dependency for performance improvement. Specifically, we develop a discrepancy-aware morphology reinforcement (DaMR) module that effectively learns view-specific representations by mining morphological information (e.g., boundary and position of the object). In addition, we design a dependency-aware information aggregation (DaIA) module that adequately harnesses the multi-view slice prior, enhancing the individual view representations of the volume and integrating them based on cross-view dependency. Extensive experiments on four medical image datasets (i.e., Thyroid, Cervix, Pancreas, and Glioma) demonstrate the efficacy of the proposed method on both fully-supervised and semi-supervised tasks.
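The multi-view slice prior referred to above means treating the same 3D volume as stacks of 2D slices along the three orthogonal anatomical axes (axial, sagittal, coronal). A minimal NumPy sketch of this decomposition (function and variable names are illustrative, not from the paper; the actual CvDd-Net modules operate on learned features of such slices):

```python
import numpy as np

def multi_view_slices(volume: np.ndarray, index: int):
    """Extract one axial, sagittal, and coronal slice from a 3D volume.

    A toy illustration of the multi-view slice prior: the same voxel
    grid is viewed along three orthogonal axes, so the three slices
    describe complementary cross-sections of one anatomy.
    """
    axial = volume[index, :, :]     # fix the depth (z) axis
    sagittal = volume[:, index, :]  # fix the height (y) axis
    coronal = volume[:, :, index]   # fix the width (x) axis
    return axial, sagittal, coronal

# Example: a toy 8x8x8 volume
vol = np.arange(8 ** 3, dtype=np.float32).reshape(8, 8, 8)
ax, sag, cor = multi_view_slices(vol, 4)
print(ax.shape, sag.shape, cor.shape)  # each is an 8x8 2D slice
```

A view-specific encoder sees only one of these stacks, which is why the discrepancy (each view highlights different morphology) and the dependency (all views describe the same object) both matter when the per-view representations are fused back into a volumetric prediction.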
