Semantics and instance interactive learning for labeling and segmentation of vertebrae in CT images
Medical Image Analysis (IF 10.7), Pub Date: 2024-11-01, DOI: 10.1016/j.media.2024.103380
Yixiao Mao, Qianjin Feng, Yu Zhang, Zhenyuan Ning

Automatically labeling and segmenting vertebrae in 3D CT images constitute a complex multi-task problem. Current methods conduct vertebra labeling and semantic segmentation sequentially, typically with two separate models, and may ignore feature interaction among the tasks. Although instance segmentation approaches with multi-channel prediction have been proposed to alleviate such issues, their utilization of semantic information remains insufficient. An additional challenge for an accurate model is how to effectively distinguish similar adjacent vertebrae and model their sequential attribute. In this paper, we propose a Semantics and Instance Interactive Learning (SIIL) paradigm for synchronous labeling and segmentation of vertebrae in CT images. SIIL comprises semantic feature learning and instance feature learning: the former extracts spinal semantics, while the latter distinguishes vertebral instances. Interactive learning leverages semantic features to improve the separability of vertebral instances, and instance features to help learn position and contour information; a Morphological Instance Localization Learning (MILL) module is introduced to align semantic and instance features and facilitate their interaction. Furthermore, an Ordinal Contrastive Prototype Learning (OCPL) module is devised to differentiate adjacent vertebrae with high similarity (via cross-image contrastive learning) while simultaneously modeling their sequential attribute (via a temporal unit). Extensive experiments on several datasets demonstrate that our method significantly outperforms other approaches in labeling and segmenting vertebrae. Our code is available at https://github.com/YuZhang-SMU/Vertebrae-Labeling-Segmentation
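The abstract does not give implementation details, but the cross-image contrastive idea behind OCPL can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption, not the authors' code: per-vertebra prototypes are formed by averaging voxel features, and an InfoNCE-style loss treats prototypes of the same vertebra in two images as positives while pushing away the other (including adjacent, highly similar) vertebrae.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def vertebra_prototypes(features, labels, num_classes):
    """Average the feature vectors of all voxels assigned to each vertebra
    class to obtain one prototype embedding per vertebra.

    features: (N, D) voxel feature vectors; labels: (N,) vertebra class ids.
    Returns an (num_classes, D) array of unit-norm prototypes."""
    protos = np.zeros((num_classes, features.shape[-1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
    return l2_normalize(protos)

def cross_image_contrastive_loss(protos_a, protos_b, temperature=0.1):
    """InfoNCE-style loss: the prototype of vertebra c in image A should be
    most similar to the prototype of the *same* vertebra in image B, and
    dissimilar to all other vertebrae (the negatives, including adjacent ones)."""
    sims = protos_a @ protos_b.T / temperature        # (C, C) similarity matrix
    sims = sims - sims.max(axis=1, keepdims=True)     # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives sit on the diagonal
```

A temporal unit (e.g. a recurrent layer over the cranial-to-caudal prototype sequence) would then model the ordinal attribute on top of these prototypes; that part is omitted here.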

Updated: 2024-11-01