Multi-modal co-learning with attention mechanism for head and neck tumor segmentation on 18FDG PET-CT
EJNMMI Physics (IF 3.0), Pub Date: 2024-07-25, DOI: 10.1186/s40658-024-00670-y
Min Jeong Cho, Donghwi Hwang, Si Young Yie, Jae Sung Lee

Effective radiation therapy requires accurate segmentation of head and neck cancer, one of the most common types of cancer. With the advancement of deep learning, various methods have been proposed that use positron emission tomography-computed tomography (PET-CT) to obtain complementary information. However, these approaches are computationally expensive because feature extraction and fusion are handled by separate functions, and they do not exploit the high sensitivity of PET. We propose a new deep learning-based approach to alleviate these challenges: a tumor region attention module that fully exploits the high sensitivity of PET, and a network that learns the correlation between PET and CT features using squeeze-and-excitation normalization (SE Norm) without separating the feature extraction and fusion functions. In addition, we introduce multi-scale context fusion, which exploits contextual information from different scales. The HECKTOR 2021 challenge dataset was used for training and testing. The proposed model outperformed state-of-the-art medical image segmentation models; in particular, the Dice similarity coefficient increased by 8.78% compared with U-Net. The proposed network segmented the complex shape of the tumor better than state-of-the-art methods, accurately distinguishing between tumor and non-tumor regions.
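The abstract does not give implementation details, but the tumor region attention idea can be illustrated with a short sketch: a spatial attention map is derived from the PET branch, where tumors are highly conspicuous, and used to gate the jointly learned PET-CT features. The PyTorch module below is a hypothetical illustration of this pattern; the class name TumorRegionAttention, the 1x1x1 convolution layout, and the residual gating form are assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class TumorRegionAttention(nn.Module):
    """Hypothetical sketch: gate fused PET-CT features with a
    PET-derived spatial attention map, reflecting the idea that
    PET's high sensitivity highlights candidate tumor regions."""

    def __init__(self, pet_channels: int, fused_channels: int):
        super().__init__()
        # A 1x1x1 convolution collapses PET features to a single
        # per-voxel attention value, squashed to (0, 1) by a sigmoid.
        self.attn = nn.Sequential(
            nn.Conv3d(pet_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, pet_feat: torch.Tensor, fused_feat: torch.Tensor) -> torch.Tensor:
        # pet_feat:   (N, C_pet,  D, H, W) features from the PET branch
        # fused_feat: (N, C_fuse, D, H, W) jointly learned PET-CT features
        attn_map = self.attn(pet_feat)  # (N, 1, D, H, W)
        # Residual gating keeps the original features while
        # emphasizing voxels the PET branch marks as tumor-like.
        return fused_feat * (1.0 + attn_map)

# Example usage on random tensors (batch of 1, a 16x16x16 patch):
module = TumorRegionAttention(pet_channels=32, fused_channels=64)
out = module(torch.randn(1, 32, 16, 16, 16), torch.randn(1, 64, 16, 16, 16))
print(out.shape)  # torch.Size([1, 64, 16, 16, 16])
```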

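Squeeze-and-excitation normalization (SE Norm) is referenced by name only. A minimal sketch of the general SE Norm idea follows, assuming instance normalization whose per-channel scale and shift are predicted per sample by a squeeze-and-excitation gate over globally pooled features; the layer widths and activations here are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SENorm(nn.Module):
    """Sketch of squeeze-and-excitation normalization (SE Norm):
    instance-normalize the input, then rescale and shift it with
    channel-wise parameters predicted from globally pooled features."""

    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.norm = nn.InstanceNorm3d(channels, affine=False)
        hidden = max(channels // reduction, 1)
        # "Squeeze" (global pooling) happens in forward();
        # these two gates perform the "excitation" step.
        self.gamma_fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid(),
        )
        self.beta_fc = nn.Sequential(
            nn.Linear(channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, D, H, W) feature map from the shared PET-CT encoder
        squeezed = x.mean(dim=(2, 3, 4))                     # (N, C)
        gamma = self.gamma_fc(squeezed)[..., None, None, None]
        beta = self.beta_fc(squeezed)[..., None, None, None]
        return gamma * self.norm(x) + beta
```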
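The reported gain is measured with the Dice similarity coefficient (DSC), which scores a predicted binary mask against the ground truth as twice the overlap divided by the total foreground volume. For reference, a short illustrative implementation (the function name and smoothing term are my own choices):

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient for binary masks:
    DSC = 2 * |P intersect G| / (|P| + |G|)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```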
Updated: 2024-07-25