Large-scale multi-center CT and MRI segmentation of pancreas with deep learning
Medical Image Analysis (IF 10.7), Pub Date: 2024-11-08, DOI: 10.1016/j.media.2024.103382
Zheyuan Zhang, Elif Keles, Gorkem Durak, Yavuz Taktak, Onkar Susladkar, Vandan Gorade, Debesh Jha, Asli C. Ormeci, Alpay Medetalibeyoglu, Lanhong Yao, Bin Wang, Ilkin Sevgi Isler, Linkai Peng, Hongyi Pan, Camila Lopes Vendrami, Amir Bourhani, Yury Velichko, Boqing Gong, Concetto Spampinato, Ayis Pyrros, Pallavi Tiwari, Derk C.F. Klatte, Megan Engels, Sanne Hoogenboom, Candice W. Bolan, Emil Agarunov, Nassier Harfouch, Chenchan Huang, Marco J. Bruno, Ivo Schoots, Rajesh N. Keswani, Frank H. Miller, Tamas Gonda, Cemal Yazici, Temel Tirkes, Baris Turkbey, Michael B. Wallace, Ulas Bagci

Automated volumetric segmentation of the pancreas on cross-sectional imaging is needed for the diagnosis and follow-up of pancreatic diseases. While CT-based pancreatic segmentation is more established, MRI-based segmentation methods are understudied, largely due to a lack of publicly available datasets, benchmarking research efforts, and domain-specific deep learning methods. In this retrospective study, we collected a large dataset (767 scans from 499 participants) of T1-weighted (T1W) and T2-weighted (T2W) abdominal MRI series from five centers between March 2004 and November 2022. We also collected CT scans of 1,350 patients from publicly available sources for benchmarking purposes. We introduced a new pancreas segmentation method, called PanSegNet, combining the strengths of nnUNet and a Transformer network with a new linear attention module enabling volumetric computation. We tested PanSegNet’s accuracy in cross-modality (a total of 2,117 scans) and cross-center settings with the Dice and Hausdorff distance (HD95) evaluation metrics. We used Cohen’s kappa statistic for intra- and inter-rater agreement evaluation, and paired t-tests for volume and Dice comparisons. For segmentation accuracy, we achieved Dice coefficients of 88.3% (±7.2%, at case level) with CT, 85.0% (±7.9%) with T1W MRI, and 86.3% (±6.4%) with T2W MRI. There was a high correlation for pancreas volume prediction, with R² of 0.91, 0.84, and 0.85 for CT, T1W, and T2W, respectively. We found moderate inter-observer agreement (0.624 and 0.638 for T1W and T2W MRI, respectively) and high intra-observer agreement scores. All MRI data are made available at https://osf.io/kysnj/. Our source code is available at https://github.com/NUBagciLab/PaNSegNet.
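The abstract reports three quantities: the Dice coefficient, the 95th-percentile Hausdorff distance (HD95), and pancreas volume (for the R² correlation analysis). As a minimal illustrative sketch of the conventional definitions of these metrics (not the authors' evaluation code, which is available in the linked GitHub repository), they can be computed from binary segmentation masks with NumPy/SciPy:

```python
# Illustrative sketch (not the authors' code): conventional definitions of
# the metrics reported in the abstract, for 2D/3D binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice_coefficient(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks overlap perfectly
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance between mask surfaces,
    in voxel units (scale by voxel spacing for mm if voxels are isotropic)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    s_pred = pred & ~binary_erosion(pred)  # boundary voxels of prediction
    s_gt = gt & ~binary_erosion(gt)        # boundary voxels of ground truth
    if not s_pred.any() or not s_gt.any():
        return float("inf")  # no surface to compare against
    # Distance from each boundary voxel to the nearest boundary voxel of
    # the other mask, taken in both directions for symmetry.
    d_pred_to_gt = distance_transform_edt(~s_gt)[s_pred]
    d_gt_to_pred = distance_transform_edt(~s_pred)[s_gt]
    return float(np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95))


def volume_ml(mask, spacing_mm):
    """Segmented volume in millilitres, given per-axis voxel spacing in mm."""
    return float(mask.sum() * np.prod(spacing_mm) / 1000.0)
```

Case-level volumes computed this way from predicted and reference masks are what a volume-correlation analysis such as the reported R² values would compare.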
