DuDoCFNet: Dual-Domain Coarse-to-Fine Progressive Network for Simultaneous Denoising, Limited-View Reconstruction, and Attenuation Correction of Cardiac SPECT
IEEE Transactions on Medical Imaging (IF 8.9). Pub Date: 2024-04-05. DOI: 10.1109/tmi.2024.3385650
Xiongchao Chen 1, Bo Zhou 1, Xueqi Guo 1, Huidong Xie 1, Qiong Liu 1, James S. Duncan 2, Albert J. Sinusas 3, Chi Liu 2
Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of coronary artery diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-view (LV) SPECT, such as the latest GE MyoSPECT ES system, enables accelerated scanning and reduces hardware expenses but degrades reconstruction accuracy. Additionally, Computed Tomography (CT) is commonly used to derive attenuation maps ($\mu$-maps) for attenuation correction (AC) of cardiac SPECT, but it introduces additional radiation exposure and SPECT-CT misalignments. Although various methods have been developed that focus solely on LD denoising, LV reconstruction, or CT-free AC in SPECT, simultaneously addressing these tasks remains challenging and under-explored. Furthermore, it is essential to explore the potential of fusing cross-domain and cross-modality information across these interrelated tasks to further enhance the accuracy of each task. Thus, we propose a Dual-Domain Coarse-to-Fine Progressive Network (DuDoCFNet), a multi-task learning method for simultaneous LD denoising, LV reconstruction, and CT-free $\mu$-map generation of cardiac SPECT. Paired dual-domain networks in DuDoCFNet are cascaded using a multi-layer fusion mechanism for cross-domain and cross-modality feature fusion. Two-stage progressive learning strategies are applied in both the projection and image domains to achieve coarse-to-fine estimations of SPECT projections and CT-derived $\mu$-maps. Our experiments demonstrate DuDoCFNet's superior accuracy in estimating projections, generating $\mu$-maps, and reconstructing AC images compared to existing single- or multi-task learning methods, under various iterations and LD levels. The source code of this work is available at https://github.com/XiongchaoChen/DuDoCFNet-MultiTask .
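The cascaded dual-domain, coarse-to-fine idea described above can be sketched in a few lines. This is a hypothetical toy illustration, not the authors' implementation: the real DuDoCFNet uses deep convolutional networks and a learned multi-layer fusion mechanism, whereas here each "network" is a trivial smoothing operator and cross-domain fusion is a simple average. All function names, shapes, and weights are illustrative assumptions.

```python
import numpy as np

def toy_network(x, residual_weight=0.5):
    """Stand-in for a learned refinement module (a real CNN in the paper):
    pulls the input toward its mean as a crude 'denoising' step."""
    return x + residual_weight * (x.mean() - x)

def dudo_cfnet_sketch(ld_lv_projection, n_stages=2):
    """Two-stage coarse-to-fine estimation across paired domains.

    Stage 1 produces coarse estimates of the full-view projection and the
    mu-map; stage 2 refines them, with each stage fusing features from the
    other domain before the image-domain update.
    """
    proj_est = ld_lv_projection.copy()           # projection-domain estimate
    mu_est = np.zeros_like(ld_lv_projection)     # image-domain mu-map estimate
    for _ in range(n_stages):
        # Projection-domain module: denoise and fill limited-view gaps
        proj_est = toy_network(proj_est)
        # Cross-domain fusion: projection features inform the image branch
        fused = 0.5 * (proj_est + mu_est)
        # Image-domain module: refine the mu-map from the fused features
        mu_est = toy_network(fused)
    return proj_est, mu_est

proj, mu = dudo_cfnet_sketch(np.random.rand(32, 32).astype(np.float32))
```

Each pass through the loop corresponds to one coarse-to-fine stage; the output of the earlier stage seeds the later one, mirroring the progressive refinement the abstract describes.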
