Deep unfolding network with spatial alignment for multi-modal MRI reconstruction
Medical Image Analysis (IF 10.7), Pub Date: 2024-08-31, DOI: 10.1016/j.media.2024.103331
Hao Zhang 1, Qi Wang 1, Jun Shi 2, Shihui Ying 3, Zhijie Wen 1

Multi-modal Magnetic Resonance Imaging (MRI) offers complementary diagnostic information, but some modalities are limited by long scanning times. To accelerate the overall acquisition, reconstructing one modality from highly under-sampled k-space data with the aid of another fully-sampled reference modality is an efficient solution. However, misalignment between modalities, which is common in clinical practice, can degrade reconstruction quality. Existing deep learning-based methods that account for inter-modality misalignment perform better, but still share two main limitations: (1) the spatial alignment task is not adaptively integrated with the reconstruction process, resulting in insufficient complementarity between the two tasks; (2) the overall framework has weak interpretability. In this paper, we construct a novel Deep Unfolding Network with Spatial Alignment, termed DUN-SA, to appropriately embed the spatial alignment task into the reconstruction process. Concretely, we derive a novel joint alignment-reconstruction model with a specially designed aligned cross-modal prior term. By relaxing the model into cross-modal spatial alignment and multi-modal reconstruction sub-tasks, we propose an effective algorithm that solves the model in an alternating fashion. We then unfold the iterative stages of the proposed algorithm and design corresponding network modules, yielding an interpretable DUN-SA. Through end-to-end training, we effectively compensate for spatial misalignment using only the reconstruction loss, and exploit the progressively aligned reference modality to provide an inter-modality prior that improves reconstruction of the target modality. Comprehensive experiments on four real datasets demonstrate that our method achieves superior reconstruction performance compared with state-of-the-art methods.
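The abstract does not spell out the joint model, so the following is only a hedged sketch of the kind of objective such a joint alignment-reconstruction formulation typically takes; every symbol here (x, y, \mathcal{A}, x_{\mathrm{ref}}, \phi, R, Q, \lambda, \mu) is introduced purely for illustration and is not taken from the paper:

$$
\min_{x,\,\phi}\;\; \tfrac{1}{2}\,\bigl\| \mathcal{A}\,x - y \bigr\|_2^2 \;+\; \lambda\, R\!\bigl(x,\ x_{\mathrm{ref}} \circ \phi\bigr) \;+\; \mu\, Q(\phi),
$$

where x is the target-modality image, y the under-sampled k-space data, \mathcal{A} the sub-sampled Fourier encoding operator, x_{\mathrm{ref}} the fully-sampled reference modality, \phi a deformation field warping the reference, R an aligned cross-modal prior coupling x with the warped reference, and Q a regularizer (e.g., smoothness) on \phi. Under this reading, alternating minimization over \phi (a registration sub-problem) and x (a reconstruction sub-problem) yields the iterative scheme whose stages are unfolded into the network modules described above.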
