I³Net: Inter-Intra-Slice Interpolation Network for Medical Slice Synthesis
IEEE Transactions on Medical Imaging (IF 8.9) · Pub Date: 2024-04-26 · DOI: 10.1109/tmi.2024.3394033 · Haofei Song, Xintian Mao, Jing Yu, Qingli Li, Yan Wang
Medical imaging is limited by acquisition time and scanning equipment. CT and MR volumes reconstructed with thicker slices are anisotropic, with high in-plane resolution and low through-plane resolution. We reveal an intriguing phenomenon: because of this property of the data, performing slice-wise interpolation from the axial view can yield greater benefits than performing super-resolution from other views. Based on this observation, we propose an Inter-Intra-slice Interpolation Network ($\text{I}^{3}$Net), which fully exploits the information in the high in-plane resolution to compensate for the low through-plane resolution. The through-plane branch supplements the limited information available at low through-plane resolution with information from the high in-plane resolution, enabling continual and diverse feature learning. The in-plane branch transforms features into the frequency domain and enforces an equal learning opportunity for all frequency bands in a global-context learning paradigm. We further propose a cross-view block to exploit information from all three views online. Extensive experiments on two public datasets demonstrate the effectiveness of $\text{I}^{3}$Net, which noticeably outperforms state-of-the-art super-resolution, video frame interpolation, and slice interpolation methods by a large margin. We achieve 43.90 dB PSNR, at least a 1.14 dB improvement, under an upscale factor of $\times 2$ on the MSD dataset, with faster inference. Code is available at https://github.com/DeepMed-Lab-ECNU/Medical-Image-Reconstruction.
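To make the task concrete, here is a minimal baseline sketch of the slice-synthesis setting the abstract describes: an anisotropic volume (few thick slices, high in-plane resolution) is upsampled along the through-plane axis by plain linear interpolation of new axial slices. This is the kind of naive interpolation a learned method like I³Net is designed to outperform; the function name, array shapes, and scale are illustrative assumptions, not taken from the authors' implementation.

```python
import numpy as np

def interpolate_slices(vol, scale=2):
    """Linearly interpolate new slices along axis 0 (the through-plane axis).

    vol: array of shape (depth, height, width), anisotropic: small depth,
    large in-plane size. Returns scale*(depth-1)+1 slices so that the
    original slices are preserved at even indices (for scale=2).
    """
    d, h, w = vol.shape
    # Target slice positions in the original index coordinates.
    z_out = np.linspace(0, d - 1, scale * (d - 1) + 1)
    out = np.empty((len(z_out), h, w), dtype=vol.dtype)
    for i, z in enumerate(z_out):
        lo = int(np.floor(z))          # nearest existing slice below
        hi = min(lo + 1, d - 1)        # nearest existing slice above
        t = z - lo                     # fractional position between them
        out[i] = (1 - t) * vol[lo] + t * vol[hi]
    return out

vol = np.random.rand(8, 64, 64)        # 8 thick slices, 64x64 in-plane
up = interpolate_slices(vol, scale=2)
print(up.shape)                        # (15, 64, 64)
```

At even output indices the original slices are reproduced exactly, and each inserted slice is the average of its two neighbors; a learned interpolator instead predicts the missing anatomy from in-plane context rather than blending adjacent slices.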
Updated: 2024-04-26