DSEM-NeRF: Multimodal feature fusion and global–local attention for enhanced 3D scene reconstruction
Information Fusion (IF 14.7) · Pub Date: 2024-10-23 · DOI: 10.1016/j.inffus.2024.102752 · Dong Liu, Zhiyong Wang, Peiyuan Chen
3D scene understanding often suffers from insufficient detail capture and poor adaptability to multi-view changes. To this end, we propose DSEM-NeRF, a NeRF-based 3D scene understanding model that effectively improves the reconstruction quality of complex scenes through multimodal feature fusion and a global–local attention mechanism. DSEM-NeRF extracts multimodal features such as color, depth, and semantics from multi-view 2D images, and accurately captures key areas by dynamically adjusting the importance of each feature. Experimental results show that DSEM-NeRF outperforms many existing models on the LLFF and DTU datasets, with PSNR reaching 20.01, 23.56, and 24.58, and SSIM reaching 0.834. In particular, it shows strong robustness in complex scenes and under multi-view changes, verifying the effectiveness and reliability of the model.
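To make the two mechanisms named in the abstract more concrete, below is a minimal PyTorch sketch of gated multimodal fusion over color, depth, and semantic features, combined with a global–local attention block. The module names, dimensions, gating scheme, and windowing strategy are all illustrative assumptions; the paper's actual DSEM-NeRF architecture is not detailed here and may differ.

```python
# A minimal sketch (assumed design, not the paper's actual implementation):
# gated multimodal feature fusion plus a global-local attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedMultimodalFusion(nn.Module):
    """Fuse color, depth, and semantic features with learned per-modality gates."""

    def __init__(self, dim: int):
        super().__init__()
        # One softmax-normalized gate per modality, conditioned on all three.
        self.gate = nn.Sequential(nn.Linear(3 * dim, 3), nn.Softmax(dim=-1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, color, depth, semantic):
        stacked = torch.stack([color, depth, semantic], dim=-2)       # (..., 3, dim)
        weights = self.gate(torch.cat([color, depth, semantic], -1))  # (..., 3)
        fused = (weights.unsqueeze(-1) * stacked).sum(dim=-2)         # (..., dim)
        return self.proj(fused)


class GlobalLocalAttention(nn.Module):
    """Sum global self-attention over all tokens with window-local attention."""

    def __init__(self, dim: int, heads: int = 4, window: int = 8):
        super().__init__()
        self.window = window
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):  # x: (batch, tokens, dim)
        g, _ = self.global_attn(x, x, x)
        # Local branch: pad the token axis, then attend within fixed windows.
        b, n, d = x.shape
        pad = (-n) % self.window
        xp = F.pad(x, (0, 0, 0, pad))
        w = xp.view(-1, self.window, d)
        l, _ = self.local_attn(w, w, w)
        l = l.reshape(b, -1, d)[:, :n]
        return self.norm(x + g + l)  # residual combination of both branches


if __name__ == "__main__":
    dim = 64
    fuse = GatedMultimodalFusion(dim)
    attn = GlobalLocalAttention(dim)
    # Toy per-point features for color, depth, and semantics (2 views, 100 points).
    color, depth, semantic = (torch.randn(2, 100, dim) for _ in range(3))
    out = attn(fuse(color, depth, semantic))
    print(out.shape)  # torch.Size([2, 100, 64])
```

The gating step mirrors the abstract's "dynamically adjusting the importance of features": each modality's contribution is re-weighted per sample point before the attention stage.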
Updated: 2024-10-23