International Journal of Computer Vision (IF 11.6) Pub Date: 2024-12-02, DOI: 10.1007/s11263-024-02256-8 Haowen Bai, Zixiang Zhao, Jiangshe Zhang, Yichen Wu, Lilun Deng, Yukun Cui, Baisong Jiang, Shuang Xu
Image fusion aims to combine information from multiple source images into a single image with more comprehensive informational content. Deep learning-based image fusion algorithms face significant challenges, including the lack of a definitive ground truth and a corresponding distance measure. Moreover, current manually defined loss functions limit the model's flexibility and generalizability across fusion tasks. To address these limitations, we propose ReFusion, a unified meta-learning-based image fusion framework that dynamically optimizes the fusion loss for various tasks through source image reconstruction. Unlike existing methods, ReFusion employs a parameterized loss function that allows the training framework to adapt dynamically to the specific fusion scenario and task. ReFusion consists of three key components: a fusion module, a source reconstruction module, and a loss proposal module. We employ a meta-learning strategy to train the loss proposal module using the reconstruction loss. This strategy forces the fused image to be more conducive to reconstructing the source images, allowing the loss proposal module to generate an adaptive fusion loss that preserves the optimal information from the source images. The update of the fusion module relies on the learnable fusion loss proposed by the loss proposal module. The three modules are updated alternately, enhancing each other to optimize the fusion loss for different tasks and consistently achieve satisfactory results. Extensive experiments demonstrate that ReFusion adapts to a variety of tasks, including infrared-visible, medical, multi-focus, and multi-exposure image fusion. The code is available at https://github.com/HaowenBai/ReFusion.
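The alternating update between the loss proposal module and the fusion module can be illustrated with a toy, scalar sketch. Everything below is an illustrative assumption, not the authors' implementation: the "modules" are single parameters, the meta step uses a one-step look-ahead (update the loss-proposal parameter so that one gradient step on the proposed fusion loss lowers a simple reconstruction surrogate), and gradients are numerical for brevity.

```python
# Toy sketch (assumption: not the authors' code) of ReFusion-style alternation:
# a loss proposal parameter `alpha` is meta-updated via a reconstruction
# objective, and the fusion parameter `w` is then updated with the proposed loss.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random(16)  # flattened toy "source image" 1
B = rng.random(16)  # flattened toy "source image" 2

w = 2.0       # fusion module: fused = sigmoid(w) * A + (1 - sigmoid(w)) * B
alpha = 0.0   # loss proposal module: weighting between the two fidelity terms
lr = 1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(w):
    s = sigmoid(w)
    return s * A + (1.0 - s) * B

def fusion_loss(w, alpha):
    # parameterized fusion loss proposed by the loss proposal module
    F = fuse(w)
    a = sigmoid(alpha)
    return a * np.mean((F - A) ** 2) + (1.0 - a) * np.mean((F - B) ** 2)

def reconstruction_loss(w):
    # toy surrogate for the source reconstruction module: the fused image
    # should retain enough information to recover both sources
    F = fuse(w)
    return np.mean((F - A) ** 2) + np.mean((F - B) ** 2)

def num_grad(f, x, eps=1e-5):
    # central-difference numerical gradient (keeps the sketch dependency-free)
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

init_loss = reconstruction_loss(w)
for _ in range(500):
    # (1) meta step: move alpha so that one fusion-loss gradient step on w
    #     would reduce the reconstruction loss (one-step look-ahead)
    def meta_obj(a):
        w_try = w - lr * num_grad(lambda x: fusion_loss(x, a), w)
        return reconstruction_loss(w_try)
    alpha -= lr * num_grad(meta_obj, alpha)
    # (2) fusion step: update w with the currently proposed fusion loss
    w -= lr * num_grad(lambda x: fusion_loss(x, alpha), w)

final_loss = reconstruction_loss(w)
```

In the paper's full framework the reconstruction surrogate is replaced by learned reconstruction networks, and the meta-gradient flows through the fusion network's update rather than a scalar look-ahead; the sketch only shows the alternating schedule in which the proposed loss and the fusion parameters improve each other.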
ReFusion: Learning Image Fusion from Reconstruction with Learnable Loss via Meta-Learning