Conti-Fuse: A novel continuous decomposition-based fusion framework for infrared and visible images
Information Fusion ( IF 14.7 ) Pub Date : 2024-12-02 , DOI: 10.1016/j.inffus.2024.102839 Hui Li, Haolong Ma, Chunyang Cheng, Zhongwei Shen, Xiaoning Song, Xiao-Jun Wu
To better explore inter-modal and intra-modal relations, the concept of decomposition plays a crucial role, even in deep-learning fusion frameworks. However, previous decomposition strategies (base & detail, or low-frequency & high-frequency) are too coarse to represent both the common features and the unique features of the source modalities, which degrades the quality of the fused images. These existing strategies treat the relations as a binary system, which may not suit complex generation tasks such as image fusion. To address this issue, a continuous decomposition-based fusion framework (Conti-Fuse) is proposed. Conti-Fuse treats conventional decomposition results as a few samples along the feature variation trajectory of the source images, and extends this concept to a more general state to achieve continuous decomposition. This novel continuous decomposition strategy enhances the representation of inter-modal complementary information by increasing the number of decomposition samples, thereby reducing the loss of critical information. To facilitate this process, a continuous decomposition module (CDM) is introduced to decompose the input into a series of continuous components. The core module of the CDM, the State Transformer (ST), efficiently captures complementary information from the source modalities. Furthermore, a novel decomposition loss function is designed that ensures a smooth progression of the decomposition process while keeping time complexity linear in the number of decomposition samples. Extensive experiments demonstrate that the proposed Conti-Fuse achieves superior performance compared to state-of-the-art fusion methods.
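The abstract's core idea — replacing a binary (two-component) decomposition with many samples along a trajectory between the two modal features, plus a loss that costs linear time in the sample count — can be illustrated with a minimal sketch. Note this is only an illustration of the concept under simplifying assumptions: the paper's CDM and State Transformer are learned modules whose details are not given here, so plain linear interpolation stands in for the learned decomposition, and the function names (`continuous_decompose`, `smoothness_loss`) are hypothetical.

```python
import numpy as np

def continuous_decompose(feat_a, feat_b, k):
    """Hypothetical sketch of 'continuous decomposition': instead of
    two coarse components (e.g. base & detail), sample k components
    along a trajectory between the two modality features. Here the
    trajectory is a simple linear interpolation; in Conti-Fuse it is
    produced by the learned CDM/ST modules."""
    ts = np.linspace(0.0, 1.0, k)
    return [(1.0 - t) * feat_a + t * feat_b for t in ts]

def smoothness_loss(components):
    """Illustrative decomposition loss: penalize large jumps between
    consecutive components so the decomposition progresses smoothly.
    One pass over adjacent pairs, so cost grows linearly with k."""
    return sum(
        float(np.mean((c2 - c1) ** 2))
        for c1, c2 in zip(components, components[1:])
    )
```

Increasing `k` densifies the sampling of the trajectory (the binary strategies correspond to `k = 2`), while the pairwise loss keeps the extra cost proportional to `k`, matching the linear-complexity claim.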
Updated: 2024-12-02