Adaptive Middle Modality Alignment Learning for Visible-Infrared Person Re-identification
International Journal of Computer Vision (IF 11.6) Pub Date: 2024-11-09, DOI: 10.1007/s11263-024-02276-4
Yukang Zhang, Yan Yan, Yang Lu, Hanzi Wang

Visible-infrared person re-identification (VIReID) has attracted increasing attention due to the demands of 24-hour intelligent surveillance systems. One of the major challenges in this task is the modality discrepancy between visible (VIS) and infrared (NIR) images. Most conventional methods design complex networks or generative models to mitigate the cross-modality discrepancy, while ignoring the fact that the modality gap varies across different VIS and NIR images. Unlike existing methods, in this paper we propose an Adaptive Middle-modality Alignment Learning (AMML) method, which can effectively reduce the modality discrepancy via an adaptive middle-modality learning strategy at both the image level and the feature level. The proposed AMML method enjoys several merits. First, we propose an Adaptive Middle-modality Generator (AMG) module to reduce the modality discrepancy between VIS and NIR images at the image level; it effectively projects VIS and NIR images into a unified middle-modality image (UMMI) space to adaptively generate middle-modality (M-modality) images. Second, we propose a feature-level Adaptive Distribution Alignment (ADA) loss that forces the distributions of the VIS and NIR features to adaptively align with the distribution of the M-modality features. Moreover, we propose a novel Center-based Diverse Distribution Learning (CDDL) loss, which effectively learns diverse cross-modality knowledge from the different modalities while reducing the modality discrepancy between the VIS and NIR modalities. Extensive experiments on three challenging VIReID datasets demonstrate the superiority of the proposed AMML method over state-of-the-art methods. Most remarkably, our method achieves 77.8% Rank-1 accuracy and 74.8% mAP on the SYSU-MM01 dataset in the all-search mode, and 86.6% Rank-1 accuracy and 88.3% mAP in the indoor-search mode. The code is released at: https://github.com/ZYK100/MMN.
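The abstract describes two core ideas: an image-level generator that adaptively mixes VIS and NIR inputs into M-modality images, and a feature-level loss that pulls the VIS and NIR feature distributions toward the M-modality distribution. The sketch below illustrates these two ideas in minimal PyTorch; the names (`AdaptiveMiddleGenerator`, `distribution_alignment_loss`) and the simple moment-matching form of the loss are illustrative assumptions, not the authors' actual AMG module or ADA loss, whose implementation is available at the linked repository.

```python
# Hypothetical sketch of the middle-modality idea described in the abstract.
# All names and design details here are assumptions for illustration only;
# see https://github.com/ZYK100/MMN for the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMiddleGenerator(nn.Module):
    """Projects VIS and NIR images into a shared space and mixes them with
    learned, input-dependent weights to form middle-modality images."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.proj_vis = nn.Conv2d(channels, channels, kernel_size=1)
        self.proj_nir = nn.Conv2d(channels, channels, kernel_size=1)
        # Predict a per-image, per-channel mixing weight from pooled statistics.
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, vis: torch.Tensor, nir: torch.Tensor) -> torch.Tensor:
        v, n = self.proj_vis(vis), self.proj_nir(nir)
        stats = torch.cat([v.mean(dim=(2, 3)), n.mean(dim=(2, 3))], dim=1)
        w = self.gate(stats).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return w * v + (1.0 - w) * n  # adaptively mixed M-modality image

def distribution_alignment_loss(f_vis, f_nir, f_mid):
    """Moment-matching stand-in for a feature-level alignment loss: pulls the
    mean/std of VIS and NIR features toward those of the M-modality features."""
    def moments(f):
        return f.mean(dim=0), f.std(dim=0)
    m_v, s_v = moments(f_vis)
    m_n, s_n = moments(f_nir)
    m_m, s_m = moments(f_mid)
    return (F.mse_loss(m_v, m_m) + F.mse_loss(s_v, s_m)
            + F.mse_loss(m_n, m_m) + F.mse_loss(s_n, s_m))

# Example with random stand-ins for paired VIS/NIR batches and backbone features:
# vis, nir = torch.randn(8, 3, 256, 128), torch.randn(8, 3, 256, 128)
# mid = AdaptiveMiddleGenerator()(vis, nir)
# loss = distribution_alignment_loss(torch.randn(8, 512), torch.randn(8, 512),
#                                    torch.randn(8, 512))
```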



Updated: 2024-11-09