Exploring Fine-Grained Representation and Recomposition for Cloth-Changing Person Re-Identification
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2024-06-14, DOI: 10.1109/tifs.2024.3414667
Qizao Wang, Xuelin Qian, Bin Li, Xiangyang Xue, Yanwei Fu

Cloth-changing person Re-IDentification (Re-ID) is a particularly challenging task, suffering from two limitations: inferior discriminative features and limited training samples. Existing methods mainly leverage auxiliary information to facilitate identity-relevant feature learning, including soft-biometric features such as shapes or gaits, and additional clothing labels. However, this information may be unavailable in real-world applications. In this paper, we propose a novel FIne-grained Representation and Recomposition (FIRe2) framework to tackle both limitations without any auxiliary annotation or data. Specifically, we first design a Fine-grained Feature Mining (FFM) module to separately cluster the images of each person. Images with similar so-called fine-grained attributes (e.g., clothes and viewpoints) are encouraged to cluster together. An attribute-aware classification loss is introduced to perform fine-grained learning based on cluster labels, which are not shared among different people, encouraging the model to learn identity-relevant features. Furthermore, to take full advantage of fine-grained attributes, we present a Fine-grained Attribute Recomposition (FAR) module that recomposes image features with different attributes in the latent space, significantly enhancing robust feature learning. Extensive experiments demonstrate that FIRe2 achieves state-of-the-art performance on five widely used cloth-changing person Re-ID benchmarks. The code is available at https://github.com/QizaoWang/FIRe-CCReID.
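To make the two modules concrete, here is a minimal conceptual sketch, not the authors' implementation: per-identity clustering to produce fine-grained pseudo-labels that are never shared across people (in the spirit of FFM), and a simple latent-space interpolation standing in for attribute recomposition (in the spirit of FAR). The function names, the toy k-means, and the interpolation coefficient `alpha` are all illustrative assumptions; the real FAR module recomposes attribute features in a learned latent space rather than linearly mixing whole feature vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_per_identity(features, ids, k=2, iters=10):
    """FFM-like step (simplified): cluster each identity's features
    separately with a tiny k-means, then offset the cluster labels per
    identity so that pseudo-labels are never shared across people."""
    labels = np.zeros(len(features), dtype=int)
    offset = 0
    for pid in np.unique(ids):
        idx = np.where(ids == pid)[0]
        x = features[idx]
        # Initialize centers from random samples of this identity.
        centers = x[rng.choice(len(x), size=min(k, len(x)), replace=False)]
        for _ in range(iters):
            # Assign each sample to its nearest center.
            assign = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for c in range(len(centers)):
                if np.any(assign == c):
                    centers[c] = x[assign == c].mean(0)
        labels[idx] = offset + assign
        offset += len(centers)
    return labels

def recompose(feat_a, feat_b, alpha=0.5):
    """FAR-like step (illustrative only): synthesize a new feature by
    interpolating two samples' features in the latent space."""
    return alpha * feat_a + (1 - alpha) * feat_b

# Toy data: 12 feature vectors for 3 identities (4 images each).
feats = rng.normal(size=(12, 4))
ids = np.repeat([0, 1, 2], 4)
pseudo = cluster_per_identity(feats, ids, k=2)
new_feat = recompose(feats[0], feats[5])
```

Because the label offset advances after every identity, the attribute-aware classification loss in the paper can treat each cluster as its own class, forcing the model to separate identities even when two people wear similar clothes.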

Updated: 2024-08-22