Expanding and Refining Hybrid Compressors for Efficient Object Re-Identification
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 6-12-2024, DOI: 10.1109/tip.2024.3410684
Yi Xie, Hanxiao Wu, Jianqing Zhu, Huanqiang Zeng, Jing Zhang

Recent object re-identification (Re-ID) methods gain high efficiency via lightweight student models trained by knowledge distillation (KD). However, the huge architectural difference between lightweight students and heavy teachers makes it difficult for students to receive and understand the teachers' knowledge, thus losing some accuracy. To this end, we propose a refiner-expander-refiner (RER) structure to enlarge a student's representational capacity and prune the student's complexity. The expander is a multi-branch convolutional layer that expands the student's representational capacity to comprehensively understand a teacher's knowledge, and it requires no feature-dimensional adapter, thereby avoiding knowledge distortions. The two refiners are $1\times 1$ convolutional layers that prune the input and output channels of the expander. In addition, to alleviate the competition between accuracy-related and pruning-related gradients, we design a common consensus gradient resetting (CCGR) method, which discards unimportant channels according to the intersection of each sample's unimportant-channel judgments. Finally, the trained RER can be simplified into a slim convolutional layer via re-parameterization to speed up inference. As a result, we propose an expanding and refining hybrid compressing (ERHC) method. Extensive experiments show that our ERHC offers superior inference speed and accuracy; e.g., on the VeRi-776 dataset, with ResNet101 as a teacher, ERHC saves 75.33% of model parameters (MP) and 74.29% of floating-point operations (FLOPs) without sacrificing accuracy.
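
The following is a minimal PyTorch sketch of the refiner-expander-refiner (RER) idea as described in the abstract: a $1\times 1$ refiner on the input channels, a multi-branch convolutional expander, and a $1\times 1$ refiner on the output channels. The branch count, kernel sizes, and channel widths here are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative RER sketch (assumed configuration, not the paper's exact design).
import torch
import torch.nn as nn

class RER(nn.Module):
    def __init__(self, in_channels, expanded_channels, out_channels, num_branches=3):
        super().__init__()
        # Refiner 1: 1x1 convolution pruning/projecting the input channels.
        self.refiner_in = nn.Conv2d(in_channels, expanded_channels, kernel_size=1, bias=False)
        # Expander: multi-branch convolutional layer enlarging representational
        # capacity (here, parallel 3x3 branches whose outputs are summed).
        self.branches = nn.ModuleList([
            nn.Conv2d(expanded_channels, expanded_channels, kernel_size=3, padding=1, bias=False)
            for _ in range(num_branches)
        ])
        # Refiner 2: 1x1 convolution pruning the expander's output channels.
        self.refiner_out = nn.Conv2d(expanded_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        x = self.refiner_in(x)
        x = sum(branch(x) for branch in self.branches)
        return self.refiner_out(x)

# Example usage: a feature map passed through the RER block.
features = torch.randn(2, 64, 32, 32)
out = RER(in_channels=64, expanded_channels=256, out_channels=64)(features)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

Because the stack is purely linear (convolutions without intermediate nonlinearities in this sketch), it can be merged after training into a single slim convolution for fast inference, which is the re-parameterization step the abstract mentions.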

Updated: 2024-08-19