Relation-Guided Adversarial Learning for Data-Free Knowledge Transfer
International Journal of Computer Vision (IF 11.6). Pub Date: 2024-12-13. DOI: 10.1007/s11263-024-02303-4
Yingping Liang, Ying Fu

Data-free knowledge distillation transfers knowledge by recovering training data from a pre-trained model. Despite recent success in promoting global data diversity, the diversity within each class and the similarity among different classes are largely overlooked, resulting in data homogeneity and limited performance. In this paper, we introduce Relation-Guided Adversarial Learning (RGAL), a novel method with triplet losses that addresses the homogeneity problem from two aspects. Specifically, our method promotes both intra-class diversity and inter-class confusion of the generated samples. To this end, we design two phases: an image synthesis phase and a student training phase. In the image synthesis phase, we construct an optimization process that pushes apart samples with the same labels and pulls together samples with different labels, leading to intra-class diversity and inter-class confusion, respectively. Then, in the student training phase, we perform the opposite optimization, which adversarially attempts to reduce the distance between samples of the same class and enlarge the distance between samples of different classes. To mitigate the conflict between seeking high global diversity and maintaining inter-class confusion, we propose a focal weighted sampling strategy that selects negatives in the triplets unevenly within a finite range of distances. RGAL shows significant improvement over previous state-of-the-art methods in accuracy and data efficiency. Moreover, RGAL can be inserted into state-of-the-art methods for various data-free knowledge transfer applications. Experiments on various benchmarks demonstrate the effectiveness and generalizability of our method on various tasks, especially data-free knowledge distillation, data-free quantization, and non-exemplar incremental learning. Our code will be made publicly available to the community.
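
The abstract describes two adversarial triplet objectives (one per phase) plus a focal weighted negative-sampling strategy. Below is a minimal PyTorch sketch of how such relation-guided losses could be structured, inferred only from the abstract: the margin, the distance bounds, the focusing exponent gamma, and all function names are hypothetical illustrations, not the authors' released implementation.

```python
# A minimal sketch of relation-guided triplet objectives, assuming feature
# embeddings are already extracted per sample. All hyperparameters and names
# are hypothetical, chosen for illustration.
import torch
import torch.nn.functional as F


def pairwise_distances(feats: torch.Tensor) -> torch.Tensor:
    """Euclidean distance matrix between all pairs of feature vectors."""
    return torch.cdist(feats, feats, p=2)


def focal_negative_weights(dist, neg_mask, d_min=0.1, d_max=2.0, gamma=2.0):
    """Focal weighted sampling (sketch): negatives are selected unevenly,
    restricted to a finite distance range [d_min, d_max], with closer
    negatives weighted more heavily (one plausible reading of the abstract)."""
    in_range = (dist > d_min) & (dist < d_max) & neg_mask
    # Weight decays with distance; gamma sharpens the focus on near negatives.
    w = ((d_max - dist).clamp(min=0) / (d_max - d_min)) ** gamma
    return torch.where(in_range, w, torch.zeros_like(w))


def relation_triplet_loss(feats, labels, margin=0.5, synthesis_phase=True):
    """Triplet-style relation loss.

    synthesis_phase=True : push same-class samples apart and pull
                           different-class samples together (intra-class
                           diversity + inter-class confusion).
    synthesis_phase=False: the opposite (student training phase), i.e. the
                           usual pull-same / push-different objective.
    """
    dist = pairwise_distances(feats)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    pos_mask, neg_mask = same & ~eye, ~same

    w_neg = focal_negative_weights(dist, neg_mask)
    # Mean positive / weighted-mean negative distance per anchor.
    d_pos = (dist * pos_mask).sum(1) / pos_mask.sum(1).clamp(min=1)
    d_neg = (dist * w_neg).sum(1) / w_neg.sum(1).clamp(min=1e-8)

    if synthesis_phase:
        # Reversed triplet: same-class far apart, different-class close.
        return F.relu(d_neg - d_pos + margin).mean()
    # Student phase: same-class close, different-class far apart.
    return F.relu(d_pos - d_neg + margin).mean()
```

Under this reading, the synthesis phase would minimize relation_triplet_loss(..., synthesis_phase=True) on features of generated images, while the student training phase minimizes the opposite form, mirroring the adversarial optimization described above.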
