Enhancing few-shot lifelong learning through fusion of cross-domain knowledge
Information Fusion ( IF 14.7 ) Pub Date : 2024-10-11 , DOI: 10.1016/j.inffus.2024.102730
Yaoyue Zheng, Xuetao Zhang, Zhiqiang Tian, Shaoyi Du

Humans can continually solve new problems from only a few examples and enhance their learned knowledge by incorporating new examples. Few-shot lifelong learning (FSLL) has been proposed to mimic this human learning ability. However, existing FSLL methods overlook the significance of cross-domain knowledge, and little effort has been made to investigate it. In this paper, we explore the effects of cross-domain knowledge in FSLL and propose a new framework that enhances the model's ability by fusing cross-domain knowledge into the learning process. Moreover, we investigate the impact of both debiased and non-debiased models in the FSLL context for the first time. Compared with previous work, our setting presents a unique challenge: the model must continually learn new knowledge from cross-domain few-shot data and update its existing knowledge by fusing in the new knowledge throughout its lifelong learning process. To address this challenge, the proposed framework focuses on learning and updating while mitigating the well-known issues of forgetting and overfitting. The framework comprises three key components designed for learning cross-domain knowledge: the Debiased Base Learning strategy, Knowledge Acquisition, and Knowledge Update. The superiority of the framework is validated on mini-ImageNet, CIFAR-100, OfficeHome, and Meta-Dataset. Experiments show that the proposed framework performs well in cross-domain situations and also achieves state-of-the-art performance in the non-cross-domain situation.

Updated: 2024-10-11