Multi-objective meta-learning
Artificial Intelligence (IF 5.1) | Pub Date: 2024-07-25 | DOI: 10.1016/j.artint.2024.104184
Feiyang Ye, Baijiong Lin, Zhixiong Yue, Yu Zhang, Ivor W. Tsang

Meta-learning has arisen as a powerful tool for many machine learning problems. Since multiple factors must be considered when designing learning models for real-world applications, meta-learning with multiple objectives has recently attracted much attention. However, existing works either linearly combine multiple objectives into one objective or adopt evolutionary algorithms to handle them: the former approach incurs a high computational cost to tune the combination coefficients, while the latter is computationally heavy and cannot be integrated into gradient-based optimization. To alleviate these limitations, in this paper we propose a generic gradient-based Multi-Objective Meta-Learning (MOML) framework with applications to many machine learning problems. Specifically, the MOML framework formulates the objective function of meta-learning with multiple objectives as a Multi-Objective Bi-Level optimization Problem (MOBLP), in which the upper-level subproblem solves several possibly conflicting objectives for the meta-learner. Different from existing works, we propose a gradient-based algorithm to solve the MOBLP. Specifically, we devise the first gradient-based optimization algorithm that alternately solves the lower-level and upper-level subproblems via gradient descent and a gradient-based multi-objective optimization method, respectively. Theoretically, we prove the convergence property and provide a non-asymptotic analysis of the proposed algorithm. Empirically, extensive experiments justify our theoretical results and demonstrate the superiority of the proposed MOML framework on different learning problems, including few-shot learning, domain adaptation, multi-task learning, neural architecture search, and reinforcement learning. The source code of MOML is available at .
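To make the alternating scheme described in the abstract concrete, below is a minimal NumPy sketch on a toy quadratic problem. It alternates a few gradient-descent steps on a lower-level variable w with one update of a meta-parameter alpha along a common descent direction for two upper-level objectives, obtained here by an MGDA-style min-norm combination of their gradients. The toy objectives, the choice of min-norm solver, the step sizes, and all variable names are illustrative assumptions for exposition, not the paper's actual formulation or experiments.

    # A minimal sketch of alternating bi-level optimization with a
    # multi-objective upper level. Toy problem; all names are assumptions.
    import numpy as np

    def lower_grad(w, alpha):
        # Lower level (toy choice): f(w, alpha) = 0.5 * ||w - alpha||^2
        return w - alpha

    def upper_grads(w, alpha):
        # Two possibly conflicting upper-level objectives, evaluated through
        # the (approximately) optimal lower-level solution w:
        #   F1 = 0.5 * ||w - a||^2,  F2 = 0.5 * ||w - b||^2  (toy targets)
        a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
        # For this lower level, dw*/dalpha is the identity at the optimum,
        # so the hypergradient of each F_i w.r.t. alpha reduces to dF_i/dw.
        return w - a, w - b

    def min_norm_coeff(g1, g2):
        # MGDA-style min-norm combination for two gradients: the closed-form
        # gamma minimizing ||gamma * g1 + (1 - gamma) * g2||^2, clipped to [0, 1]
        diff = g1 - g2
        denom = diff @ diff
        if denom < 1e-12:
            return 0.5
        return float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))

    alpha = np.zeros(2)
    w = np.zeros(2)
    for step in range(200):
        # (1) lower-level subproblem: a few gradient-descent steps on w
        for _ in range(5):
            w -= 0.5 * lower_grad(w, alpha)
        # (2) upper-level subproblem: update alpha along a common descent
        # direction for all upper-level objectives
        g1, g2 = upper_grads(w, alpha)
        gamma = min_norm_coeff(g1, g2)
        alpha -= 0.1 * (gamma * g1 + (1.0 - gamma) * g2)

    print("alpha after training:", alpha)

On this symmetric toy problem the updates drive alpha toward (0.5, 0.5), a Pareto-stationary point where no direction decreases both upper-level objectives at once, which is the kind of solution a gradient-based multi-objective method targets.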
