MODNO: Multi-Operator learning with Distributed Neural Operators
Computer Methods in Applied Mechanics and Engineering ( IF 6.9 ) Pub Date : 2024-07-30 , DOI: 10.1016/j.cma.2024.117229
Zecheng Zhang

The study of operator learning involves the use of neural networks to approximate operators. Traditionally, the focus has been on single-operator learning (SOL). However, recent advances have rapidly expanded this to the approximation of multiple operators using foundation models equipped with millions or billions of trainable parameters, giving rise to the research area of multi-operator learning (MOL). In this paper, we present a novel distributed training approach aimed at enabling a single neural operator with significantly fewer parameters to effectively tackle multi-operator learning challenges, without incurring additional average cost. Our method is applicable to various neural operators, such as the Deep Operator Neural Networks (DON). The core idea is to independently learn the output basis functions for each operator using its dedicated data, while centralizing the learning of the input-function encoding shared by all operators using the entire dataset. Through a systematic study of five numerical examples, we compare the accuracy and cost of training a single neural operator for each operator independently versus training a MOL model using our proposed method. Our results demonstrate enhanced efficiency and satisfactory accuracy. Moreover, our approach shows that some operators with limited data can be constructed more effectively with the aid of data from analogous operators through MOL. This highlights another way in which MOL can bolster operator learning.
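The architecture described above — a shared encoder for the input function, plus operator-specific output basis functions — can be sketched in a DeepONet-style form. The following is a minimal illustrative sketch, not the authors' implementation: the operator names, layer sizes, and sensor count are assumptions, the networks use untrained random weights, and the distributed training procedure itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight MLP parameters (illustrative only; no training performed).
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # Standard tanh MLP forward pass with a linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 50, 32  # number of input-function sensors; size of the shared latent basis

# One branch net shared by ALL operators: encodes the sampled input function.
branch = mlp([m, 64, p])
# One trunk net PER operator: learns that operator's own output basis functions.
trunks = {op: mlp([1, 64, p]) for op in ("heat", "burgers")}  # hypothetical operators

def predict(op, f_samples, y):
    # DeepONet-style inner product: u(y) ~ sum_k branch_k(f) * trunk_k(y).
    b = forward(branch, f_samples)        # (p,)  shared input-function encoding
    t = forward(trunks[op], y[:, None])   # (n_query, p)  operator-specific basis
    return t @ b                          # (n_query,)

f = np.sin(np.linspace(0, np.pi, m))      # a sampled input function
y = np.linspace(0.0, 1.0, 10)             # query points in the output domain
u = predict("heat", f, y)
print(u.shape)  # (10,)
```

In this setup, each operator's trunk net can be trained on that operator's data alone, while gradients for the shared branch net are accumulated across all operators' datasets — which is the split the abstract's "independent output bases, centralized input encoding" idea refers to.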

Updated: 2024-07-30