Transformers as neural operators for solutions of differential equations with finite regularity
Computer Methods in Applied Mechanics and Engineering (IF 6.9) · Pub Date: 2024-11-28 · DOI: 10.1016/j.cma.2024.117560 · Benjamin Shih, Ahmad Peyvan, Zhongqiang Zhang, George Em Karniadakis
Neural operator learning models have emerged as highly effective surrogates in data-driven methods for partial differential equations (PDEs) across applications in computational science and engineering. Such operator learning models not only predict particular instances of a physical or biological system in real time but also forecast classes of solutions corresponding to a distribution of initial conditions, boundary conditions, or forcing terms. DeepONet, the first neural operator model, has been tested extensively on a broad class of solutions, including Riemann problems. Transformers have not been used in that capacity and, in particular, have not been tested on solutions of PDEs with low regularity. In this work, we first establish the theoretical groundwork showing that transformers possess the universal approximation property as operator learning models. We then apply transformers to forecast solutions of diverse dynamical systems with solutions of finite regularity for a range of initial conditions and forcing terms. In particular, we consider three examples: the Izhikevich neuron model, the tempered fractional-order Leaky Integrate-and-Fire (LIF) model, and the one-dimensional Euler equation Riemann problem. For the latter problem, we also compare with variants of DeepONet, and we find that transformers outperform DeepONet in accuracy but are computationally more expensive.
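To make the operator-learning setup concrete, the sketch below shows the basic idea of using a transformer-style attention layer as a neural operator: the input function u is sampled at sensor points, each (coordinate, value) pair is embedded as a token, self-attention mixes the tokens, and a readout head produces the predicted solution values. This is a minimal illustration with random, untrained weights, not the architecture used in the paper; all layer names, sizes, and the embedding scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # X: (tokens, d) -> attended features of the same shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]), axis=-1)
    return A @ V

# Operator input: a function u sampled at m sensor points.
m, d = 16, 8
xs = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * xs)                      # one sampled input function

# Token embedding: lift each (coordinate, value) pair to d dimensions.
W_embed = rng.normal(size=(2, d)) / np.sqrt(2)
tokens = np.stack([xs, u], axis=-1) @ W_embed   # (m, d)

# One self-attention layer with random (untrained) weights.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
h = self_attention(tokens, Wq, Wk, Wv)          # (m, d)

# Readout: project attended features to predicted solution values G(u)(x).
W_out = rng.normal(size=(d, 1)) / np.sqrt(d)
pred = (h @ W_out).squeeze(-1)                  # (m,) predicted values
print(pred.shape)
```

Training such a model would fit the weights on many (input function, solution) pairs, so that one forward pass maps a new initial condition or forcing term to its solution; that amortized inference is what makes operator surrogates attractive for the Riemann-type problems discussed above.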
Updated: 2024-11-28