Partitioned neural network approximation for partial differential equations enhanced with Lagrange multipliers and localized loss functions
Computer Methods in Applied Mechanics and Engineering (IF 6.9), Pub Date: 2024-07-02, DOI: 10.1016/j.cma.2024.117168. Deok-Kyu Jang, Kyungsoo Kim, Hyea Hyun Kim
Partitioned neural network functions are used to approximate the solution of partial differential equations. The problem domain is partitioned into non-overlapping subdomains, and the partitioned neural network functions are defined on these subdomains, so that each neural network function approximates the solution in one subdomain. To obtain a convergent neural network solution, certain continuity conditions on the partitioned neural network functions across the subdomain interfaces need to be included in the loss function that is used to train the parameters of the neural network functions. In our work, by introducing suitable interface values, the loss function is reformulated into a sum of localized loss functions, and each localized loss function is used to train the corresponding local neural network parameters. In addition, to accelerate the convergence of the neural network solution, the localized loss function is enriched with an augmented Lagrangian term, in which the interface condition and the boundary condition are enforced as constraints on the local solutions by means of Lagrange multipliers. The local neural network parameters and the Lagrange multipliers are then found by optimizing the localized loss functions. To take advantage of the localized loss functions for parallel computation, an iterative algorithm is also proposed. The training performance and convergence of the proposed algorithms are studied numerically on various test examples.
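To make the construction concrete, here is a minimal sketch of a localized, augmented-Lagrangian loss of the kind described above, written for a model Poisson problem $-\Delta u = f$ in $\Omega$ with $u = g$ on $\partial\Omega$; the penalty parameter $\mu$, the interface value $v$, and the update rules below are illustrative assumptions and need not coincide with the authors' exact formulation. For a subdomain $\Omega_i$ with local network $u_i(x;\theta_i)$ and local interface $\Gamma_i = \partial\Omega_i \setminus \partial\Omega$,

\mathcal{L}_i(\theta_i,\lambda_i,\lambda_i^{b})
  = \int_{\Omega_i} \lvert \Delta u_i + f \rvert^{2}\,dx
  + \int_{\Gamma_i} \lambda_i\,(u_i - v)\,ds
  + \frac{\mu}{2}\int_{\Gamma_i} (u_i - v)^{2}\,ds
  + \int_{\partial\Omega_i \cap \partial\Omega} \lambda_i^{b}\,(u_i - g)\,ds
  + \frac{\mu}{2}\int_{\partial\Omega_i \cap \partial\Omega} (u_i - g)^{2}\,ds,

where in practice the integrals are replaced by sums over collocation points. A standard augmented-Lagrangian iteration (one common choice, not necessarily the paper's exact scheme) alternates, for $k = 0, 1, 2, \dots$,

\theta_i^{(k+1)} = \arg\min_{\theta_i} \mathcal{L}_i\bigl(\theta_i, \lambda_i^{(k)}, \lambda_i^{b,(k)}\bigr) \quad \text{in parallel over } i,
\qquad
\lambda_i^{(k+1)} = \lambda_i^{(k)} + \mu\,\bigl(u_i^{(k+1)} - v^{(k)}\bigr) \ \text{on } \Gamma_i,

with the interface value $v$ then updated from the traces of the neighboring local solutions, for example by averaging them across the interface. Because each $\mathcal{L}_i$ depends only on $\theta_i$, its own multipliers, and the shared interface data $v$, the local minimizations decouple, which is what makes a parallel iterative algorithm possible.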
Updated: 2024-07-02