Iterative algorithms for partitioned neural network approximation to partial differential equations
Computers & Mathematics with Applications (IF 2.9) Pub Date: 2024-07-22, DOI: 10.1016/j.camwa.2024.07.007
Hee Jun Yang , Hyea Hyun Kim

To enhance solution accuracy and training efficiency in neural network approximation of partial differential equations, partitioned neural networks can serve as a solution surrogate in place of a single large, deep neural network defined on the whole problem domain. In such a partitioned approach, suitable interface conditions or subdomain boundary conditions are imposed to obtain a convergent approximate solution. However, the convergence and parallel-computing performance of the partitioned neural network approach have not been rigorously studied. In this paper, iterative algorithms are proposed to enhance parallel computation performance in the partitioned neural network approximation. The iterative algorithms are based on classical additive Schwarz domain decomposition methods. Their convergence is analyzed under an error assumption on the local and coarse neural network solutions. Numerical results are included to demonstrate the performance of the proposed iterative algorithms.
