SIGEST
SIAM Review (IF 10.8), Pub Date: 2024-08-08, DOI: 10.1137/24n975943, The Editors
SIAM Review, Volume 66, Issue 3, Page 533, May 2024.
The SIGEST article in this issue is “Operator Learning Using Random Features: A Tool for Scientific Computing,” by Nicholas H. Nelsen and Andrew M. Stuart. This work considers the problem of operator learning between infinite-dimensional Banach spaces through the use of random features. The driving application is the approximation of solution operators of partial differential equations (PDEs), foremost time-dependent problems, which are naturally posed in infinite-dimensional function spaces. In contrast to the mainstream big-data regimes of machine learning applications such as computer vision, the high-resolution data here, coming from physical experiments or from computationally expensive simulations of such differential equations, is typically scarce. Fast approximate surrogates built from such data can be advantageous, for instance, in building forward models for inverse problems or for uncertainty quantification. Showing how this can be done in infinite dimensions yields approximators that are resolution- and discretization-invariant from the outset, allowing training at one resolution and deployment at another.

At the heart of this work is the function-valued random features methodology, which the authors extend from the finite-dimensional setting of the classical random features approach. The nonlinear operator is approximated by a linear combination of random operators; this yields a low-rank approximation whose computation amounts to a convex quadratic optimization problem that is efficiently solvable and for which convergence guarantees can be derived. The methodology is then applied to two concrete PDE examples, Burgers' equation and Darcy flow, demonstrating the applicability of the function-valued random features method, its scalability, discretization invariance, and transferability.

The original 2021 article, which appeared in SIAM's Journal on Scientific Computing, has attracted considerable attention. In preparing this SIGEST version, the authors made numerous revisions: they expanded the introduction and the concluding remarks, condensed the technical content to make it more accessible, and added a link to an open-access GitHub repository that contains all data and code used to produce the results in the paper.
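To make the "linear combination of random features fit by a convex quadratic problem" concrete, the following is a minimal, self-contained sketch of random features ridge regression on grid-discretized function data. It is not the authors' implementation: the grid discretization, the particular feature map (a random linear functional of the input passed through tanh and multiplied by a fixed random output profile), the synthetic "operator" (a shift), and all parameter values are illustrative assumptions; the paper itself uses structured function-valued features adapted to the PDE at hand, and absorbs normalization constants differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: inputs and outputs are functions sampled on a grid of n points.
n = 128      # grid resolution
m = 500      # number of random features
N = 200      # number of training pairs
lam = 1e-6   # ridge regularization strength

# Synthetic training data standing in for PDE input/output pairs (u_i, v_i);
# the "operator" here is just a noisy shift, chosen only for illustration.
U = rng.standard_normal((N, n))
V = np.roll(U, 5, axis=1) + 0.01 * rng.standard_normal((N, n))

# One simple choice of function-valued random feature:
# phi_j(u) = tanh(<w_j, u>/n + b_j) * g_j, with w_j, b_j, g_j drawn once and frozen.
W = rng.standard_normal((m, n))
b = rng.uniform(0.0, 2.0 * np.pi, size=m)
G = rng.standard_normal((m, n))

def features(U_batch):
    """Return Phi with Phi[i, j, :] = phi_j(u_i), one n-vector per feature."""
    scalars = np.tanh(U_batch @ W.T / n + b)       # shape (batch, m)
    return scalars[:, :, None] * G[None, :, :]     # shape (batch, m, n)

# Fitting the coefficients alpha in F(u) = sum_j alpha_j * phi_j(u) is a convex
# quadratic problem; its normal equations are assembled and solved directly.
Phi = features(U)                                   # (N, m, n)
A = np.einsum('ijn,ikn->jk', Phi, Phi) / N + lam * np.eye(m)
c = np.einsum('ijn,in->j', Phi, V) / N
alpha = np.linalg.solve(A, c)

# Predict on a new input function.
u_test = rng.standard_normal(n)
v_pred = np.einsum('j,jn->n', alpha, features(u_test[None])[0])
```

Because only the scalar coefficients alpha are learned while the features stay fixed, the objective (mean squared error plus a ridge penalty) is quadratic in alpha, which is why the fit reduces to a single linear solve with guarantees, in contrast to the nonconvex training of standard neural operators. Discretization invariance in the paper comes from defining the features on functions rather than on a fixed grid; in this sketch the grid is baked in, so that property is not reproduced here.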