Interpretable A-posteriori error indication for graph neural network surrogate models
Computer Methods in Applied Mechanics and Engineering (IF 6.9). Pub Date: 2024-11-15. DOI: 10.1016/j.cma.2024.117509. Shivam Barwey, Hojin Kim, Romit Maulik.
Data-driven surrogate modeling has surged in capability in recent years with the emergence of graph neural networks (GNNs), which can operate directly on mesh-based representations of data. The goal of this work is to introduce an interpretability enhancement procedure for GNNs, with application to unstructured mesh-based fluid dynamics modeling. Given a black-box baseline GNN model, the end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task while retaining the predictive capability of the baseline. These structures identified by the interpretable GNNs are adaptively produced in the forward pass and serve as explainable links between the baseline model architecture, the optimization goal, and known problem-specific physics. Additionally, through a regularization procedure, the interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error, adding a novel interpretable error-tagging capability to baseline models. Demonstrations are performed using unstructured flow field data sourced from flow over a backward-facing step at high Reynolds numbers, with geometry extrapolations demonstrated for ramp and wall-mounted cube configurations.
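In spirit, the adaptive sub-graph identification described above resembles a learned node-scoring step: each mesh node receives a score in the forward pass, and the top-scoring nodes form the retained sub-graph. A minimal NumPy sketch of that idea follows; the `topk_node_mask` helper and the fixed scoring weights are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

def topk_node_mask(node_features, score_weights, k):
    """Score each mesh node with a projection vector (standing in for
    trained weights) and keep the k highest-scoring nodes as the
    'identified sub-graph'. Returns a boolean mask and the raw scores."""
    scores = node_features @ score_weights          # one score per node, shape (N,)
    top_idx = np.argsort(scores)[::-1][:k]          # indices of the k largest scores
    mask = np.zeros(scores.shape[0], dtype=bool)
    mask[top_idx] = True
    return mask, scores

# Toy mesh: 100 nodes, 8 features per node (e.g. velocity components, pressure).
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 8))
w = rng.normal(size=8)                              # stand-in for learned scoring weights
mask, scores = topk_node_mask(x, w, k=10)
print(mask.sum())                                   # 10 nodes retained
```

In the paper's setting, the analogous scores are produced inside the GNN itself, so the retained nodes can be plotted on the mesh and compared against regions of high forecasting error.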
Updated: 2024-11-15