Randomized greedy magic point selection schemes for nonlinear model reduction
Advances in Computational Mathematics ( IF 1.7 ) Pub Date : 2024-07-22 , DOI: 10.1007/s10444-024-10172-1
Ralf Zimmermann , Kai Cheng

An established way to tackle model nonlinearities in projection-based model reduction is to rely on partial information. This idea is shared by gappy proper orthogonal decomposition (POD), missing point estimation (MPE), masked projection, hyper-reduction, and the (discrete) empirical interpolation method (DEIM). The selected indices of the partial-information components are often referred to as "magic points." The original contribution of this work is a novel randomized greedy magic point selection. It is known that the greedy method amounts to minimizing the norm of an oblique projection operator, which, in turn, amounts to solving a sequence of rank-one SVD update problems. We propose simplification measures so that the resulting greedy point selection has two main features: (1) the inherent rank-one SVD update problem is tackled in such a way that its dimension does not grow with the number of selected magic points; (2) the approach is online-efficient in the sense that the computational costs are independent of the dimension of the full-scale model. To the best of our knowledge, this is the first greedy magic point selection with this property. We illustrate the findings by means of numerical examples. The computational cost of the proposed method is orders of magnitude lower than that of its deterministic counterpart, while the prediction accuracy is just as good, if not better. Compared with a state-of-the-art randomized method based on leverage scores, the randomized greedy method outperforms its competitor.
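For context, the classical (deterministic) DEIM greedy index selection that the paper's randomized scheme builds on can be sketched as follows. This is a minimal NumPy illustration of the standard algorithm, not the authors' randomized variant; the function name `deim_indices` and the example basis are illustrative assumptions.

```python
import numpy as np

def deim_indices(U):
    """Classical greedy DEIM magic point selection.

    U: (n, m) array whose columns are POD basis vectors.
    Returns m interpolation ("magic point") row indices.
    """
    n, m = U.shape
    # First index: largest-magnitude entry of the first basis vector
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector at the indices chosen so far
        c = np.linalg.solve(U[np.ix_(p, range(j))], U[p, j])
        r = U[:, j] - U[:, :j] @ c  # interpolation residual
        # Next index: where the residual is largest in magnitude
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# Example: a random orthonormal basis with 50 rows and 5 columns
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 5)))
idx = deim_indices(Q)
print(idx)  # 5 distinct row indices
```

Each step costs O(n·m) or more in the full dimension n, which is exactly the online cost the paper's randomized greedy scheme avoids.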




Updated: 2024-07-22