Distributed constrained combinatorial optimization leveraging hypergraph neural networks
Nature Machine Intelligence (IF 18.8) Pub Date: 2024-05-30, DOI: 10.1038/s42256-024-00833-7
Nasimeh Heydaribeni, Xinrui Zhan, Ruisi Zhang, Tina Eliassi-Rad, Farinaz Koushanfar

Scalable addressing of high-dimensional constrained combinatorial optimization problems is a challenge that arises in several science and engineering disciplines. Recent work introduced novel applications of graph neural networks for solving quadratic-cost combinatorial optimization problems. However, effective utilization of models such as graph neural networks to address general problems with higher-order constraints is an unresolved challenge. This paper presents a framework, HypOp, that advances the state of the art for solving combinatorial optimization problems in several aspects: (1) it generalizes the prior results to higher-order constrained problems with arbitrary cost functions by leveraging hypergraph neural networks; (2) it enables scalability to larger problems by introducing a new distributed and parallel training architecture; (3) it demonstrates generalizability across different problem formulations by transferring knowledge within the same hypergraph; (4) it substantially boosts the solution accuracy compared with the prior art by suggesting a fine-tuning step using simulated annealing; and (5) it shows remarkable progress on numerous benchmark examples, including hypergraph MaxCut, satisfiability and resource allocation problems, with notable run-time improvements using a combination of fine-tuning and distributed training techniques. We showcase the application of HypOp in scientific discovery by solving a hypergraph MaxCut problem on a National Drug Code drug-substance hypergraph. Through extensive experimentation on various optimization problems, HypOp demonstrates superiority over existing unsupervised-learning-based solvers and generic optimization methods.
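To make the two computational ingredients named in the abstract more concrete, the sketch below (not the authors' HypOp implementation) minimizes a differentiable relaxation of a toy hypergraph MaxCut objective and then refines the rounded assignment with a simulated-annealing pass, mirroring the "unsupervised training plus fine-tuning" pipeline described above. The toy hypergraph, the direct logit parameterization, and the finite-difference gradients are all assumptions made to keep the example self-contained and dependency-light; HypOp instead produces the node assignments with a distributed hypergraph neural network.

```python
# Illustrative sketch only: relaxed hypergraph MaxCut + simulated-annealing refinement.
import numpy as np

rng = np.random.default_rng(0)

# Toy hypergraph: 6 nodes, hyperedges given as lists of node indices (an assumption).
num_nodes = 6
hyperedges = [[0, 1, 2], [2, 3], [3, 4, 5], [0, 5], [1, 4, 5]]

def expected_uncut(p, edges):
    """Expected number of UNcut hyperedges if node v joins side 1 independently
    with probability p[v]; a hyperedge is uncut when all its nodes agree."""
    return sum(np.prod(p[e]) + np.prod(1.0 - p[e]) for e in edges)

# --- Stage 1: unsupervised training of a relaxed assignment -----------------
logits = rng.normal(size=num_nodes)
lr, eps = 0.5, 1e-5
for step in range(300):
    p = 1.0 / (1.0 + np.exp(-logits))            # sigmoid probabilities
    base = expected_uncut(p, hyperedges)
    grad = np.zeros(num_nodes)
    # Finite-difference gradient keeps the sketch dependency-free;
    # autograd on a hypergraph neural network would be used in practice.
    for v in range(num_nodes):
        logits[v] += eps
        pv = 1.0 / (1.0 + np.exp(-logits))
        grad[v] = (expected_uncut(pv, hyperedges) - base) / eps
        logits[v] -= eps
    logits -= lr * grad

x = (1.0 / (1.0 + np.exp(-logits)) > 0.5).astype(int)   # round to a binary cut

def cut_value(x, edges):
    """Number of hyperedges whose nodes fall on both sides of the partition."""
    return sum(0 < x[e].sum() < len(e) for e in edges)

# --- Stage 2: simulated-annealing fine-tuning over single-node flips --------
best, temp = x.copy(), 1.0
for it in range(500):
    v = rng.integers(num_nodes)
    cand = x.copy()
    cand[v] ^= 1                                  # flip one node's side
    delta = cut_value(cand, hyperedges) - cut_value(x, hyperedges)
    if delta >= 0 or rng.random() < np.exp(delta / temp):
        x = cand
        if cut_value(x, hyperedges) > cut_value(best, hyperedges):
            best = x.copy()
    temp *= 0.99                                  # geometric cooling schedule

print("partition:", best, "cut:", cut_value(best, hyperedges), "of", len(hyperedges))
```

The product term in `expected_uncut` is one natural higher-order analogue of the quadratic losses used in prior graph-neural-network solvers: it reduces to the familiar pairwise MaxCut relaxation when every hyperedge has exactly two nodes.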




Updated: 2024-05-30