Explaining the Explainers in Graph Neural Networks: a Comparative Study
ACM Computing Surveys (IF 23.8) Pub Date: 2024-09-24, DOI: 10.1145/3696444
Antonio Longa, Steve Azzolin, Gabriele Santin, Giulia Cencetti, Pietro Lio, Bruno Lepri, Andrea Passerini

Following a fast initial breakthrough in graph-based learning, Graph Neural Networks (GNNs) have achieved widespread application in many science and engineering fields, prompting the need for methods to understand their decision process. GNN explainers have started to emerge in recent years, with a multitude of methods either novel or adapted from other domains. To sort out this plethora of alternative approaches, several studies have benchmarked the performance of different explainers in terms of various explainability metrics. However, these earlier works make no attempt to provide insights into why different GNN architectures are more or less explainable, or which explainer should be preferred in a given setting. In this survey we fill these gaps by devising a systematic experimental study, which tests twelve explainers on eight representative message-passing architectures trained on six carefully designed graph and node classification datasets. With our results we provide key insights on the choice and applicability of GNN explainers, isolate the key components that make them usable and successful, and provide recommendations on how to avoid common interpretation pitfalls. We conclude by highlighting open questions and directions for possible future research.
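The kind of explainer benchmarked in the survey can be illustrated with a minimal, self-contained sketch: a toy one-layer message-passing model and a perturbation-based (edge-occlusion) explainer that scores each edge by how much its removal changes the target node's output. All function names, the graph, and the numbers below are illustrative assumptions, not taken from the paper or any specific library.

```python
import numpy as np

# Toy one-layer message-passing model: node features are mean-aggregated
# over neighbors (plus a self-loop), then scored by a fixed linear readout.
def message_pass(x, adj):
    deg = adj.sum(axis=1, keepdims=True) + 1.0  # +1 for the self-loop
    return (adj @ x + x) / deg

def node_score(x, adj, w):
    return message_pass(x, adj) @ w

def edge_occlusion_importance(x, adj, w, node):
    """Score each undirected edge by the change in the target node's
    output when that edge is removed (a simple perturbation explainer)."""
    base = node_score(x, adj, w)[node]
    importance = {}
    for i, j in zip(*np.nonzero(np.triu(adj))):
        pert = adj.copy()
        pert[i, j] = pert[j, i] = 0.0  # occlude the edge in both directions
        importance[(int(i), int(j))] = float(abs(node_score(x, pert, w)[node] - base))
    return importance

# Tiny 4-node path graph: 0-1-2-3, with a large feature on node 3.
adj = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = 1.0
x = np.array([[1.0], [0.0], [0.0], [5.0]])
w = np.array([1.0])

imp = edge_occlusion_importance(x, adj, w, node=2)
# Edge (2, 3) carries node 3's large feature into node 2's aggregation,
# so it receives the highest importance score.
print(max(imp, key=imp.get))  # → (2, 3)
```

This captures, in miniature, the shared structure of many GNN explainers: define a model output of interest, perturb parts of the input graph, and attribute importance from the resulting change. Real explainers differ mainly in how the perturbation or mask is searched and regularized.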

Updated: 2024-09-24