A differentiable first-order rule learner for inductive logic programming
Artificial Intelligence (IF 14.4) Pub Date: 2024-03-15, DOI: 10.1016/j.artint.2024.104108
Kun Gao, Katsumi Inoue, Yongzhi Cao, Hanpin Wang

Learning first-order logic programs from relational facts yields intuitive insights into the data. Inductive logic programming (ILP) models are effective at learning first-order logic programs from observed relational data. Symbolic ILP models support rule learning in a data-efficient manner; however, they are not robust when learning from noisy data. Neuro-symbolic ILP models use neural networks to learn logic programs in a differentiable manner, which improves the robustness of ILP models. However, most neuro-symbolic methods require a strong language bias to learn logic programs, which reduces the usability and flexibility of ILP models and restricts the formats of the learned logic programs. In addition, most neuro-symbolic ILP methods cannot learn logic programs effectively from both small datasets and large datasets such as knowledge graphs. In this paper, we introduce a novel differentiable ILP model called the differentiable first-order rule learner (DFORL), which scales to learning rules from both smaller and larger datasets. Moreover, DFORL requires only the number of variables in the learned logic programs as input; it is therefore easy to use and does not need a strong language bias. We demonstrate that DFORL performs well on several standard ILP datasets, knowledge graphs, and probabilistic relational facts, and that it outperforms several well-known differentiable ILP models. Experimental results indicate that DFORL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
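To make the neuro-symbolic idea concrete, the following is a minimal, hypothetical sketch of differentiable rule learning in general, not of the actual DFORL architecture. It assumes a simplified template-based scorer: each candidate rule body for a target predicate gets a trainable weight, examples are scored by a sigmoid-weighted combination of the grounded bodies, and gradient descent selects the body that explains the positive examples. The predicates (parent, grandparent), the candidate list, and all names are illustrative; only the number of variables per rule (here three, echoing DFORL's single language-bias input) is fixed in advance.

```python
# Hypothetical sketch of differentiable rule selection (not DFORL itself):
# learn which candidate body best defines grandparent(X, Y) by gradient descent.
import numpy as np

parent = {("ann", "bob"), ("bob", "carl"), ("bob", "dana"), ("eve", "ann")}
people = sorted({p for pair in parent for p in pair})

# Candidate bodies for grandparent(X, Y), using at most one extra variable Z.
candidates = [
    ("parent(X,Y)", lambda x, y: (x, y) in parent),
    ("parent(Y,X)", lambda x, y: (y, x) in parent),
    ("parent(X,Z), parent(Z,Y)",
     lambda x, y: any((x, z) in parent and (z, y) in parent for z in people)),
    ("parent(Z,X), parent(Z,Y)",
     lambda x, y: any((z, x) in parent and (z, y) in parent for z in people)),
]

# Ground every candidate body on all pairs; labels mark the true grandparents.
pairs = [(x, y) for x in people for y in people]
B = np.array([[float(f(x, y)) for _, f in candidates] for x, y in pairs])
labels = np.array([float(any((x, z) in parent and (z, y) in parent
                             for z in people)) for x, y in pairs])

w = np.zeros(len(candidates))                   # one logit per candidate body
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

for _ in range(500):                            # plain gradient descent
    probs = sigmoid(B @ w)                      # soft score for each pair
    grad = B.T @ (probs - labels) / len(pairs)  # cross-entropy gradient
    w -= 5.0 * grad

for (name, _), weight in zip(candidates, w):
    print(f"{weight:+6.2f}  grandparent(X,Y) :- {name}")
```

After training, the chain body parent(X,Z), parent(Z,Y) receives a large positive weight while the others are driven negative, so the learned program can be read off from the weights. Real differentiable ILP systems replace this exhaustive grounding with scalable tensor operations and handle noise through the same gradient-based training.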
