Multimodal language and graph learning of adsorption configurations in heterogeneous catalysis
Nature Machine Intelligence (IF 18.8) | Pub Date: 2024-11-27 | DOI: 10.1038/s42256-024-00930-7
Janghoon Ock, Srivathsan Badrinarayanan, Rishikesh Magar, Akshay Antony, Amir Barati Farimani
Adsorption energy is a reactivity descriptor that must be predicted accurately for machine learning to be effective in catalyst screening. Predicting it involves finding the lowest energy among the different adsorption configurations on a catalytic surface, which often have very similar energies. Although graph neural networks have shown great success in computing the energy of catalyst systems, they rely heavily on atomic spatial coordinates. By contrast, transformer-based language models can directly use human-readable text inputs, potentially bypassing the need for detailed atomic positions or topology; however, these language models often struggle to predict the energy of adsorption configurations accurately. Our study improves the predictive language model by aligning its latent space with well-established graph neural networks through a self-supervised process called graph-assisted pretraining. This method reduces the mean absolute error of energy prediction for adsorption configurations by 7.4–9.8% and redirects the model's attention towards the adsorption configuration. Building on this, we propose using generative large language models to create text inputs for the predictive model without relying on exact atomic positions, demonstrating a potential use of language models for energy prediction when detailed geometric information is unavailable.
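To make the graph-assisted pretraining step concrete, below is a minimal PyTorch sketch assuming an InfoNCE-style contrastive objective that pulls the language model's embedding of each adsorption configuration towards a frozen, pretrained GNN's embedding of the same configuration. The `TextEncoder` class, the dummy tensors and the specific loss are illustrative assumptions, not the authors' implementation; the paper's exact alignment objective may differ.

```python
# Sketch: aligning a text encoder's latent space with precomputed GNN embeddings.
# All names here are illustrative assumptions, not the authors' actual code.
import torch
import torch.nn.functional as F
from torch import nn

class TextEncoder(nn.Module):
    """Stand-in for a transformer language model that maps a textual description
    of an adsorption configuration to a fixed-size embedding."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, token_ids):                 # (batch, seq_len)
        h = self.encoder(self.embed(token_ids))   # (batch, seq_len, dim)
        return h.mean(dim=1)                      # mean-pool to (batch, dim)

def alignment_loss(text_emb, graph_emb, temperature=0.1):
    """InfoNCE-style contrastive loss: each text embedding is pulled toward the
    GNN embedding of the same configuration and pushed away from the others."""
    text_emb = F.normalize(text_emb, dim=-1)
    graph_emb = F.normalize(graph_emb, dim=-1)
    logits = text_emb @ graph_emb.T / temperature  # (batch, batch) similarities
    targets = torch.arange(len(text_emb))          # matched pairs on the diagonal
    return F.cross_entropy(logits, targets)

# One pretraining step: the GNN embeddings act as fixed targets (GNN frozen),
# so only the text encoder's latent space moves.
model = TextEncoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

token_ids = torch.randint(0, 30522, (8, 64))   # dummy tokenized text inputs
graph_emb = torch.randn(8, 256)                # dummy precomputed GNN embeddings

loss = alignment_loss(model(token_ids), graph_emb.detach())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

After this alignment stage, the text encoder would be fine-tuned on energy labels, so the downstream predictor consumes only text while inheriting structure-aware features from the graph model.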
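The second idea, generating the predictive model's text input with a generative large language model rather than from exact atomic positions, can be sketched as a simple prompting pipeline. The prompt wording and the `generate_text` placeholder below are assumptions for illustration; the paper's actual prompting scheme may differ.

```python
# Hypothetical sketch: an LLM produces the human-readable configuration text
# (adsorbate, surface, binding site) that the predictive model consumes,
# without numeric atomic coordinates.

def build_prompt(adsorbate: str, surface: str, miller: str) -> str:
    """Assemble a prompt asking a generative LLM to describe a plausible
    adsorption configuration in the textual format the predictor expects."""
    return (
        f"Describe the adsorption configuration of {adsorbate} on the "
        f"{surface} {miller} surface. State the likely binding site "
        f"(top, bridge or hollow) and the surface atoms it coordinates to, "
        f"without giving numeric atomic coordinates."
    )

def generate_text(prompt: str) -> str:
    """Placeholder for a call to any generative LLM; swap in a real client
    (an API call or a local model) here."""
    raise NotImplementedError

prompt = build_prompt("*CO", "Cu", "(1 1 1)")
# config_text = generate_text(prompt)
# energy = predictive_model(tokenize(config_text))  # downstream energy prediction
```

The design choice this illustrates is that the textual interface decouples energy prediction from relaxed geometries: any source that can emit a well-formed configuration description, human or LLM, can feed the predictor.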