MolPROP: Molecular Property prediction with multimodal language and graph fusion
Journal of Cheminformatics (IF 7.1), Pub Date: 2024-05-22, DOI: 10.1186/s13321-024-00846-9
Zachary A. Rollins, Alan C. Cheng, Essam Metwally

Pretrained deep learning models, self-supervised on large datasets of language, image, and graph representations, are often fine-tuned on downstream tasks and have demonstrated remarkable adaptability in a variety of applications, including chatbots, autonomous driving, and protein folding. Further research aims to improve performance on downstream tasks by fusing high-dimensional data representations across multiple modalities. In this work, we explore a novel fusion of a pretrained language model, ChemBERTa-2, with graph neural networks for the task of molecular property prediction. We benchmark the MolPROP suite of models on seven scaffold-split MoleculeNet datasets and compare them with state-of-the-art architectures. We find that (1) multimodal property prediction for small molecules can match or significantly outperform modern architectures on hydration free energy (FreeSolv), experimental water solubility (ESOL), lipophilicity (Lipo), and clinical toxicity (ClinTox) tasks; (2) the MolPROP multimodal fusion is predominantly beneficial on regression tasks; (3) the ChemBERTa-2 masked language modeling (MLM) pretraining task outperformed the multitask regression (MTR) pretraining task when fused with graph neural networks for multimodal property prediction; and (4) despite improvements from multimodal fusion on regression tasks, MolPROP significantly underperforms on some classification tasks. MolPROP has been made available at https://github.com/merck/MolPROP . This work explores a novel multimodal fusion of learned language and graph representations of small molecules for the supervised task of molecular property prediction. The MolPROP suite of models demonstrates that language and graph fusion can significantly outperform modern architectures on several regression tasks, and it also opens the opportunity to explore alternative fusion strategies for classification tasks in multimodal molecular property prediction.
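The fusion idea described in the abstract — combining a learned language embedding of a molecule's SMILES string with a graph-level embedding of its atoms and bonds, then predicting a property from the fused vector — can be sketched as follows. This is an illustrative toy, not the paper's implementation: a small embedding layer stands in for the pretrained ChemBERTa-2 encoder, a single dense message-passing step stands in for the graph neural network, and the concatenation-then-MLP fusion is one common strategy among those the paper could use.

```python
import torch
import torch.nn as nn

class FusionPropertyModel(nn.Module):
    """Minimal sketch of language/graph fusion for molecular property prediction.

    Hypothetical stand-ins: `token_embed` replaces a pretrained ChemBERTa-2
    encoder over SMILES tokens, and `gnn_lin` applied to `adj @ node_feats`
    approximates one GCN-style neighborhood-aggregation layer.
    """

    def __init__(self, vocab_size=32, lang_dim=16, node_dim=8, hidden=32):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, lang_dim)  # language branch
        self.gnn_lin = nn.Linear(node_dim, hidden)             # graph branch
        self.head = nn.Sequential(                             # fused regression head
            nn.Linear(lang_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, token_ids, node_feats, adj):
        # Language branch: mean-pool token embeddings of the SMILES string.
        lang = self.token_embed(token_ids).mean(dim=1)               # (B, lang_dim)
        # Graph branch: one round of neighbor aggregation, then mean-pool atoms.
        graph = torch.relu(self.gnn_lin(adj @ node_feats)).mean(dim=1)  # (B, hidden)
        # Fuse by concatenation and predict a scalar property (e.g. solubility).
        return self.head(torch.cat([lang, graph], dim=-1))           # (B, 1)

# Toy batch: 2 molecules, 5 SMILES tokens and 4 atoms each (random features).
model = FusionPropertyModel()
tokens = torch.randint(0, 32, (2, 5))
nodes = torch.randn(2, 4, 8)
adj = torch.eye(4).expand(2, 4, 4)  # self-loops only, purely for illustration
print(model(tokens, nodes, adj).shape)  # torch.Size([2, 1])
```

For regression tasks such as FreeSolv or ESOL, the scalar head above would be trained with an MSE loss; for classification tasks such as ClinTox, it would be replaced by a logit head with a cross-entropy loss.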

Updated: 2024-05-23