PromptSMILES: prompting for scaffold decoration and fragment linking in chemical language models
Journal of Cheminformatics (IF 7.1), Pub Date: 2024-07-04, DOI: 10.1186/s13321-024-00866-5
Morgan Thomas 1, Mazen Ahmad 2, Gary Tresadern 2, Gianni de Fabritiis 1,3,4

SMILES-based generative models are amongst the most robust and successful recent methods used to augment drug design. They are typically used for complete de novo generation; however, scaffold decoration and fragment linking applications are sometimes desirable, and these require a different grammar, architecture, and training dataset, and therefore re-training of a new model. In this work, we describe a simple procedure to conduct constrained molecule generation with a SMILES-based generative model that extends applicability to scaffold decoration and fragment linking by providing SMILES prompts, without the need for re-training. In combination with reinforcement learning, we show that pre-trained, decoder-only models adapt to these applications quickly and can further optimize molecule generation towards a specified objective. We compare the performance of this approach to a variety of orthogonal approaches and show that performance is comparable or better. For convenience, we provide an easy-to-use Python package to facilitate model sampling, which can be found on GitHub and the Python Package Index. Scientific contribution: This novel method extends an autoregressive chemical language model to scaffold decoration and fragment linking scenarios. It does not require re-training, the use of a bespoke grammar, or curation of a custom dataset, as commonly required by other approaches.
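
To illustrate the prompting idea described in the abstract, the sketch below shows one way a scaffold SMILES could be rearranged so that an attachment point falls at the end of the string and then handed to an autoregressive model as a prompt to complete. This is only a minimal illustration under assumptions: it does not reproduce the authors' promptsmiles package or its API; the `attachment_prompt` and `decorate` helpers and the `sample_completion` callable are hypothetical stand-ins, with RDKit used for the SMILES rearrangement.

```python
"""Minimal sketch of SMILES prompting for scaffold decoration (not the
authors' promptsmiles API). Idea: rewrite the scaffold SMILES so an
attachment point ('*') sits at the very end of the string, drop that token,
and let an autoregressive chemical language model complete the rest."""
from rdkit import Chem


def attachment_prompt(scaffold_smiles: str, max_tries: int = 100) -> str:
    """Return a SMILES prefix that ends exactly where an attachment point was."""
    mol = Chem.MolFromSmiles(scaffold_smiles)
    if mol is None:
        raise ValueError(f"Could not parse scaffold: {scaffold_smiles}")
    for _ in range(max_tries):
        # Randomised (non-canonical) re-orderings of the same molecule.
        smi = Chem.MolToSmiles(mol, canonical=False, doRandom=True)
        if smi.endswith("[*]"):
            return smi[:-3]
        if smi.endswith("*"):
            return smi[:-1]
    raise RuntimeError("No SMILES ordering placed an attachment point last")


def decorate(scaffold_smiles: str, sample_completion) -> str:
    """Prompt the model with the rearranged scaffold and append its completion."""
    prompt = attachment_prompt(scaffold_smiles)
    # sample_completion(prompt) stands in for a decoder-only CLM that
    # generates the remaining SMILES tokens given the prompt as a prefix.
    return prompt + sample_completion(prompt)


if __name__ == "__main__":
    # Toy stand-in for a chemical language model: always appends an ethyl group.
    out = decorate("c1ccc(*)cc1", lambda prompt: "CC")
    print(out, "valid:", Chem.MolFromSmiles(out) is not None)
```

In the same spirit, fragment linking could be treated as prompting with one fragment rearranged so its attachment point is terminal, sampling a linker, and repeating for the second fragment; the paper couples such prompted sampling with reinforcement learning to steer generation toward an objective.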

Updated: 2024-07-04