PathoDuet: Foundation models for pathological slide analysis of H&E and IHC stains
Medical Image Analysis (IF 10.7), Pub Date: 2024-07-31, DOI: 10.1016/j.media.2024.103289
Shengyi Hua, Fang Yan, Tianle Shen, Lei Ma, Xiaofan Zhang

Large amounts of digitized histopathological data point to a promising future for developing pathological foundation models via self-supervised learning methods. Foundation models pretrained with these methods serve as a good basis for downstream tasks. However, the gap between natural and histopathological images hinders the direct application of existing methods. In this work, we present PathoDuet, a series of models pretrained on histopathological images, and a new self-supervised learning framework for histopathology. The framework features a newly introduced pretext token and subsequent task raisers that explicitly exploit certain relations between images, such as multiple magnifications and multiple stains. Based on this, two pretext tasks, cross-scale positioning and cross-stain transferring, are designed to pretrain the model on Hematoxylin and Eosin (H&E) images and transfer the model to immunohistochemistry (IHC) images, respectively. To validate the efficacy of our models, we evaluate their performance on a wide variety of downstream tasks, including patch-level colorectal cancer subtyping and whole slide image (WSI)-level classification in the H&E field, together with expression level prediction of IHC markers, tumor identification, and slide-level qualitative analysis in the IHC field. The experimental results show the superiority of our models on most tasks and the efficacy of the proposed pretext tasks. The code and models are available at https://github.com/openmedlab/PathoDuet.
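The downstream evaluations described above follow the standard linear-probe recipe for foundation models: freeze the pretrained backbone, extract a feature vector per patch, and train a lightweight classifier on top. The sketch below illustrates only that recipe, not the PathoDuet pipeline itself; the random vectors stand in for real backbone embeddings, and the feature dimension, class count, and training settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes; a real setup would use the backbone's embedding
# dimension and the downstream task's label set (e.g. cancer subtypes).
n_patches, feat_dim, n_classes = 200, 64, 2

# Stand-in for frozen backbone features f(x) of H&E patches.
features = rng.normal(size=(n_patches, feat_dim))

# Synthetic labels that are linearly separable in feature space,
# so a linear probe can in principle fit them.
true_w = rng.normal(size=(feat_dim, n_classes))
labels = (features @ true_w).argmax(axis=1)

# Linear probe: softmax regression trained by plain gradient descent.
W = np.zeros((feat_dim, n_classes))
onehot = np.eye(n_classes)[labels]
for _ in range(300):
    logits = features @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    grad = features.T @ (probs - onehot) / n_patches
    W -= 0.5 * grad

accuracy = ((features @ W).argmax(axis=1) == labels).mean()
```

Because the probe is linear and the backbone stays frozen, probe accuracy is commonly read as a measure of how linearly separable the pretrained features already are, which is why it is a standard evaluation for self-supervised pretraining.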

Updated: 2024-07-31