A Foundation Language-Image Model of the Retina (FLAIR): encoding expert knowledge in text supervision
Medical Image Analysis (IF 10.7), Pub Date: 2024-10-01, DOI: 10.1016/j.media.2024.103357
Julio Silva-Rodríguez, Hadi Chakor, Riadh Kobbi, Jose Dolz, Ismail Ben Ayed

Foundation vision-language models are currently transforming computer vision and are on the rise in medical imaging, fueled by their very promising generalization capabilities. However, initial attempts to transfer this new paradigm to medical imaging have shown less impressive performance than that observed in other domains, due to the significant domain shift and the complex, expert domain knowledge inherent to medical-imaging tasks. Motivated by the need for domain-expert foundation models, we present FLAIR, a pre-trained vision-language model for universal retinal fundus image understanding. To this end, we compiled 38 open-access, mostly categorical fundus imaging datasets from various sources, with up to 101 different target conditions and 288,307 images. We integrate the expert's domain knowledge in the form of descriptive textual prompts, during both pre-training and zero-shot inference, enriching the otherwise less informative categorical supervision of the data. This textual expert knowledge, compiled from the relevant clinical literature and community standards, describes the fine-grained features of the pathologies as well as the hierarchies and dependencies between them. We report comprehensive evaluations, which illustrate the benefit of integrating expert knowledge and the strong generalization capabilities of FLAIR under difficult scenarios with domain shifts or unseen categories. When adapted with a lightweight linear probe, FLAIR outperforms fully trained, dataset-focused models, and more so in the few-shot regime. Interestingly, FLAIR outperforms larger-scale generalist image-language models and retina-specific self-supervised networks by a wide margin, which emphasizes both the potential of embedding experts' domain knowledge and the limitations of generalist models in medical imaging. The pre-trained model is available at: https://github.com/jusiro/FLAIR.
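The prompt-based zero-shot mechanism described in the abstract can be sketched as follows. This is a minimal, illustrative sketch and not the official FLAIR API: a CLIP-style classifier where each retinal condition is scored against an ensemble of descriptive expert prompts rather than a bare category name. The encoder functions below are random placeholders standing in for frozen image and text encoders, and the prompt texts are hypothetical examples of the kind of expert descriptions the paper compiles.

```python
# Illustrative sketch of prompt-ensemble zero-shot classification
# (hypothetical; not the official FLAIR API). Encoders are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # embedding dimensionality (assumed)

def embed_image(image) -> np.ndarray:
    # Placeholder for a frozen vision encoder; returns an L2-normalized vector.
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def embed_text(prompt: str) -> np.ndarray:
    # Placeholder for a frozen text encoder; returns an L2-normalized vector.
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

# Expert knowledge encoded as descriptive prompts, one ensemble per category
# (hypothetical examples of fine-grained pathology descriptions).
PROMPTS = {
    "diabetic retinopathy": [
        "fundus photograph with microaneurysms and hard exudates",
        "retinal image showing dot and blot haemorrhages",
    ],
    "glaucoma": [
        "fundus photograph with an enlarged cup-to-disc ratio",
        "optic disc with neuroretinal rim thinning",
    ],
}

def zero_shot_probs(image, prompts=PROMPTS, temperature=0.07):
    """Score an image against each category's averaged prompt embedding."""
    img = embed_image(image)
    # Average each category's prompt-ensemble embeddings, then re-normalize.
    class_embs = np.stack(
        [np.mean([embed_text(p) for p in ps], axis=0) for ps in prompts.values()]
    )
    class_embs /= np.linalg.norm(class_embs, axis=1, keepdims=True)
    # Temperature-scaled cosine similarities, softmaxed into probabilities.
    logits = class_embs @ img / temperature
    exp = np.exp(logits - logits.max())
    return dict(zip(prompts, exp / exp.sum()))

probs = zero_shot_probs(image=None)  # image arg unused by the placeholder encoder
print(probs)
```

With richer prompts per category, the averaged text embedding acts as a more informative class prototype than a single label embedding, which is the intuition behind replacing bare categorical supervision with descriptive text.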
