EviPrompt: A Training-Free Evidential Prompt Generation Method for Adapting Segment Anything Model in Medical Images
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2024-10-22 , DOI: 10.1109/tip.2024.3482175 Yinsong Xu, Jiaqi Tang, Aidong Men, Qingchao Chen
Medical image segmentation is a critical task in clinical applications. Recently, the Segment Anything Model (SAM) has demonstrated potential for natural image segmentation. However, the requirement for expert labour to provide prompts, and the domain gap between natural and medical images, pose significant obstacles to adapting SAM to medical images. To overcome these challenges, this paper introduces a novel prompt generation method named EviPrompt. The proposed method requires only a single reference image-annotation pair, making it a training-free solution that significantly reduces the need for extensive labelling and computational resources. First, prompts are automatically generated based on the similarity between features of the reference and target images, and evidential learning is introduced to improve reliability. Then, to mitigate the impact of the domain gap, committee voting and inference-guided in-context learning are employed, generating prompts primarily based on human prior knowledge and reducing reliance on extracted semantic information. EviPrompt represents an efficient and robust approach to medical image segmentation. We evaluate it across a broad range of tasks and modalities, confirming its efficacy. The source code is available at https://github.com/SPIresearch/EviPrompt.
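The core step outlined in the abstract, generating a prompt from the feature similarity between a reference image-annotation pair and the target image, can be illustrated with a minimal sketch. This is not the authors' implementation (see the repository above for that); it assumes feature maps from a frozen encoder such as SAM's image encoder, and the function and variable names are hypothetical. Evidential reliability weighting, committee voting, and inference-guided in-context learning described in the paper are not shown here.

```python
import numpy as np

def generate_point_prompt(ref_feats, ref_mask, tgt_feats):
    """Sketch: pick a point prompt in the target image via feature similarity.

    ref_feats, tgt_feats: (H, W, C) feature maps from a frozen encoder.
    ref_mask: (H, W) binary annotation of the reference image.
    Returns (row, col) of the most similar target location and the
    full similarity map, which a reliability estimate could weight.
    """
    # Mean (prototype) feature of the annotated region in the reference image.
    fg = ref_feats[ref_mask.astype(bool)]              # (N, C)
    proto = fg.mean(axis=0)
    proto /= (np.linalg.norm(proto) + 1e-8)

    # Cosine similarity between the prototype and every target location.
    tgt = tgt_feats / (np.linalg.norm(tgt_feats, axis=-1, keepdims=True) + 1e-8)
    sim = tgt @ proto                                   # (H, W)

    # The highest-similarity location serves as a positive point prompt for SAM.
    row, col = np.unravel_index(np.argmax(sim), sim.shape)
    return (row, col), sim
```

Under this sketch, the returned coordinate would be fed to SAM's prompt encoder as a positive point, so no gradient updates or additional labels are needed beyond the single reference pair.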
Updated: 2024-10-22