Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.
Proceedings of the IEEE International Conference on Computer Vision. Pub Date: 2020-02-27. DOI: 10.1109/iccv.2019.00028
Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Nima Tajbakhsh, Ruibin Feng, Michael B. Gotway, Yoshua Bengio, Jianming Liang

Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN "virtually heal" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.
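To make the fixed-point idea concrete, here is a minimal PyTorch-style sketch of the two pieces the abstract describes: the conditional identity loss that supervises same-domain translation, and the subtraction step that turns a trained generator into a weakly-supervised disease localizer. It assumes a StarGAN-style conditional generator `G(x, c)` that maps an image `x` toward a target domain code `c`; the names `G`, `c_src`, `c_healthy`, and the threshold value are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

# Hypothetical generator interface: G(x, c) translates image batch x
# (N, C, H, W) toward the domain given by one-hot code c, as in StarGAN.

def conditional_identity_loss(G, x, c_src):
    """Same-domain translation should behave as an identity map (the
    'fixed point'): translating an image into its own source domain must
    reproduce the image, so any pixel change is penalized with L1."""
    return F.l1_loss(G(x, c_src), x)

def localize_disease(G, x, c_healthy, threshold=0.1):
    """'Virtually heal' the input by translating it to the healthy domain,
    then subtract. Pixels the generator was forced to change form the
    difference map; thresholding it yields a coarse localization mask.
    The threshold value here is an arbitrary illustrative choice."""
    with torch.no_grad():
        x_healed = G(x, c_healthy)
    diff = (x - x_healed).abs().mean(dim=1, keepdim=True)  # per-pixel change
    mask = diff > threshold                                 # binary localization
    return diff, mask
```

The identity term is what enforces translation of a minimal subset of pixels: when source and target domains coincide, every change is penalized, so the generator learns to leave an image untouched unless the domain code demands an edit, and the cross-domain losses then confine those edits to the disease-relevant regions.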

Last updated: 2020-02-27