Privacy enhancing and generalizable deep learning with synthetic data for mediastinal neoplasm diagnosis
npj Digital Medicine (IF 12.4) Pub Date: 2024-10-20, DOI: 10.1038/s41746-024-01290-7
Zhanping Zhou, Yuchen Guo, Ruijie Tang, Hengrui Liang, Jianxing He, Feng Xu

The success of deep learning (DL) relies heavily on training data, from which DL models encapsulate information. Consequently, developing and deploying DL models exposes that data to potential privacy breaches, which are especially critical in data-sensitive fields such as medicine. We propose a new technique, named DiffGuard, that generates realistic and diverse synthetic medical images with annotations, indistinguishable from real images even to experts, to replace real data in DL model training. This cuts the direct link between the trained model and the real data and strengthens privacy protection. We demonstrate that DiffGuard enhances privacy, with much less data leakage and better resistance against privacy attacks on both data and model. It also improves the accuracy and generalizability of DL models for segmentation and classification of mediastinal neoplasms in a multi-center evaluation. We expect our solution to light the way toward privacy-preserving DL for precision medicine, promote data and model sharing, and inspire further innovation in artificial-intelligence-generated-content technologies for medicine.
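The core data flow described above — fit a generative model on the sensitive data, sample fresh annotated examples from it, and train the downstream model on the synthetic set only — can be sketched as follows. This is a toy illustration under stated assumptions: DiffGuard itself uses a diffusion model and real CT images, whereas here a per-class Gaussian sampler stands in for the generator, small NumPy arrays stand in for scans, and a nearest-centroid classifier stands in for the DL model. All names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real, privacy-sensitive scans: two classes of flattened 8x8 "images".
real_images = np.concatenate([
    rng.normal(0.2, 0.05, size=(50, 64)),   # class 0 (e.g. benign)
    rng.normal(0.8, 0.05, size=(50, 64)),   # class 1 (e.g. malignant)
])
real_labels = np.array([0] * 50 + [1] * 50)

def synthesize(images, labels, n_per_class, rng):
    """Toy stand-in for the generative step: fit a per-class Gaussian and
    sample fresh labeled images from it. The paper's method is a diffusion
    model; this only illustrates the data flow, not the technique itself."""
    synth_x, synth_y = [], []
    for c in np.unique(labels):
        cls = images[labels == c]
        mu, sigma = cls.mean(axis=0), cls.std(axis=0)
        synth_x.append(rng.normal(mu, sigma, size=(n_per_class, images.shape[1])))
        synth_y.append(np.full(n_per_class, c))
    return np.concatenate(synth_x), np.concatenate(synth_y)

synth_images, synth_labels = synthesize(real_images, real_labels, 200, rng)

# The downstream model is fit on synthetic data only (nearest-centroid classifier).
centroids = np.stack([synth_images[synth_labels == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

# The deployed model never saw a real image, cutting the direct link to the
# sensitive data, yet it can still be evaluated on real data.
accuracy = np.mean([predict(x) == y for x, y in zip(real_images, real_labels)])
print(f"accuracy on real data: {accuracy:.2f}")
```

The key property illustrated is that no real sample enters the model's training set, which is what weakens membership-inference and reconstruction attacks against the released model.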



