Transferable adversarial sample purification by expanding the purification space of diffusion models
The Visual Computer (IF 3.0) Pub Date: 2024-02-13, DOI: 10.1007/s00371-023-03253-7
Jun Ji , Song Gao , Wei Zhou

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial samples, and many powerful defense methods have been proposed to enhance their adversarial robustness. However, these defenses often require adding regularization terms to the loss function or augmenting the training data, which typically involves modifying the target model and increases computational cost. In this paper, we propose a novel adversarial defense that leverages a diffusion model with a large purification space to purify potential adversarial samples, and we introduce two training strategies, termed PSPG and PDPG, to defend against different attacks. Our method preprocesses adversarial examples before they are fed into the target model and can therefore protect DNNs during inference; it requires no modification to the target model and can even protect already-deployed models. Extensive experiments on CIFAR-10 and ImageNet demonstrate that our method achieves good accuracy and transferability, providing effective protection for different models in various defense scenarios. Our code is available at: https://github.com/YNU-JI/PDPG.
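The defense described above is a preprocessing step: the input is purified by a diffusion process before reaching the unmodified target classifier. The sketch below illustrates that pipeline shape only; the `purify` function, its parameters, and the averaging loop are hypothetical stand-ins (a real implementation would apply the authors' trained diffusion denoiser, not the crude smoothing used here).

```python
import numpy as np

def purify(x, noise_level=0.25, steps=10, rng=None):
    """Toy stand-in for diffusion purification: perturb the input with
    Gaussian noise (forward diffusion), then iteratively pull the noisy
    image back into the valid pixel range (a crude proxy for the learned
    reverse denoising process)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_noisy = x + noise_level * rng.standard_normal(x.shape)
    x_hat = x_noisy
    for _ in range(steps):
        # a real diffusion model would apply a learned denoiser here
        x_hat = 0.9 * x_hat + 0.1 * np.clip(x_noisy, 0.0, 1.0)
    return np.clip(x_hat, 0.0, 1.0)

def defended_predict(classifier, x):
    """The target model is untouched; purification happens before it,
    so the defense can wrap an already-deployed classifier."""
    return classifier(purify(x))
```

Because the target model is only called after purification, this style of defense can be dropped in at inference time without retraining or altering the deployed network.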

Updated: 2024-02-13