No-Box Universal Adversarial Perturbations Against Image Classifiers via Artificial Textures
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2024-10-11, DOI: 10.1109/tifs.2024.3478828
Ningping Mou, Binqing Guo, Lingchen Zhao, Cong Wang, Yue Zhao, Qian Wang

Recent advances in adversarial attack research have seen a transition from white-box to black-box and even no-box threat models, greatly enhancing the practicality of these attacks. However, existing no-box attacks focus on instance-specific perturbations, leaving the more powerful universal adversarial perturbations (UAPs) unexplored. This study addresses a crucial question: can UAPs be generated under a no-box threat model? Our findings provide an affirmative answer via a texture-based method. Artificially crafted textures can act as UAPs, termed Texture-Adv. With modest texture density and a fixed perturbation budget, Texture-Adv achieves an attack success rate of 80% under an $l_\infty = 10/255$ constraint. Texture-Adv also remains effective under traditional black-box threat models. Building on a phenomenon associated with dominant labels, we use Texture-Adv to develop a highly efficient decision-based attack strategy, named Adv-Pool. This approach creates and traverses a set of Texture-Adv instances with diverse classification distributions, reducing the average query budget to below 1.3, close to the one-query lower bound for decision-based attacks. Moreover, we empirically demonstrate that Texture-Adv, used as a starting point, improves the success rates of existing transfer attacks and the efficiency of decision-based attacks. This discovery suggests its potential as an effective initialization for various adversarial attacks while preserving the original constraints of their threat models.
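As a concrete illustration, the following is a minimal Python sketch, not the authors' implementation, of the two ideas the abstract describes: clipping an artificial texture to the $l_\infty = 10/255$ budget before overlaying it on an image, and an Adv-Pool-style traversal that queries a hard-label (decision-based) model with each pooled texture until the prediction flips. All names here (`apply_texture_uap`, `adv_pool_attack`, `uap_pool`, `query_model`) are hypothetical, and the pool of diverse textures is assumed to be precomputed offline.

```python
import numpy as np

def apply_texture_uap(image, texture, eps=10 / 255):
    """Overlay a texture-based UAP on an image under an l_inf budget.

    `image` and `texture` are float arrays of the same shape in [0, 1].
    """
    perturbation = np.clip(texture, -eps, eps)       # enforce l_inf <= eps
    return np.clip(image + perturbation, 0.0, 1.0)   # stay in valid pixel range

def adv_pool_attack(image, true_label, uap_pool, query_model, eps=10 / 255):
    """Traverse a pool of texture UAPs, querying a hard-label model until
    one flips the prediction. Returns (adversarial image, query count),
    or (None, query count) if every pooled texture fails.
    """
    queries = 0
    for texture in uap_pool:
        candidate = apply_texture_uap(image, texture, eps)
        queries += 1
        if query_model(candidate) != true_label:     # decision-based check
            return candidate, queries
    return None, queries
```

Because the pooled textures cover diverse classification distributions, ordering the pool by expected success rate would let most inputs succeed on the first query, which is consistent with the sub-1.3 average query budget the paper reports.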
