Toward Robust Adversarial Purification for Face Recognition Under Intensity-Unknown Attacks
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2024-10-03, DOI: 10.1109/tifs.2024.3473293
Keyizhi Xu, Zhan Chen, Zhongyuan Wang, Chunxia Xiao, Chao Liang

Recent years have witnessed dramatic progress in adversarial attacks, which can easily mislead face recognition systems by injecting imperceptible perturbations into the input image. Many defense methods have been proposed to mitigate the detrimental impact of adversarial attacks, including adversarial purification, which aims to reconstruct clean images through a generative model. This paper studies a more practical and challenging problem: how to defend face recognition systems against intensity-unknown, or even intensity-varying, adversarial attacks. We approach this difficult problem from the dimension of input resolution. Examining the performance of purification methods at various input resolutions, we reveal a phenomenon that higher-resolution input images help defend better against weaker attacks, while lower-resolution ones are naturally more robust to stronger attacks. This inspires us to design an adaptive purification framework for intensity-unknown attacks, dubbed adversarial Intensity-guided Multi-scale Attention (IMA). By aggregating information across resolution scales and flexibly adjusting according to an estimate of adversarial intensity, IMA leverages the respective advantages of different scales and constructs a robust ensemble against intensity-unknown attacks. We validate the superiority of IMA by defending against both face obfuscation and impersonation from 9 typical attack algorithms under gray-box, white-box, and black-box evaluation, outperforming state-of-the-art defense methods on the LFW and YTF datasets.
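The abstract does not specify IMA's architecture, so the following is only a hedged toy sketch of the general idea it describes, with hypothetical helper names (`purify_at_scale`, `intensity_attention`, `multiscale_purify`) that are not from the paper: each branch "purifies" at a different resolution (here a crude subsample-then-upsample stand-in for a generative purifier), and an estimated adversarial intensity shifts softmax attention weights from finer scales (better for weak attacks) toward coarser scales (better for strong attacks).

```python
import numpy as np

def purify_at_scale(image, step):
    """Toy stand-in purifier: subsample by `step`, then upsample back with
    nearest-neighbour repetition. Coarser steps discard more high-frequency
    content, where adversarial perturbations tend to live."""
    small = image[::step, ::step]
    up = np.repeat(np.repeat(small, step, axis=0), step, axis=1)
    return up[: image.shape[0], : image.shape[1]]

def intensity_attention(intensity, num_scales):
    """Hypothetical attention over scales: weight peaks at the finest scale
    when estimated intensity is near 0 (weak attack) and at the coarsest
    scale when it is near 1 (strong attack)."""
    positions = np.linspace(0.0, 1.0, num_scales)   # 0 = finest scale
    logits = -((positions - intensity) ** 2) / 0.1  # peak near the estimate
    w = np.exp(logits - logits.max())
    return w / w.sum()

def multiscale_purify(image, steps=(1, 2, 4), intensity=0.5):
    """Fuse the per-scale branches with intensity-conditioned weights."""
    branches = np.stack([purify_at_scale(image, s) for s in steps])
    weights = intensity_attention(intensity, len(steps))
    return np.tensordot(weights, branches, axes=1)
```

With a low intensity estimate (e.g. 0.1) the fine-resolution branch dominates; with a high estimate (e.g. 0.9) the coarse branch does, mirroring the paper's observation that lower resolutions are naturally defensive against stronger attacks.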

Updated: 2024-10-03