Semantic Deep Hiding for Robust Unlearnable Examples
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2024-07-01, DOI: 10.1109/tifs.2024.3421273
Ruohan Meng, Chenyu Yi, Yi Yu, Siyuan Yang, Bingquan Shen, Alex C. Kot

Ensuring data privacy and protection has become paramount in the era of deep learning. Unlearnable examples have been proposed to mislead deep learning models and prevent unauthorized exploitation of data by adding small perturbations to it. However, such perturbations (e.g., noise, texture, color change) predominantly affect low-level features, making them vulnerable to common countermeasures. In contrast, semantic images with intricate shapes carry a wealth of high-level features, making them more resilient to countermeasures and promising for producing robust unlearnable examples. In this paper, we propose a Deep Hiding (DH) scheme that adaptively hides semantic images enriched with high-level features. We employ an Invertible Neural Network (INN) to invisibly embed predefined images, inherently hiding them as deceptive perturbations. To enhance data unlearnability, we introduce a Latent Feature Concentration module, designed to work with the INN, which regularizes the intra-class variance of these perturbations. To further boost the robustness of unlearnable examples, we design a Semantic Images Generation module that produces the hidden semantic images. By exploiting shared semantic information, this module generates similar semantic images for samples within the same class, thereby enlarging the inter-class distance and narrowing the intra-class distance. Extensive experiments on CIFAR-10, CIFAR-100, and an ImageNet subset, against 18 countermeasures, show that the proposed method produces unlearnable examples with outstanding robustness, demonstrating its efficacy in preventing unauthorized data exploitation.
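For intuition, the Latent Feature Concentration idea, regularizing the intra-class variance of the hidden perturbations' latent features, can be sketched as a simple loss term. The following is a minimal PyTorch illustration under assumed tensor shapes; the function name intra_class_variance_loss and the exact formulation are assumptions made here for illustration, not the paper's implementation.

```python
import torch

def intra_class_variance_loss(latent_feats: torch.Tensor,
                              labels: torch.Tensor) -> torch.Tensor:
    """Mean intra-class variance of latent perturbation features.

    latent_feats: (N, D) latent features of the hidden perturbations
    labels:       (N,)   integer class labels of the cover images
    """
    total = latent_feats.new_zeros(())
    classes = labels.unique()
    for c in classes:
        feats_c = latent_feats[labels == c]          # (n_c, D) samples of class c
        if feats_c.shape[0] < 2:
            continue                                 # variance undefined for singletons
        center = feats_c.mean(dim=0, keepdim=True)   # class centroid, (1, D)
        total = total + ((feats_c - center) ** 2).mean()
    return total / classes.numel()

# Example: penalize perturbation spread within each of 10 classes.
feats = torch.randn(128, 256)           # hypothetical latent features
labels = torch.randint(0, 10, (128,))   # CIFAR-10-style labels
loss = intra_class_variance_loss(feats, labels)
```

Driving this variance toward zero makes the perturbations within a class nearly identical, which is consistent with the abstract's goal of narrowing intra-class distance while the Semantic Images Generation module enlarges inter-class distance.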
