Laserbeak: Evolving Website Fingerprinting Attacks With Attention and Multi-Channel Feature Representation
IEEE Transactions on Information Forensics and Security (IF 6.3) Pub Date: 2024-09-25, DOI: 10.1109/tifs.2024.3468171
Nate Mathews, James K. Holland, Nicholas Hopper, Matthew Wright

In this paper, we present Laserbeak, a new state-of-the-art website fingerprinting attack for Tor that achieves nearly 96% accuracy against FRONT-defended traffic by combining two innovations: 1) multi-channel traffic representations and 2) advanced techniques adapted from state-of-the-art computer vision models. Our work is the first to explore a range of different ways to represent traffic data for a classifier. We find a multi-channel input format that provides richer contextual information, enabling the model to learn robust representations even in the presence of heavy traffic obfuscation. We are also the first to examine how recent advances in transformer models can take advantage of these representations. Our novel model architecture utilizing multi-headed attention layers enhances the capture of both local and global patterns. By combining these innovations, Laserbeak demonstrates absolute performance improvements of up to 36.2% (e.g., from 27.6% to 63.8%) compared with prior attacks against defended traffic. Experiments highlight Laserbeak’s capabilities in multiple scenarios, including a large open-world dataset where it achieves over 80% recall at 99% precision on traffic obfuscated with padding defenses. These advances reduce the remaining anonymity in Tor against fingerprinting threats, underscoring the need for stronger defenses.
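To make the abstract's two innovations concrete, here is a minimal, illustrative sketch of (1) a multi-channel traffic representation built from a packet trace and (2) a toy multi-head self-attention pass over it. The specific channels (direction, inter-arrival time, size), dimensions, and weights below are assumptions for illustration only and are not taken from the paper's actual architecture.

```python
import numpy as np

def multi_channel_representation(timestamps, sizes, max_len=8):
    """Build a hypothetical multi-channel input from a packet trace.
    Channels assumed here: direction (+1/-1), inter-arrival time,
    absolute packet size. The paper's exact channel set may differ."""
    direction = np.sign(sizes).astype(float)
    iat = np.diff(timestamps, prepend=timestamps[0])   # inter-arrival times
    x = np.stack([direction, iat, np.abs(sizes)], axis=-1)  # (seq_len, 3)
    out = np.zeros((max_len, x.shape[1]))              # pad/truncate to fixed length
    n = min(len(x), max_len)
    out[:n] = x[:n]
    return out

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, d_model=8, num_heads=2, seed=0):
    """Toy scaled dot-product multi-head self-attention with random
    weights (illustrative; a trained model would learn these)."""
    rng = np.random.default_rng(seed)
    seq_len, n_channels = x.shape
    h = x @ (rng.standard_normal((n_channels, d_model)) * 0.1)  # channel embedding
    head_dim = d_model // num_heads
    Wq = rng.standard_normal((d_model, d_model)) * 0.1
    Wk = rng.standard_normal((d_model, d_model)) * 0.1
    Wv = rng.standard_normal((d_model, d_model)) * 0.1
    q, k, v = h @ Wq, h @ Wk, h @ Wv
    outs = []
    for i in range(num_heads):                          # each head attends independently
        s = slice(i * head_dim, (i + 1) * head_dim)
        scores = q[:, s] @ k[:, s].T / np.sqrt(head_dim)
        outs.append(softmax(scores) @ v[:, s])
    return np.concatenate(outs, axis=-1)                # (seq_len, d_model)

# Example: a three-packet trace (seconds, signed bytes)
trace = multi_channel_representation(np.array([0.0, 0.1, 0.25]),
                                     np.array([1500, -600, 1500]))
attended = multi_head_self_attention(trace)
```

The intuition mirrored here is the one the abstract states: stacking several channels gives the model richer context per position than a single direction sequence, and attention lets every position weigh both nearby and distant packets, capturing local and global patterns.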

Updated: 2024-09-25