Generating Location Traces With Semantic-Constrained Local Differential Privacy
IEEE Transactions on Information Forensics and Security (IF 6.3), Pub Date: 2024-10-14, DOI: 10.1109/tifs.2024.3480712
Xinyue Sun, Qingqing Ye, Haibo Hu, Jiawei Duan, Qiao Xue, Tianyu Wo, Weizhe Zhang, Jie Xu

Valuable information and knowledge can be learned from users’ location traces to support various location-based applications such as intelligent traffic control, incident response, and COVID-19 contact tracing. However, due to privacy concerns, no authority can simply collect users’ private location traces for mining or publishing. To address such concerns, local differential privacy (LDP) protects individual privacy by allowing each user to report a perturbed version of their data. Unfortunately, when applied to location traces, LDP cannot preserve their semantics because it treats all locations (i.e., various points of interest) as equally sensitive. This results in low utility when LDP mechanisms are used to collect location traces. In this paper, we address the challenge of collecting and sharing location traces with valuable semantics while providing sufficient privacy protection for participating users. We first propose semantic-constrained local differential privacy (SLDP), a new privacy model that provides a provable mathematical privacy guarantee while preserving desirable semantics. Then, we design a location trace perturbation mechanism (LTPM) that users can apply to perturb their traces in a way that satisfies SLDP. Finally, we propose a private location trace synthesis (PLTS) framework in which users use LTPM to perturb their traces before sending them to the collector, who aggregates the users’ perturbed data to generate location traces with valuable semantics. Extensive experiments on three real-world datasets demonstrate that our PLTS outperforms existing state-of-the-art methods by at least 21% in a range of real-world applications, such as spatial visiting queries and frequent pattern mining, under the same privacy leakage.
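To make the LDP baseline that the abstract contrasts against concrete, the sketch below shows generalized randomized response (k-RR), a standard ε-LDP mechanism that perturbs each point of a trace over the full set of points of interest. It is not the paper's LTPM; the function and variable names (`krr_perturb`, `poi_domain`) are illustrative assumptions. Because every POI is treated as equally sensitive, semantic distinctions (e.g., hospital versus cafe) are not preserved, which is the limitation SLDP is designed to address.

```python
import math
import random

def krr_perturb(true_location, domain, epsilon):
    """Generalized randomized response (k-RR) under epsilon-LDP:
    keep the true location with probability e^eps / (e^eps + k - 1),
    otherwise report one of the other k - 1 locations uniformly at random.
    Every location in the domain is treated as equally sensitive."""
    k = len(domain)
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < keep_prob:
        return true_location
    return random.choice([loc for loc in domain if loc != true_location])

# Hypothetical POI domain and user trace; each point is perturbed
# independently before being sent to the untrusted collector.
poi_domain = ["home", "hospital", "cafe", "office", "gym"]
trace = ["home", "cafe", "office", "hospital"]
perturbed_trace = [krr_perturb(loc, poi_domain, epsilon=1.0) for loc in trace]
print(perturbed_trace)
```

Note that perturbing each of the T points of a trace with budget ε composes to a Tε guarantee for the whole trace, so either the per-point budget must shrink or the noise must grow with trace length; this composition cost is one reason plain LDP yields low utility on location traces.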

Updated: 2024-10-14