Exploiting Trust for Resilient Hypothesis Testing With Malicious Robots
IEEE Transactions on Robotics (IF 9.4) | Pub Date: 2024-06-17 | DOI: 10.1109/tro.2024.3415235
Matthew Cavorsi, Orhan Eren Akgün, Michal Yemini, Andrea J. Goldsmith, Stephanie Gil

In this article, we develop a resilient binary hypothesis testing framework for decision making in adversarial multirobot crowdsensing tasks. This framework exploits stochastic trust observations between robots to arrive at tractable, resilient decision making at a centralized fusion center (FC) even when, first, there exist malicious robots in the network and their number may be larger than the number of legitimate robots, and second, the FC uses one-shot noisy measurements from all robots. We derive two algorithms to achieve this. The first is the two-stage approach (2SA) that estimates the legitimacy of robots based on received trust observations, and provably minimizes the probability of detection error in the worst-case malicious attack. For the 2SA, we assume that the proportion of malicious robots is known but arbitrary. For the case of an unknown proportion of malicious robots, we develop the adversarial generalized likelihood ratio test (A-GLRT) that uses both the reported robot measurements and trust observations to simultaneously estimate the trustworthiness of robots, their reporting strategy, and the correct hypothesis. We exploit particular structures in the problem to show that this approach remains computationally tractable even with unknown problem parameters. We deploy both algorithms in a hardware experiment where a group of robots conducts crowdsensing of traffic conditions subject to a Sybil attack on a mock-up road network. We extract the trust observations for each robot from communication signals, which provide statistical information on the uniqueness of the sender. We show that even when the malicious robots are in the majority, the FC can reduce the probability of detection error to 30.5% and 29% for the 2SA and the A-GLRT algorithms, respectively.
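The two-stage idea described above can be illustrated with a minimal sketch. This is not the paper's actual 2SA algorithm — the function name, trust model (beta-distributed trust observations), and majority-vote fusion rule are all simplifying assumptions for illustration — but it shows the core mechanism: first estimate which robots are legitimate from their trust observations, then fuse only the trusted robots' reports.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_decision(votes, trust_obs, frac_malicious):
    """Hypothetical sketch of a 2SA-style fusion rule (not the paper's exact
    algorithm): Stage 1 keeps the robots with the highest average trust,
    Stage 2 takes a majority vote over their reported measurements."""
    mean_trust = trust_obs.mean(axis=1)               # average trust per robot
    n_keep = int(round(len(votes) * (1 - frac_malicious)))
    trusted = np.argsort(mean_trust)[-n_keep:]        # most-trusted robots
    return int(votes[trusted].mean() >= 0.5)          # fused binary decision

# Toy data: 4 legitimate and 6 malicious robots, true hypothesis = 1.
# One legitimate report is flipped to mimic measurement noise; trust
# observations are drawn higher on average for legitimate robots.
votes = np.array([1, 1, 1, 0,  0, 0, 0, 0, 0, 1], dtype=float)
trust = np.vstack([rng.beta(8, 2, (4, 20)),    # legitimate: high trust
                   rng.beta(2, 8, (6, 20))])   # malicious: low trust
decision = two_stage_decision(votes, trust, frac_malicious=0.6)
print(decision)  # the FC recovers hypothesis 1 despite a malicious majority
```

Even though the malicious robots outnumber the legitimate ones, the trust observations let the fusion center discard most of the adversarial reports before voting, which is the property the 2SA exploits; the A-GLRT additionally handles the case where `frac_malicious` is unknown.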

Updated: 2024-06-17