Advancements in perception system with multi-sensor fusion for embodied agents
Information Fusion (IF 14.7) Pub Date: 2024-12-11, DOI: 10.1016/j.inffus.2024.102859
Hao Du, Lu Ren, Yuanda Wang, Xiang Cao, Changyin Sun

Multi-sensor data fusion perception, a pivotal technique for complex environmental perception and decision-making, has attracted extensive attention from researchers. To date, there has been no comprehensive review of the research progress in multi-sensor fusion perception systems for embodied agents, particularly one that analyzes an agent's perception of both itself and the surrounding scene. To address this gap and encourage further research, this study defines key terminology and analyzes datasets from the past two decades, focusing on advancements in multi-sensor fusion SLAM and multi-sensor scene perception. These analyses can help researchers gain a better understanding of the field and initiate research on multi-sensor fusion perception for embodied agents. In this survey, we begin with a brief introduction to common sensor types and their characteristics. We then examine multi-sensor fusion perception datasets tailored to autonomous driving, drones, unmanned ground vehicles, and unmanned surface vehicles. Next, we discuss the classification and fundamental principles of existing multi-sensor data fusion SLAM algorithms and present experimental results for several classical fusion frameworks. We then comprehensively review multi-sensor data fusion scene perception technologies, including object detection, semantic segmentation, instance segmentation, and panoramic understanding. Finally, we summarize our findings and discuss potential future developments in multi-sensor fusion perception technology.
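To give a flavor of the fusion idea at the core of such perception systems, the sketch below fuses two noisy measurements of the same quantity by inverse-variance weighting, the scalar form of the Kalman update that underlies many classical fusion frameworks. This is an illustrative example only; the sensor names and all numeric values are hypothetical and not drawn from the survey.

```python
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two independent Gaussian estimates of the same quantity.

    Each estimate is weighted by the other's variance, so the less
    noisy sensor dominates; the fused variance is always smaller
    than either input variance.
    """
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Hypothetical readings: a precise LiDAR range and a noisier radar range (m, m^2).
lidar_range, lidar_var = 10.2, 0.04
radar_range, radar_var = 10.8, 0.36
est, est_var = fuse(lidar_range, lidar_var, radar_range, radar_var)
print(f"fused range: {est:.3f} m, variance: {est_var:.4f} m^2")
```

Because the LiDAR variance is nine times smaller here, the fused estimate lands close to the LiDAR reading while still tightening the overall uncertainty, which is the basic payoff multi-sensor fusion offers over any single sensor.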

Updated: 2024-12-11