A LiDAR-depth camera information fusion method for human robot collaboration environment
Information Fusion (IF 14.7) | Pub Date: 2024-09-26 | DOI: 10.1016/j.inffus.2024.102717
Zhongkang Wang, Pengcheng Li, Qi Zhang, Longhui Zhu, Wei Tian
With the evolution of human–robot collaboration in advanced manufacturing, multisensor integration has increasingly become a critical component for ensuring safety during human–robot interactions. Given the disparities in range scales, densities, and arrangement patterns among multisensor data, such as those from depth cameras and LiDAR, accurately fusing information from multiple sources has emerged as a pressing need to safeguard human–robot safety. This paper focuses on LiDAR and depth cameras, addressing the differences in data collection range, point density, and distribution patterns that complicate information fusion. We propose a heterogeneous sensor information fusion method for human–robot collaborative environments. To solve the problem of substantial differences in point cloud range scales, a moving sphere space coarse localization algorithm is introduced, narrowing down the scale of interest based on similar features. Furthermore, to address the challenge of significant density differences and low overlap rates between point clouds, we present an improved FPFH coarse registration algorithm based on overlap ratio and an enhanced ICP fine registration algorithm based on the generation of corresponding points. The proposed method is applied to the fusion of information from a 64-line LiDAR and a depth camera within a human–robot collaboration scene. Experimental results demonstrate an absolute translational accuracy of 4.29 cm and an absolute rotational accuracy of 0.006 rad, meeting the requirements for heterogeneous sensor information fusion in the context of human–robot collaboration.
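For orientation, the sketch below shows the generic, unimproved version of the coarse-to-fine pipeline the abstract describes, using Python and Open3D: a spherical region-of-interest crop standing in for the paper's moving-sphere coarse localization, standard FPFH + RANSAC coarse registration, and point-to-plane ICP refinement. This is a minimal baseline, not the authors' method: the overlap-ratio FPFH variant and the corresponding-point ICP enhancement are not reproduced, and the file names, ROI parameters, and voxel size are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

VOXEL = 0.02  # downsampling voxel size in metres (illustrative assumption)

def crop_sphere(pcd, center, radius):
    """Keep only points inside a sphere -- a stand-in for the paper's
    moving-sphere coarse localization, which narrows the large LiDAR
    cloud to the depth camera's much smaller working volume."""
    pts = np.asarray(pcd.points)
    mask = np.linalg.norm(pts - center, axis=1) < radius
    return pcd.select_by_index(np.where(mask)[0])

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=10 * VOXEL, max_nn=100))
    return down, fpfh

def register(lidar_pcd, camera_pcd, roi_center, roi_radius):
    # Step 1: spherical ROI restricts the wide-range LiDAR cloud.
    lidar_roi = crop_sphere(lidar_pcd, roi_center, roi_radius)

    src, src_fpfh = preprocess(camera_pcd)
    tgt, tgt_fpfh = preprocess(lidar_roi)

    # Step 2: FPFH + RANSAC coarse registration (stock Open3D variant,
    # not the paper's overlap-ratio-based improvement).
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, 3 * VOXEL,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [o3d.pipelines.registration.CorrespondenceCheckerBasedOnEdgeLength(0.9),
         o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * VOXEL)],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Step 3: point-to-plane ICP refinement seeded with the coarse result
    # (the paper instead enhances ICP by generating corresponding points).
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, 1.5 * VOXEL, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation

if __name__ == "__main__":
    lidar = o3d.io.read_point_cloud("lidar_scan.pcd")    # hypothetical file
    camera = o3d.io.read_point_cloud("depth_cam.pcd")    # hypothetical file
    # ROI centre/radius are illustrative; in the paper they come from the
    # moving-sphere search over similar features.
    T = register(lidar, camera, np.array([1.0, 0.0, 0.5]), 1.5)
    print(T)
```

Registering the dense, short-range depth camera cloud onto the cropped LiDAR cloud (rather than the reverse) keeps the source cloud inside the target's support, which matters when the overlap between the two sensors is low.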
Updated: 2024-09-26
随着先进制造业中人机协作的发展,多传感器集成已日益成为确保人机交互安全的关键组件。鉴于多传感器数据(例如来自深度相机和 LiDAR 的数据)在距离尺度、密度和排列模式方面的差异,准确融合来自多个来源的信息已成为保障人机安全的迫切需求。本文重点介绍 LiDAR 和深度相机,解决了数据收集范围、点密度和分布模式差异带来的挑战,这些挑战使信息融合复杂化。我们提出了一种用于人机协作环境的异构传感器信息融合方法。为了解决点云距离尺度存在实质性差异的问题,该文引入一种移动球体空间粗略定位算法,在相似特征的基础上缩小感兴趣尺度。此外,为了解决点云之间密度差异显著和重叠率低的挑战,我们提出了一种基于重叠率的改进 FPFH 粗配算法和基于相应点生成的增强型 ICP 精细配准算法。本文提出的方法应用于人机协作场景中来自 64 线 LiDAR 和深度相机的信息融合。实验结果表明,绝对平移精度为 4.29 cm,绝对旋转精度为 0.006 rad,满足人机协作背景下异构传感器信息融合的要求。