VNI-Net: Vector neurons-based rotation-invariant descriptor for LiDAR place recognition
ISPRS Journal of Photogrammetry and Remote Sensing (IF 10.6), Pub Date: 2024-10-01, DOI: 10.1016/j.isprsjprs.2024.09.011
Gengxuan Tian, Junqiao Zhao, Yingfeng Cai, Fenglin Zhang, Xufei Wang, Chen Ye, Sisi Zlatanova, Tiantian Feng
Despite the emergence of various LiDAR-based place recognition methods, place recognition failure caused by rotation remains a critical challenge. Existing studies have attempted to address this limitation through specific training strategies involving data augmentation and rotation-invariant networks. However, augmenting over all 3D rotations (SO(3)) is impractical for the former, while the latter primarily focuses on the reduced problem of 2D rotation (SO(2)) invariance. Existing methods targeting SO(3) rotation invariance suffer from limited discriminative capability. In this paper, we propose a novel approach (VNI-Net) based on the Vector Neurons Network (VNN) to achieve SO(3) rotation invariance. Our method first extracts rotation-equivariant features from neighboring points and projects these low-dimensional features into a high-dimensional space using VNN. We then compute both Euclidean and cosine distances in the rotation-equivariant feature space to obtain rotation-invariant features. Finally, we aggregate these features with generalized-mean (GeM) pooling to generate the global descriptor. To mitigate the significant information loss incurred when formulating rotation-invariant features, we propose computing distances between features at different layers within the Euclidean-space neighborhood. This approach significantly enhances the discriminability of the descriptors while maintaining computational efficiency. We conduct experiments on multiple publicly available datasets captured with vehicle-mounted, drone-mounted, and handheld LiDAR sensors. VNI-Net outperforms baseline methods by up to 15.3% on datasets with rotation, while achieving results comparable to state-of-the-art place recognition methods on datasets with less rotation. Our code is open-sourced at https://github.com/tiev-tongji/VNI-Net.
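The core idea of the abstract — turning rotation-equivariant vector-neuron features into rotation-invariant ones via Euclidean/cosine distances, then aggregating with GeM pooling — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see their repository for that): the feature shapes, the GeM exponent `p`, and the choice of which pairwise invariants to keep are assumptions made here purely to demonstrate why the resulting descriptor is SO(3)-invariant.

```python
import numpy as np

def random_rotation(rng):
    """Sample a random matrix from SO(3) via QR decomposition."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one column to ensure det = +1
    return q

def invariant_features(V, eps=1e-8):
    """V: (N, C, 3) rotation-equivariant features, one 3-vector per
    channel per point (as produced by a vector-neuron layer).
    Rotating the input maps V -> V @ R.T, which leaves all inner
    products between channel vectors unchanged, so the norms and the
    pairwise Euclidean / cosine distances below are SO(3)-invariant."""
    gram = np.einsum('nci,nki->nck', V, V)       # (N, C, C) inner products
    sq_norms = np.einsum('ncc->nc', gram)        # (N, C) squared norms
    # squared Euclidean distance between channel vectors c and k
    sq_dist = sq_norms[:, :, None] + sq_norms[:, None, :] - 2.0 * gram
    dist = np.sqrt(np.maximum(sq_dist, 0.0))
    cos = gram / (np.sqrt(sq_norms[:, :, None] * sq_norms[:, None, :]) + eps)
    iu = np.triu_indices(V.shape[1], k=1)        # each unordered pair once
    return np.concatenate(
        [np.sqrt(sq_norms), dist[:, iu[0], iu[1]], cos[:, iu[0], iu[1]]],
        axis=1)                                   # (N, D) invariant features

def gem_pool(F, p=3.0, eps=1e-6):
    """Generalized-mean (GeM) pooling over the point dimension;
    values are clamped at eps so the fractional power is defined."""
    return np.mean(np.maximum(F, eps) ** p, axis=0) ** (1.0 / p)

def descriptor(V):
    """L2-normalized global descriptor from equivariant features."""
    g = gem_pool(invariant_features(V))
    return g / (np.linalg.norm(g) + 1e-12)
```

Because the Gram matrix of `V @ R.T` equals that of `V` exactly, `descriptor(V)` and `descriptor(V @ R.T)` agree up to floating-point rounding for any rotation `R` — which is the invariance property the paper relies on for place recognition under arbitrary sensor orientation.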
Updated: 2024-10-01