Enhancing Visual Autonomous Navigation in Row Crops Through Effective Synthetic Data Generation
Precision Agriculture (IF 5.4), Pub Date: 2024-06-11, DOI: 10.1007/s11119-024-10157-6. Mauro Martini, Marco Ambrosio, Alessandro Navone, Brenno Tuberga, Marcello Chiaberge
Introduction
Service robotics has recently been enhancing precision agriculture, enabling many automated processes built on efficient autonomous navigation solutions. However, the cost of data generation and of in-field validation campaigns hinders the progress of large-scale autonomous platforms. Simulated environments and deep visual perception are emerging as effective tools to speed up the development of robust navigation with low-cost RGB-D cameras.
Materials and methods
In this context, the contribution of this work is a complete framework for fully exploiting synthetic data in the robust visual control of mobile robots. A wide, realistic multi-crop dataset is accurately generated to train deep semantic segmentation networks, enabling robust performance in challenging real-world conditions. An automatic parametric approach allows easy customization of the virtual field geometry and features for a fast, reliable evaluation of navigation algorithms.
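The abstract does not detail how the parametric field generation works; the sketch below illustrates one plausible parameterisation of a virtual crop field (row count, row and plant spacing, positional jitter). All names, defaults, and the jitter model are illustrative assumptions, not the authors' implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class FieldParams:
    # All values are hypothetical defaults for illustration only.
    n_rows: int = 4            # number of crop rows
    row_spacing: float = 1.5   # metres between adjacent rows
    plant_spacing: float = 0.4 # metres between plants along a row
    row_length: float = 20.0   # metres
    jitter: float = 0.05       # uniform positional noise per plant (metres)

def generate_field(params: FieldParams, seed: int = 0):
    """Return (x, y) plant positions for a parametric virtual field."""
    rng = random.Random(seed)  # seeded for reproducible field layouts
    plants = []
    n_plants = int(params.row_length / params.plant_spacing)
    for r in range(params.n_rows):
        y = r * params.row_spacing
        for p in range(n_plants):
            x = p * params.plant_spacing
            # Perturb the nominal grid position to mimic natural planting variation.
            plants.append((x + rng.uniform(-params.jitter, params.jitter),
                           y + rng.uniform(-params.jitter, params.jitter)))
    return plants

field = generate_field(FieldParams())
print(len(field))  # → 200 (4 rows x 50 plants)
```

Exposing the geometry as a small set of parameters like this is what makes it cheap to sweep over field configurations when evaluating a navigation algorithm.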
Results and conclusion
The high quality of the generated synthetic dataset is demonstrated through extensive experimentation with real crop images and by benchmarking the resulting robot navigation, with relevant metrics, in both virtual and real fields.
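The abstract does not name the navigation metrics used; a common choice for row-crop navigation is the cross-track error of the robot's trajectory with respect to the row centreline. The sketch below, an assumption rather than the paper's method, computes signed cross-track errors and their mean absolute value.

```python
import math

def cross_track_errors(trajectory, line_point, line_dir):
    """Signed perpendicular distance of each (x, y) pose from a reference line.

    line_point: any point on the row centreline; line_dir: its direction vector.
    """
    norm = math.hypot(line_dir[0], line_dir[1])
    dx, dy = line_dir[0] / norm, line_dir[1] / norm
    errs = []
    for (x, y) in trajectory:
        rx, ry = x - line_point[0], y - line_point[1]
        # 2-D cross product of the pose offset with the unit direction:
        # magnitude is the perpendicular distance, sign encodes the side.
        errs.append(rx * dy - ry * dx)
    return errs

def mae(errs):
    """Mean absolute error, a scalar summary suitable for benchmarking."""
    return sum(abs(e) for e in errs) / len(errs)

errors = cross_track_errors([(0.0, 0.1), (1.0, -0.1), (2.0, 0.0)],
                            line_point=(0.0, 0.0), line_dir=(1.0, 0.0))
print(round(mae(errors), 3))  # → 0.067
```

Because the same metric can be computed against the known row geometry of a virtual field or a surveyed real one, it allows a like-for-like comparison of navigation performance across the two settings.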