Detection and localization of citrus picking points based on binocular vision
Precision Agriculture (IF 5.4) Pub Date: 2024-07-28, DOI: 10.1007/s11119-024-10169-2
Chaojun Hou, Jialiang Xu, Yu Tang, Jiajun Zhuang, Zhiping Tan, Weilin Chen, Sheng Wei, Huasheng Huang, Mingwei Fang

Accurate localization of picking points in non-structural environments is crucial for the intelligent picking of ripe citrus with a harvesting robot. However, citrus pedicels are small and similar in color to other background objects, making it challenging to detect and localize the picking point of citrus fruits. This work presents a novel approach for detecting and localizing citrus picking points using binocular vision. First, the convolutional block attention module (CBAM) is integrated into the backbone network of Mask R-CNN to strengthen feature extraction for citrus pedicels, and a soft non-maximum suppression (Soft-NMS) strategy is used in the region proposal network to enhance pedicel detection performance. Second, to accurately associate each citrus fruit with the best detected pedicel, a maximum discrimination criterion is proposed that integrates the confidence score of the detected pedicel with the degree of positional connectivity between the pedicel and the fruit. Finally, to reduce matching errors and improve computational efficiency, a rapid and robust matching method based on normalized cross-correlation is applied to search for the picking point within the line segment between the left and right images. The experimental results show that the precision, recall, and F1-score for pedicel detection are 95.04%, 88.11%, and 91.44%, respectively, improvements of 13.00%, 7.84%, and 10.30% over the original Mask R-CNN. The mean absolute error (MAE) of citrus picking-point localization is 8.63 mm and the mean relative error (MRE) is 2.76%; the MRE is at least 1.2% lower than that of the belief propagation (BP), semi-global block matching (SGBM), and block matching (BM) stereo matching methods. This study provides an effective method for the precise detection and localization of citrus picking points for a harvesting robot.
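The paper does not include code, but the CBAM block it inserts into the Mask R-CNN backbone follows a standard design: channel attention followed by spatial attention applied to a backbone feature map. The minimal PyTorch sketch below illustrates that structure; the reduction ratio and kernel size are common defaults assumed here, not values taken from the paper.

```python
# Minimal sketch of a CBAM block (channel attention, then spatial attention)
# of the kind the authors report adding to the Mask R-CNN backbone.
# Reduction ratio and spatial kernel size are illustrative assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))           # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))            # global max pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)            # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)             # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM(nn.Module):
    """Refine a backbone feature map: channel attention, then spatial attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))
```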
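The maximum discrimination criterion fuses each candidate pedicel's detection confidence with its positional connectivity to the fruit, but the abstract does not give the exact formula. The sketch below is a hypothetical weighted-sum version, using the normalized offset between the pedicel box and the top of the fruit box as the connectivity term; the weight `alpha` and the distance normalization are assumptions for illustration only.

```python
import numpy as np


def associate_pedicel(fruit_box, pedicel_boxes, pedicel_scores, alpha=0.5):
    """Select the pedicel maximizing a weighted sum of detection confidence
    and positional connectivity to the fruit.

    Boxes are (x1, y1, x2, y2) in pixels; pedicel_scores are detector
    confidences in [0, 1]. alpha and the connectivity measure are
    illustrative, not the paper's exact definition.
    """
    fx1, fy1, fx2, fy2 = fruit_box
    fruit_cx = 0.5 * (fx1 + fx2)
    fruit_w = max(fx2 - fx1, 1.0)

    best_idx, best_score = -1, -np.inf
    for i, ((px1, py1, px2, py2), conf) in enumerate(zip(pedicel_boxes, pedicel_scores)):
        # A pedicel sitting directly above the fruit (small horizontal offset
        # from the fruit centre, small gap to the fruit top) scores close to 1.
        dx = abs(0.5 * (px1 + px2) - fruit_cx) / fruit_w
        dy = abs(fy1 - py2) / fruit_w
        connectivity = np.exp(-(dx + dy))
        score = alpha * conf + (1.0 - alpha) * connectivity
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score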
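For localization, the picking point detected in the left image is matched in the right image by normalized cross-correlation restricted to a short search segment, and the 3-D position is then recovered by triangulation. The sketch below assumes rectified grayscale images (so the search runs along the same image row), a pinhole model with focal length in pixels and baseline in millimetres, and a principal point at the image centre; the window size and disparity range are illustrative, not the paper's settings.

```python
import numpy as np


def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation of two equally sized patches."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)


def locate_picking_point(left, right, point, f, baseline, half_win=7, max_disp=128):
    """Match the picking point (u, v) from the rectified left image along the
    same row of the right image with NCC, then triangulate its 3-D position
    in the left camera frame."""
    u, v = point
    template = left[v - half_win:v + half_win + 1,
                    u - half_win:u + half_win + 1]

    best_disp, best_score = None, -1.0
    for d in range(max_disp):
        ur = u - d                                   # candidate column in the right image
        if ur - half_win < 0:
            break
        candidate = right[v - half_win:v + half_win + 1,
                          ur - half_win:ur + half_win + 1]
        score = ncc(template, candidate)
        if score > best_score:
            best_disp, best_score = d, score

    if not best_disp:                                # no reliable (non-zero) disparity
        return None
    z = f * baseline / best_disp                     # depth from disparity
    x = (u - left.shape[1] / 2) * z / f              # assumes principal point at image centre
    y = (v - left.shape[0] / 2) * z / f
    return np.array([x, y, z]), best_score
```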




Updated: 2024-07-28