Research on improvement strategies for a lightweight multi-object weed detection network based on YOLOv5
Crop Protection (IF 2.5) Pub Date: 2024-08-26, DOI: 10.1016/j.cropro.2024.106912
Jiandong Sun, Jinlong You, Fengmei Li, Jianhong Sun, Mengjiao Yang, Xueguan Zhao, Ning Jin, Haoran Bai

Traditional weed detection technology has several limitations, including low detection accuracy, substantial computational demands, and large model size. To meet the requirements of multi-target weed identification and portability, this study proposes the YOLO–WEED model for weed recognition. The proposed model has the following innovations: (1) the standard convolution modules in the YOLOv5 backbone were replaced by the lightweight MobileNetv3 network to simplify the network structure and reduce parameter complexity; (2) a convolutional block attention module (CBAM) was added to the neck network so that the model focuses on the most informative features while filtering out noise and irrelevant information; (3) to further improve classification accuracy and reduce loss, the C2f module was used to improve the C3 module in the neck network; and (4) during result plotting, a coordinate variable was added to the box labels to help the model locate weeds accurately. Six species of weeds and one crop were used as test subjects. After image enhancement techniques were applied, ablation experiments were conducted. The results indicated that the YOLO–WEED model achieved an average accuracy of 92.5% in identifying the six weed species and one crop. The accuracies for the individual plant types were 82.7%, 97.3%, 98.8%, 86%, 93.5%, 99.3% and 89.6%, respectively. The number of model parameters was reduced by 39.4% compared with YOLOv5s, and the localisation, classification and objectness losses were reduced by 0.025, 0.005 and 0.014, respectively. The optimised model was deployed on a Jetson mobile terminal for multi-target detection, and its performance surpassed that of six network models, including YOLOv5s.
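To make innovation (2) concrete, the sketch below shows a standard CBAM block in PyTorch of the kind typically inserted into a YOLOv5 neck. It is an illustrative re-implementation rather than the authors' code: the reduction ratio, spatial kernel size, feature-map shape and insertion point are assumptions, since the abstract does not specify them.

```python
# Minimal PyTorch sketch of a CBAM block (channel attention followed by
# spatial attention). Layer sizes are illustrative assumptions, not taken
# from the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale                      # reweight channels


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)     # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)      # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                      # reweight spatial positions


class CBAM(nn.Module):
    """Channel attention then spatial attention, applied sequentially."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention(kernel_size)

    def forward(self, x):
        return self.sa(self.ca(x))


if __name__ == "__main__":
    # Example: wrap a hypothetical 256-channel neck feature map (P3 scale).
    feat = torch.randn(1, 256, 80, 80)
    print(CBAM(256)(feat).shape)  # torch.Size([1, 256, 80, 80])
```

The channel-first, spatial-second ordering follows the original CBAM design; inserting such a block after a neck fusion stage leaves tensor shapes unchanged, which is why it can be added to YOLOv5 without altering the surrounding layers.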

Updated: 2024-08-26