Damage detection in concrete structures with multi-feature backgrounds using the YOLO network family
Automation in Construction (IF 9.6), Pub Date: 2024-12-01, DOI: 10.1016/j.autcon.2024.105887
Rakesh Raushan, Vaibhav Singhal, Rajib Kumar Jha

Image processing and convolutional neural networks (CNNs) are widely used for structural damage assessment. Past studies have commonly trained and tested CNN models on datasets in which the damage appears against similar backgrounds, so these models often fail to detect damage in images of real infrastructure. A dataset of 3750 real images with annotations is created, covering diverse backgrounds with varying textures, colours, and architectural elements such as windows and doors. This study evaluates the performance of You Only Look Once (YOLO) models (v3-v10) on the created dataset, training them in three distinct scenarios: scenario 1 (instances of damage ≤5), scenario 2 (instances of damage >5), and scenario 3 (the complete dataset). The YOLO models show promising results in detecting and locating damage in images with multi-featured backgrounds, with YOLOv4 achieving the best precision of 92.2 %, a recall of 86.8 %, and an F1 score of 88.9 %.
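To make the experimental setup more concrete, the sketch below (not the authors' code) shows how the three training scenarios could be formed and how the reported metrics relate to detection counts. It assumes YOLO-style annotation files (one bounding box per line) and that the scenario split is based on the number of annotated damage instances per image; the directory layout and function names are illustrative only.

```python
# Minimal sketch, assuming per-image instance counts drive the scenario split
# and YOLO-style .txt label files (one annotated bounding box per line).
from pathlib import Path


def count_instances(label_file: Path) -> int:
    """Number of annotated damage instances in one label file."""
    return sum(1 for line in label_file.read_text().splitlines() if line.strip())


def split_scenarios(label_dir: str) -> dict:
    """Scenario 1: images with <=5 damage instances; scenario 2: >5; scenario 3: all."""
    s1, s2 = [], []
    for label_file in Path(label_dir).glob("*.txt"):
        (s1 if count_instances(label_file) <= 5 else s2).append(label_file.stem)
    return {"scenario_1": s1, "scenario_2": s2, "scenario_3": s1 + s2}


def detection_metrics(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall and F1 from true/false positives and false negatives."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    # Plugging the reported YOLOv4 precision (92.2 %) and recall (86.8 %) into the
    # standard F1 formula gives about 0.894, in the ballpark of the stated 88.9 %;
    # the exact value depends on how the paper aggregates scores.
    p, r = 0.922, 0.868
    print(f"F1 from reported precision/recall: {2 * p * r / (p + r):.3f}")
```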

Updated: 2024-12-01