Self-supervised monocular depth estimation on construction sites in low-light conditions and dynamic scenes
Automation in Construction (IF 9.6) · Pub Date: 2024-11-05 · DOI: 10.1016/j.autcon.2024.105848
Jie Shen, Ziyi Huang, Lang Jiao

Estimating construction-scene depth from a single image is crucial for various downstream tasks. Self-supervised monocular depth estimation methods have recently achieved impressive results and demonstrated state-of-the-art performance. However, the low-light conditions and dynamic scenes found on construction sites pose significant challenges to these methods, hindering their practical deployment. Therefore, an architecture called LLD-Depth is presented to address these challenges. It comprises an improved ForkGAN model that generates paired low-light images from clear-day images; a new unified learning method that jointly estimates monocular depth, motion flow, camera ego-motion, and camera intrinsic parameters; and a training framework for effective monocular depth estimation under both low-light and clear-day conditions. Finally, the effectiveness of monocular depth estimation in construction scenes is verified. LLD-Depth improves relative mean error by 16.67% and 20.17% for clear-day and low-light scenes, respectively, and average order accuracy by 2.60% and 1.80%, achieving state-of-the-art performance.
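The unified learning method described above builds on the standard self-supervised objective for monocular depth: warp a neighbouring source frame into the target view using the predicted depth, camera pose, and intrinsics, then penalize the photometric difference. The sketch below is a minimal numpy illustration of that core reprojection loss (nearest-neighbour sampling, grayscale images); all function and variable names are assumptions, not the authors' code, which additionally handles motion flow for dynamic objects and learns the intrinsics.

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift every pixel of the target frame to a 3-D point using its depth."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1).astype(float)
    return (K_inv @ pix) * depth.reshape(1, -1)          # 3 x N points

def photometric_loss(tgt, src, depth, K, T):
    """L1 photometric reprojection error: warp the source frame into the
    target view via predicted depth and relative pose T (4x4), then compare.
    Hypothetical minimal sketch; real pipelines use differentiable bilinear
    sampling and SSIM, and mask out dynamic regions."""
    h, w = depth.shape
    pts = backproject(depth, np.linalg.inv(K))            # 3 x N
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])  # homogeneous coords
    proj = K @ (T @ pts_h)[:3]                            # project into source view
    u = np.round(proj[0] / proj[2]).astype(int)           # source pixel columns
    v = np.round(proj[1] / proj[2]).astype(int)           # source pixel rows
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)       # in-bounds reprojections
    warped = src[v[valid], u[valid]]                      # nearest-neighbour sample
    return np.abs(tgt.reshape(-1)[valid] - warped).mean()
```

With a perfect depth map and the identity pose, the warped source equals the target and the loss vanishes; during training, the gradient of this error drives the depth, pose, and intrinsics networks without any ground-truth depth labels.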

Updated: 2024-11-05