
Application of drones in crack measurement

Advisors: 黃仲偉, 莊清鏘
The full text will be available for download on 2025/09/03.

Abstract


Traditional bridge inspection relies mainly on the experience and expertise of inspectors. However, visual inspection has many shortcomings: in addition to requiring long survey times, it is often affected by the inspectors' subjective judgment and lacks a unified standard. To overcome the shortcomings of manual inspection, this study uses an unmanned aerial vehicle (UAV) combined with deep learning for bridge inspection. For regions that are not easily accessible to inspectors, photographs are taken by UAV. On the one hand, the YOLO deep-learning detector is used to automatically locate and count cracks. On the other hand, for the detected cracks, this study compares the precision of crack length and width measured by three approaches: UAV with the Global Navigation Satellite System (GNSS), UAV with Real-Time Kinematic (RTK) positioning, and UAV with a red-laser calibration system. Numerical results show that the YOLO system, trained on 1,000 images, achieves a crack-detection accuracy of 96.5%, higher than that of traditional edge-detection methods such as the Canny and Sobel operators. For crack measurement, this study designed seven cracks of different widths and analyzed them with the UAV at distances of 50 cm, 100 cm, and 150 cm. The results show that at 50 cm and 100 cm, the precision of the red-laser calibration system is much higher than that of the other two approaches; at 150 cm, when the crack width exceeds 1.65 mm, the precision of the RTK point-cloud data is the best of the three. Overall, the precision of the red-laser calibration system improves as the image resolution increases, whereas the resolution of GNSS and RTK has an upper limit.
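The laser-calibration approach described above reduces to a simple scale conversion: a laser pattern of known physical size is projected onto the surface, its apparent size in pixels fixes the image scale at that distance, and the crack's pixel width is converted to millimetres with the same scale. The sketch below illustrates this idea only; the function name and the numbers are illustrative assumptions, not values from the thesis.

```python
def crack_width_mm(crack_pixels: float, laser_pixels: float, laser_mm: float) -> float:
    """Convert a crack width from pixels to millimetres using a laser
    reference of known physical size visible in the same image.

    crack_pixels: crack width in the image, in pixels
    laser_pixels: apparent size of the laser reference, in pixels
    laser_mm:     true physical size of the laser reference, in mm
    """
    if laser_pixels <= 0:
        raise ValueError("laser reference must span at least one pixel")
    mm_per_pixel = laser_mm / laser_pixels  # image scale at the crack's distance
    return crack_pixels * mm_per_pixel

# Hypothetical numbers: two laser dots 50 mm apart appear 400 px apart,
# and the crack spans 12 px, so its width is 12 * 50/400 = 1.5 mm.
print(crack_width_mm(12, 400, 50.0))  # 1.5
```

Because the scale is re-estimated from the laser spots in every photograph, the conversion stays valid as the UAV's standoff distance changes; this is also why the method's precision tracks the camera's resolution, as noted in the abstract.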

References


Ellenberg, A., Kontsos, A., Moon, F., and Bartoli, I. (2016), “Bridge related damage quantification using unmanned aerial vehicle imagery,” Structural Control and Health Monitoring, 23, pp. 1168–1179.
Kabir, S., Rivard, P., He, D.-C., and Thivierge, P. (2009), “Damage assessment for concrete structure using image processing techniques on acoustic borehole imagery,” Construction and Building Materials, 23(10), pp. 3166–3174.
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016), “You Only Look Once: Unified, Real-Time Object Detection,” Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788.
Rohrer, B. (2016), “How Do Convolutional Neural Networks Work?” [Online]. Available: https://brohrer.github.io/how_convolutional_neural_networks_work.html.
Zhang, L., Sun, Y., Li, M., and Zhang, H. (2004), “Automated red-eye detection and correction in digital photographs,” Proc. of the 2004 International Conference on Image Processing (ICIP), pp. 2363–2366.
