
4D-BIM and reality model-driven camera placement optimization for construction monitoring

Advisor: 林之謙

Abstract


In recent years, continued advances in data-collection capability and the falling cost of deploying sensor networks have driven steady growth in sensor applications in civil engineering and construction. Integrated with BIM technology, these sensors enable a Digital Twin with real-time data collection, providing the most up-to-date information from the job site and helping on-site personnel make the most effective decisions.

Among the sensor types currently deployed on construction sites, cameras provide rich visual data in the form of high-resolution images and video, widely used for progress documentation, site security, and safety-related risk management. As the complexity and scale of construction sites grow, the volume of useful data that cameras can collect increases accordingly, raising the time and monetary cost of traditional manual monitoring and subsequent data management. Prior studies have applied computer vision and deep learning to various engineering problems, such as automated progress detection, worker-safety analysis, and productivity analysis. However, visual data quality can be degraded by on-site occlusions, insufficient coverage, and small target-object sizes. Traditionally, these effects are mitigated by having experienced professionals plan camera placement, a step that remains time-consuming even for experts.

This study proposes a camera placement optimization framework that combines BIM and reality models, using the optimal detection distances derived from visual deep-learning models as a reference for on-site personnel when purchasing and placing cameras. The framework first identifies camera-placement parameters through expert interviews and questionnaires; it then extracts the accuracy-versus-object-area relationship from an RCNN network to determine the optimal detection distance for various types of construction-site equipment. Next, the identified placement parameters are formulated into a mathematical model, objective functions, and constraints, and a search-space generation step uses the BIM model and reality model to produce the coverage space, occlusion sources, and candidate camera installation positions. Finally, a 4DGA optimization module applies a genetic algorithm to optimize the number, positions, and orientations of the cameras while accounting for construction progress.

The framework was tested and evaluated on an ongoing construction site. Experimental results show that, compared with camera positions set by professionals, a series of improved solutions can be found. These solutions are ranked under two analysis modes: maximizing total coverage and maximizing marginal coverage. The two finally selected solutions both provide better coverage than the professionally placed cameras: under the first mode, the best solution is an 8-camera configuration reaching 80.33% total coverage; under the second, a 3-camera configuration reaching 12.37% marginal coverage. These results were visualized and presented to professionals, who compared them against the best results for evaluation.
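As a minimal illustration of the genetic-algorithm step described in the abstract, the sketch below evolves a binary chromosome marking which candidate camera positions are selected, trading coverage of a voxelized site against camera count. All function names and the toy coverage sets are assumptions for illustration only; the thesis's 4DGA module additionally optimizes camera orientation and accounts for construction progress.

```python
import random

def fitness(chromosome, coverage_sets, space_size, cost_weight=0.02):
    """Coverage fraction of the site minus a per-camera cost penalty."""
    covered = set()
    for gene, cov in zip(chromosome, coverage_sets):
        if gene:
            covered |= cov
    return len(covered) / space_size - cost_weight * sum(chromosome)

def evolve(coverage_sets, space_size, pop_size=30, generations=60, seed=0):
    """Select which candidate cameras to install via a simple GA."""
    rng = random.Random(seed)
    n = len(coverage_sets)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, coverage_sets, space_size),
                 reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1          # bit-flip mutation
            children.append(child)
        pop = parents + children                  # elitist replacement
    return max(pop, key=lambda c: fitness(c, coverage_sets, space_size))

# Toy search space: 8 candidate cameras over a 100-voxel site
cams = [set(range(i * 10, i * 10 + 25)) for i in range(8)]
best = evolve(cams, space_size=100)
```

In a real setting the coverage sets would come from visibility analysis against the BIM and reality models, and the cost weight would reflect actual camera prices.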

Parallel Abstract (English)


The advances in sensor technology have enabled real-time, high-quality data collection in the construction sector. Cameras are common on job sites for progress documentation, site security, and site safety, and deep learning applications can help automate the analysis of visual data for project metrics. However, data quality can be degraded by job-site occlusions, insufficient coverage, and small target-object sizes. Traditionally, this is mitigated by manual camera placement, which demands experienced practitioners. To address these issues, this study proposes a 4D BIM and point-cloud-based camera placement optimization framework that incorporates both planned progress and actual site conditions, while providing reference ranges derived from deep-learning models trained on common construction-site objects. First, the camera placement determinants are identified through interviews and surveys, and optimal detection ranges are extracted from trained RCNN networks. Then, the determinants are formulated into optimization objectives and constraints. Next, the search-space generation module extracts coverage areas, occlusion sources, and installation locations from the planned and reality models, which are then used in the genetic-algorithm-based optimization process. The optimized solutions are analyzed for coverage and cost trade-offs, presented to experts, and evaluated against the camera placement determinants to identify the optimal solutions. The proposed method is evaluated on an ongoing construction site, and experimental results show that a series of improved solutions can be found relative to the benchmark setup by professional practitioners. Two scenarios were considered for result selection: maximization of total coverage and maximization of marginal coverage.
The best solutions were an 8-camera solution achieving a total coverage of 80.33% and a 3-camera solution achieving a marginal coverage of 12.37% with a total coverage of 52.78%; both surpass the benchmark solution's total coverage of 42.92%. The results were then visualized and presented to practitioners to rank the results and confirm the effectiveness of the framework. The study further contributes to the potential automation of operation-level visual monitoring on sites by incorporating characteristics of actual site conditions into the optimization process.
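The two selection criteria used in the results, total coverage and marginal coverage, can be illustrated with a short sketch. The function names and the toy voxelized coverage sets below are hypothetical; in the thesis, coverage is derived from the 4D BIM and reality models.

```python
def total_coverage(cameras, space_size):
    """Fraction of the discretized space seen by at least one camera."""
    covered = set()
    for cam in cameras:
        covered |= cam
    return len(covered) / space_size

def rank_by_marginal_coverage(candidates, space_size):
    """Greedily order cameras by the extra coverage each one adds."""
    remaining = list(candidates)
    covered, ranking = set(), []
    while remaining:
        best = max(remaining, key=lambda c: len(c - covered))
        ranking.append((best, len(best - covered) / space_size))
        covered |= best
        remaining.remove(best)
    return ranking

# Toy example: three cameras with overlapping views of a 100-voxel space
cams = [set(range(0, 50)), set(range(40, 80)), set(range(75, 90))]
gains = [g for _, g in rank_by_marginal_coverage(cams, 100)]
# gains -> [0.5, 0.3, 0.1]: each camera's marginal contribution
```

Ranking by marginal coverage makes the diminishing returns of extra cameras explicit, which is what motivates the small 3-camera solution alongside the high-coverage 8-camera one.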
