  • Degree Thesis

Drivable Space Detection with Multiple RGB-D Cameras in a Vehicle Around View System

Advisor: 黃正民

Abstract


In general, a vehicle around view system monitors the surrounding environment with cameras, but from the driver's perspective such monitoring remains passive. This thesis aims to further improve existing vehicle safety systems by replacing the visual sensors of a conventional around view system with color-depth (RGB-D) cameras. Besides color information, an RGB-D camera also captures a depth image of the observed scene. We integrate the depth and color images to construct a three-dimensional point cloud, with the goal of providing a system that actively monitors the surroundings and automatically alerts the driver when an obstacle is approached.

To detect the drivable space on the ground, we apply the random sample consensus (RANSAC) algorithm to segment the point cloud and locate the ground plane. To reduce the amount of data in the computation and improve the estimation accuracy, the point cloud is first filtered by geometric constraints to produce a rough ground region, from which several ground-plane candidates are segmented; the plane equation of the true ground is then identified from the surface-normal information. The ground parameters are updated in real time and fed back as a reference to the subsequent point-filtering stage. Finally, a morphological opening operation filters out the drivable space on the ground plane, and both the drivable space and the obstacle positions are labeled on the driving-assistance image.

In addition, several RGB-D cameras are mounted around the vehicle. To build an around-view 3D point cloud, the iterative closest point (ICP) algorithm is used to match the point cloud structures from the different cameras, merging the data they capture into a complete 3D point cloud of the vehicle's surroundings. By projecting this 3D point cloud, the driver can view the environment from multiple viewpoints. When an obstacle approaches the vehicle, the system also actively estimates its distance and switches to the camera view that best observes it.
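The RANSAC ground-plane step described in the abstract can be sketched with a plain-NumPy loop: repeatedly sample three points, fit a candidate plane, and keep the candidate with the most inliers. This is an illustrative simplification only — it omits the thesis's geometric pre-filtering and normal-based candidate ranking, and the function names, parameters, and synthetic test scene below are hypothetical choices, not the author's implementation.

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Estimate a dominant plane (a, b, c, d), with a*x + b*y + c*z + d = 0,
    by sampling 3-point candidates and keeping the one with most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:               # skip degenerate (collinear) samples
            continue
        normal = normal / norm
        d = -normal.dot(p1)
        dist = np.abs(points @ normal + d)   # point-to-plane distances
        inliers = dist < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, np.append(normal, d)
    return best_plane, best_inliers

# Hypothetical scene: a noisy ground plane near z = 0 plus a box-shaped
# obstacle floating above it, standing in for an RGB-D point cloud.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 900),
                          rng.uniform(-5, 5, 900),
                          rng.normal(0.0, 0.01, 900)])
obstacle = rng.uniform([1, 1, 0.3], [2, 2, 1.5], (100, 3))
cloud = np.vstack([ground, obstacle])

plane, inliers = ransac_ground_plane(cloud)
# For this scene the recovered normal should be close to the z axis, and the
# inlier mask approximates the ground region the obstacle points fall outside.
```

In the full system, points flagged as inliers would feed the drivable-space segmentation, while the recovered plane parameters would be carried forward as the reference for filtering the next frame's point cloud.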

Parallel Abstract (English)


This thesis aims to improve the around view monitoring system on a vehicle. Around view monitoring systems generally provide surrounding-environment information to drivers only passively. In this thesis, we propose to utilize RGB-D cameras, which capture both RGB and depth images, to replace the original RGB camera sensors. The depth and color images of an RGB-D camera are integrated to construct a 3D point cloud structure. By analyzing the 3D point cloud data, we aim to build a new around view monitoring system that observes the vehicle surroundings and actively raises an alarm when obstacles approach. To reduce the computation time and increase the accuracy of ground-plane estimation, the point cloud data is first filtered by considering the geometry of feasible ground planes around the vehicle. The random sample consensus (RANSAC) algorithm is then applied to estimate the parameters of several ground-plane candidates. By evaluating the relative poses between the camera and the ground-plane candidates, we obtain the ground-plane estimate. Finally, the morphological opening operation is utilized to segment the drivable space, which is also labeled in the driving-assistance image. In addition, multiple RGB-D cameras are deployed around the vehicle. The iterative closest point (ICP) algorithm is employed to align the point cloud data from the individual cameras and generate the 3D around-view point cloud. The driver can manually select a desired viewing direction to observe the vehicle surroundings via perspective projection of the 3D around-view point cloud. The proposed system can also dynamically and actively select a proper view of obstacles when they approach the vehicle.
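The ICP alignment used to merge the cameras' point clouds can be illustrated with a minimal point-to-point variant: alternate brute-force nearest-neighbour matching with the closed-form SVD (Kabsch) rigid alignment. This is a didactic sketch under simplifying assumptions (small demo clouds, no outlier rejection or voxel downsampling); all names and the synthetic misalignment are hypothetical, not the thesis's implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (closed-form SVD solution used inside each ICP iteration)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, n_iters=30):
    """Point-to-point ICP: alternate nearest-neighbour matching with the
    closed-form rigid alignment, iteratively moving src toward dst."""
    cur = src.copy()
    for _ in range(n_iters):
        # Brute-force nearest neighbours (adequate for small demo clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
    return cur

# Two "camera views" of the same points, offset by a small rigid motion.
rng = np.random.default_rng(0)
dst = rng.uniform(-1.0, 1.0, (200, 3))
theta = 0.05
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.03, 0.02, 0.01])
aligned = icp(src, dst)
```

In the vehicle system, the recovered rigid transforms would place each camera's cloud in a common frame; the merged cloud can then be perspective-projected to render the user-selected viewing direction.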
