
Real obstacle detection for autonomous vehicle

Advisor: Din-Chang Tseng (曾定章)

Abstract


An autonomous vehicle that follows a person or a specific object can serve in many environments, for example to carry goods or to transport people. Such a vehicle advances along the trajectory of its guide, but moving obstacles may appear along the way, so a method is needed to detect whether obstacles on the path would impede the vehicle and to avoid collisions; these collisions include striking an obstacle on the path or coming too close to the object ahead. The main goal of this study is therefore to mount a monocular camera on an autonomous vehicle, detect the obstacles ahead, distinguish the real (three-dimensional) ones, and adjust the vehicle's speed in real time to avoid collisions.

Our detection system consists of the following steps. First, edge points are extracted from the image as feature points for obstacle detection. Second, the image is divided into cells, the HOG features of each cell are computed, and only the orientations with strong responses in a cell are kept as candidate obstacle-edge features, which reduces noise. Third, each frame is decomposed into three resolutions and pyramidal optical-flow estimation is applied to the feature points, so that the motion vectors of the correct feature points are obtained faster and more precisely. Fourth, the optical-flow vectors lying on the same plane are adjusted to be position-dependent, which makes the subsequent clustering more accurate. Fifth, the feature points are clustered into regions by flow length and color; regions that are likely planar objects are removed, overlapping regions are then deleted, and the remaining regions are taken as the real three-dimensional obstacles. Finally, the distance between each obstacle and the vehicle determines whether the vehicle's speed must be adjusted. (A sketch of the feature-extraction and flow-estimation steps in code follows this abstract.)

The detection system is built on an autonomous vehicle based on an electric wheelchair, with an image-capture device mounted on it. With 320×240 input images, the system runs at 20 to 30 frames per second on a personal computer with an Intel Core™ i3-2370M 2.4 GHz CPU and 8 GB RAM, and its detection accuracy reaches about 90%.
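As a concrete illustration of the first three steps, the following is a minimal Python/OpenCV sketch, not the thesis's implementation: the 8-pixel cell size, the nine orientation bins, the 50% dominance threshold, the Sobel and Lucas-Kanade parameters, and the choice of one point per cell are all assumptions made for brevity.

    # Minimal sketch of steps 1-3 (assumed parameters; see the note above).
    import cv2
    import numpy as np

    CELL = 8      # cell size in pixels (assumed; not specified in the abstract)
    LEVELS = 3    # three pyramid resolutions, as stated in the abstract

    def edge_features(gray):
        """Keep edge points only where the cell has a dominant gradient
        orientation, approximating the per-cell HOG filtering step."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)              # unsigned orientation
        bins = np.minimum((ang / np.pi * 9).astype(int), 8)  # 9 HOG-style bins

        pts = []
        h, w = gray.shape
        for y0 in range(0, h - CELL + 1, CELL):
            for x0 in range(0, w - CELL + 1, CELL):
                m = mag[y0:y0 + CELL, x0:x0 + CELL]
                b = bins[y0:y0 + CELL, x0:x0 + CELL]
                hist = np.bincount(b.ravel(), weights=m.ravel(), minlength=9)
                # Skip cells with no clearly dominant direction (treated as noise).
                if hist.max() < 0.5 * hist.sum() + 1e-6:
                    continue
                yy, xx = np.unravel_index(np.argmax(m), m.shape)
                pts.append((x0 + xx, y0 + yy))               # strongest edge point
        return np.float32(pts).reshape(-1, 1, 2)

    def track(prev_gray, curr_gray, pts):
        """Pyramidal Lucas-Kanade flow over the filtered edge points."""
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, pts, None,
            winSize=(15, 15), maxLevel=LEVELS - 1,  # maxLevel=2 -> 3 levels
            criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
        ok = status.ravel() == 1
        return pts[ok], nxt[ok]

With consecutive 320×240 grayscale frames as prev_gray and curr_gray, the returned point pairs give the motion vectors that the later normalization and clustering stages operate on.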

Keywords

autonomous vehicle, obstacle, optical flow

Parallel Abstract


Several kinds of automatic mobile platforms can move by following a person or a specific subject in various situations, such as carrying goods or people. These platforms move along the paths of their guides; however, moving obstacles may appear on those paths, so methods are needed to detect whether real obstacles are present and to prevent collisions. Our research focuses on detecting the obstacles in front of an autonomous vehicle with a camera mounted on the platform, and on controlling the velocity of the vehicle in real time.

The detection method consists of the following steps. First, feature points are selected from the edge points of the image. Second, the image is split into cells, the HOG features of each cell are calculated, and only the orientations with obvious responses in a cell are retained as possible obstacle-edge features, to reduce noise. Third, each frame is separated into three resolutions and the motion vectors are calculated more accurately and more efficiently with the pyramidal Lucas-Kanade method. Fourth, to make the subsequent clustering more accurate, the optical-flow vectors on the same plane are adjusted to be related to their positions. Fifth, the feature points are clustered into regions according to the lengths of the flow vectors and the color information (see the clustering sketch below); regions that may be planar subjects and regions that overlap are removed, so the remaining regions can be considered the real obstacles in the real world. Finally, the velocity of the autonomous vehicle is controlled according to the distance between the obstacles and the vehicle.

The real obstacle detection system is built on an autonomous vehicle based on an electric wheelchair. With 320×240 input images from the capture device on the vehicle, the detection system runs on a personal computer with an Intel Core™ i3-2370M 2.4 GHz CPU and 8 GB RAM at 20 to 30 frames per second, with a detection rate of about 90%.
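The clustering of the fifth step can be sketched with k-means over a feature vector that combines flow length with the color at each tracked point. This is a hypothetical illustration: the abstract specifies clustering by flow length and color but not the algorithm or its weights, so cv2.kmeans, k = 4, and flow_weight = 10 are assumptions.

    # Hypothetical sketch of the flow-length + color clustering step.
    import cv2
    import numpy as np

    def cluster_points(p0, p1, img_bgr, k=4, flow_weight=10.0):
        """Group tracked points by optical-flow length and color with k-means.
        k and flow_weight are illustrative assumptions, not thesis values."""
        p0 = p0.reshape(-1, 2)
        p1 = p1.reshape(-1, 2)
        if len(p0) < k:                    # too few points to form k clusters
            return np.zeros(len(p0), np.int32)
        flow_len = np.linalg.norm(p1 - p0, axis=1, keepdims=True)
        colors = np.float32([img_bgr[int(y), int(x)] for x, y in p0])
        feats = np.float32(np.hstack([flow_len * flow_weight, colors]))
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, _ = cv2.kmeans(feats, k, None, criteria, 5,
                                  cv2.KMEANS_PP_CENTERS)
        return labels.ravel()

Clusters whose flow pattern is consistent with a planar surface, and clusters that overlap others, would then be discarded; the remaining clusters are the regions treated as real obstacles, and the distance to the nearest one decides whether the vehicle slows down.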

Parallel Keywords

autonomous vehicle, obstacle, optical flow
