
An Integrated Vehicle Localization System Using Vision-Based Lane Detection and Road Marker Recognition

Advisor: Feng-Li Lian (連豊力)

Abstract


Precise vehicle localization is an indispensable prerequisite for developing autonomous driving systems, which must achieve lane-level accuracy to ensure safe navigation. Among current commercial solutions, the Global Positioning System (GPS) is the most widely used. However, GPS is susceptible to occlusion, atmospheric disturbances, and multipath effects, which produce errors that fail to meet lane-level accuracy. To compensate for inaccurate GPS fixes, sensor-fusion-based localization methods have been widely studied in recent years; these fuse measurements from multiple sensors (e.g., camera, LiDAR, and radar) to perceive the features around the ego-vehicle and match them against a known map to further correct the GPS error.

This thesis proposes a vision-based perception system that uses a monocular camera to detect road markers and driving lanes, providing reliable visual measurements. Road markers and driving lanes are chosen as the high-level features to detect because, compared with other objects, they have a visually distinctive appearance and can easily be annotated in a map. The road marker recognition algorithm uses a template matching-based method to identify the marker type and estimate its position and orientation. However, matching every possible position and orientation is extremely time-consuming; by incorporating the vehicle's dynamic information, the proposed algorithm significantly reduces the search time of road marker recognition so that the system can run in real time. For lane detection, clearly visible lanes are accurately initialized in the initial stage using the proposed gradient orientation consistency together with spatial constraints based on Inverse Perspective Mapping (IPM). Building on the initialization, a temporal integration method combines color information over consecutive frames with the lane orientation detected at the previous time step, so the proposed method can track lanes effectively even in challenging scenes with varying illumination and shadow-covered pavement.

Finally, the sensor fusion algorithm employs a particle filter to efficiently fuse the visual measurements, an inertial measurement unit (IMU), and GPS to correct the GPS error: the lane and road marker measurements provide the lateral offset and the relative position with respect to the ego-vehicle, respectively, while the IMU estimates the ego-vehicle's dynamic behavior. The proposed vehicle localization system is evaluated in real driving scenarios under different illumination conditions, and the experimental results show that, with accurate road marker recognition and lane detection, the localization meets the lane-level accuracy requirement.
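The IPM spatial constraint mentioned above maps image pixels onto the flat road plane. As a minimal illustrative sketch (not the thesis implementation), assuming a pinhole camera with known intrinsics (fx, fy, cx, cy), a known mounting height h, and a downward pitch angle, a pixel can be back-projected to ground-plane coordinates:

```python
import numpy as np

def ipm_point(u, v, fx, fy, cx, cy, h, pitch):
    """Back-project pixel (u, v) onto the flat ground plane.

    Assumes a pinhole camera at height h (meters) above the road,
    pitched down by `pitch` radians. Returns (lateral X, forward Y)
    in meters, or None if the pixel lies at or above the horizon.
    """
    # normalized camera ray (x right, y down, z forward)
    xc = (u - cx) / fx
    yc = (v - cy) / fy
    # vertical and forward components of the ray after applying the pitch
    down = yc * np.cos(pitch) + np.sin(pitch)   # rate of descent toward the road
    fwd = np.cos(pitch) - yc * np.sin(pitch)    # forward component
    if down <= 0:
        return None  # ray never intersects the ground
    t = h / down                                # scale to reach the road plane
    return (t * xc, t * fwd)
```

For example, with zero pitch and a camera 1.5 m above the road, a pixel half a focal length below the image center maps to a point 3 m directly ahead on the road plane.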

Abstract (English)


Precise localization is a prerequisite for a complete autonomous driving system. For safe navigation, vehicle localization must achieve lane-level accuracy, with an error within a meter. Among current commercially available solutions, the Global Positioning System (GPS) is the most widely used. However, the GPS position suffers from occlusions, atmospheric disturbances, and multipath effects, so the resulting error fails to meet the accuracy required of the localization system. To compensate for unreliable GPS measurements, sensor-fusion-based localization approaches have been extensively researched in recent years. These approaches fuse measurements collected by different sensors, such as camera, LiDAR, and radar, to perceive the features around the ego-vehicle and match them with a known map to further correct the GPS errors. In this thesis, the proposed vision-based perception system uses a monocular camera to detect high-level features, in the form of road markers and driving lanes, to provide reliable visual measurements. Road markers and driving lanes are chosen because of their visually distinctive appearance compared with other objects, and because they are easily annotated in the digital map. For road marker recognition, a template matching-based method is applied to recognize the marker type and estimate the corresponding position and orientation. However, matching over all possible positions and orientations is extremely time-consuming, so the proposed method incorporates the vehicle motion to drastically reduce the road marker search time and meet the real-time requirement. For driving lane detection, the proposed gradient orientation consistency, combined with Inverse Perspective Mapping (IPM) spatial constraints, is used to initialize the clearly visible lanes in the initial detection stage.
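The motion-guided search can be sketched as follows. This is a hypothetical illustration, not the thesis code: the vehicle motion predicts where a marker seen in the previous frame should reappear, and normalized cross-correlation is evaluated only in a small window around that prediction instead of over the whole image.

```python
import numpy as np

def match_in_window(image, template, pred_uv, radius):
    """Normalized cross-correlation restricted to a motion-predicted window.

    pred_uv is the (u, v) top-left corner predicted from vehicle motion;
    only offsets within `radius` pixels of it are scored, which shrinks
    the search space from the full image to a small neighborhood.
    """
    th, tw = template.shape
    u0, v0 = pred_uv
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_uv = -1.0, None
    for v in range(max(0, v0 - radius), min(image.shape[0] - th, v0 + radius) + 1):
        for u in range(max(0, u0 - radius), min(image.shape[1] - tw, u0 + radius) + 1):
            patch = image[v:v + th, u:u + tw].astype(float)
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_uv = score, (u, v)
    return best_uv, best
```

With a search radius of a few pixels, the number of scored positions drops from O(W x H) to O(radius^2), which is what makes per-frame matching tractable in real time.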
With the knowledge of the initialized lanes, a temporal integration method combines temporal color information with the previously detected lane orientation to track candidate driving lanes in challenging scenarios, such as varying illumination and pavement covered by cast shadows. Finally, a Particle Filter is employed to complementarily integrate the visual measurements, an Inertial Measurement Unit (IMU), and GPS to correct the GPS errors, where the lane and road marker measurements provide the lateral offset and the relative position with respect to the ego-vehicle, respectively, and the IMU is used to estimate the dynamic behavior of the ego-vehicle. The proposed vehicle localization system is evaluated in real driving scenarios on the ITRI campus under varying illumination conditions, and the experimental results show that lane-level localization accuracy is achieved by robustly detecting road markers and driving lanes.
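The fusion step can be illustrated with a minimal planar particle filter sketch. All parameters here are hypothetical, and the lane measurement is simplified to an absolute lateral coordinate on a road aligned with the x-axis (the thesis matches against a map instead): the IMU-derived motion drives the prediction, the GPS fix weights the particles loosely, and the precise lane lateral offset weights them tightly.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, motion, gps, lane_y,
            sigma_motion=0.1, sigma_gps=3.0, sigma_lane=0.2):
    """One predict-weight-resample cycle of a toy 2-D particle filter.

    particles: (N, 2) array of [x, y] pose hypotheses (meters).
    motion:    IMU/odometry displacement since the last step.
    gps:       noisy absolute GPS fix.
    lane_y:    lateral coordinate implied by the lane detection,
               assuming the road runs along the x-axis (toy setup).
    """
    # predict: propagate each hypothesis with the measured motion plus noise
    particles = particles + motion + rng.normal(0.0, sigma_motion, particles.shape)
    # weight: loose GPS likelihood times tight lane-offset likelihood
    w = np.exp(-0.5 * np.sum((particles - gps) ** 2, axis=1) / sigma_gps ** 2)
    w *= np.exp(-0.5 * (particles[:, 1] - lane_y) ** 2 / sigma_lane ** 2)
    w /= w.sum()
    # resample: duplicate likely particles, drop unlikely ones
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

Starting from a particle cloud centered on a GPS fix that is 2 m off laterally, a single update pulls the lateral estimate toward the lane measurement, which is how the lane observation corrects the lateral component of the GPS error.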

