  • Thesis

Detection of Surrounding Vehicles with Real-Time Onboard Vision

Advisor: Wen-Chung Chang

Abstract


This thesis presents a vision-based, real-time onboard system for detecting surrounding vehicles with on-line learning capability. Using part-based detection and a tree-structured detection architecture, the system employs multiple recognition modules, each capable of on-line learning, to simultaneously detect various vehicles moving longitudinally and laterally ahead of the host vehicle. The learning component is a new type of on-line AdaBoost learner built on a cascade of strong classifiers. Most existing AdaBoost systems are trained off-line and cannot learn further when on-line updating is required. The main idea of this work is therefore a new learning architecture that allows the vehicle detection system to cope with diverse vehicle types and changing environments through on-line learning. To achieve this, the proposed learning system dynamically adjusts its parameters according to the images captured in real time and the current state of each weak classifier. As new images arrive, the system continually tunes its parameters to improve classification accuracy and detection rate for novel vehicle types and newly encountered environments, whereas a conventional off-line trainer would require far more training data to reach comparable performance and still could not learn on-line as needed. The system was validated on public roads using a forward-facing CCD camera mounted at the rear-view mirror and a personal computer; it successfully detected various vehicles moving longitudinally and laterally ahead under a variety of conditions, demonstrating the feasibility of the approach.

Parallel Abstract


This thesis presents a detection system for surrounding vehicles employing an on-line boosting algorithm with real-time onboard vision. The system combines part-based detection and a decision-tree architecture to detect front, rear, and side vehicles. The boosting algorithm is a new type of on-line AdaBoost approach consisting of a cascade of strong classifiers. Most existing cascades of classifiers must be trained off-line and cannot be effectively updated when on-line tuning is required. The idea is therefore to develop a cascade of strong classifiers that can be trained on-line for vehicle detection in response to changing traffic environments. To keep the on-line algorithm tractable, the proposed system efficiently tunes its parameters based on incoming images and the up-to-date performance of each weak classifier. The proposed on-line boosting method improves system adaptability and accuracy over time, allowing it to handle novel types of vehicles and unfamiliar environments, whereas existing off-line methods require far more extensive training to reach comparable results and still cannot be updated on-line. The approach has been validated in real traffic environments through experiments with a CCD camera mounted onboard a vehicle driven on highways.
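The on-line tuning described above, updating each weak classifier's running error estimate and the current sample's importance weight as new images arrive, can be sketched in the spirit of Oza-style on-line AdaBoost. This is a minimal illustration under stated assumptions, not the thesis's actual detector: `Stump`, the one-dimensional toy data, and all numeric choices are hypothetical.

```python
import math
import random

class Stump:
    """A minimal weak classifier: thresholds a single feature."""
    def __init__(self, feature, threshold):
        self.feature = feature
        self.threshold = threshold

    def predict(self, x):
        return 1 if x[self.feature] > self.threshold else -1

class OnlineAdaBoost:
    """Each new labeled sample updates every weak learner's
    correct/wrong counts; the sample's importance weight grows when a
    learner misclassifies it, so later learners focus on hard cases."""
    def __init__(self, stumps):
        self.stumps = stumps
        self.correct = [1e-3] * len(stumps)  # weighted correct count
        self.wrong = [1e-3] * len(stumps)    # weighted error count

    def update(self, x, y):
        lam = 1.0  # importance weight of this sample
        for i, stump in enumerate(self.stumps):
            if stump.predict(x) == y:
                self.correct[i] += lam
                err = self.wrong[i] / (self.correct[i] + self.wrong[i])
                lam *= 1.0 / (2.0 * (1.0 - err))  # down-weight easy sample
            else:
                self.wrong[i] += lam
                err = self.wrong[i] / (self.correct[i] + self.wrong[i])
                lam *= 1.0 / (2.0 * err)          # up-weight hard sample

    def predict(self, x):
        score = 0.0
        for i, stump in enumerate(self.stumps):
            err = self.wrong[i] / (self.correct[i] + self.wrong[i])
            err = min(max(err, 1e-6), 1.0 - 1e-6)
            alpha = 0.5 * math.log((1.0 - err) / err)  # classifier weight
            score += alpha * stump.predict(x)
        return 1 if score >= 0 else -1

# Toy stream: label is the sign of the single feature.
random.seed(0)
boost = OnlineAdaBoost([Stump(0, 0.0), Stump(0, 0.5), Stump(0, -0.5)])
for _ in range(200):
    v = random.uniform(-1.0, 1.0)
    boost.update([v], 1 if v > 0 else -1)
```

Each call to `update` plays the role of one newly captured labeled image patch; because the ensemble weights are recomputed from running error estimates rather than fixed at training time, accuracy can keep improving as more frames arrive, which is the property the abstract claims over off-line cascades.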

