While driving, a driver's lack of attention is often the cause of traffic accidents. The goal of this thesis is to build a driver-assistance system that can be deployed on intelligent vehicles to prevent accidents caused by improper driving. Unlike the conventional setting with fixed cameras, this thesis adopts a first-person vision framework: the user wears a portable camera that directly captures the forward view, representing what the user's eyes observe. The thesis comprises two main techniques: vehicle interior/exterior view detection and viewing-angle estimation. For the first technique, we adopt the "bag of words" image-classification approach, collecting a dataset in advance and extracting features with FAST+BRIEF. We then use this method to encode each input image into a feature vector and feed it to an SVM classifier to detect whether the image was taken inside or outside the vehicle, thereby inferring the driver's current level of attention. For the second technique, we mount an additional dashboard camera that captures the view in front of the vehicle and serves as the reference coordinate frame for viewing-angle estimation. By matching image features between the two cameras, we estimate the coordinate transformation between them and thus the driver's current viewing direction. Experimental results show that the driver's viewing direction can be estimated in real time, and that interior-view detection from the driver's viewpoint achieves good accuracy.
This thesis proposes an intelligent vehicle system that identifies a driver's improper driving behavior using so-called "first-person vision" (FPV) technology. Unlike conventional computer vision, FPV relies on a camera worn by the subject (e.g., mounted on goggles) that represents the subject's own view. Two techniques are proposed in this thesis: vehicle exterior/interior view detection and driver viewing-angle estimation. For the first, we adopt the "bag of words" image-classification approach, applying the FAST detector and BRIEF descriptor to a dataset collected in advance. We then build a first-person-vision "visual vocabulary" that encodes an input image into a feature vector, and finally apply an SVM classifier to detect whether the input image was taken inside the vehicle, thereby identifying the driver's current attention. For the second, we install an additional vehicle-mounted camera that records the view in front of the vehicle and serves as the world-coordinate reference for viewing-angle estimation. We then estimate the transformation between the world frame and the first-person-vision camera frame, from which the driver's viewing direction can be estimated.
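The bag-of-words step described above can be sketched as follows. This is a minimal illustration, not the thesis's implementation: the real FAST+BRIEF descriptors extracted from images are replaced here by synthetic 32-dimensional descriptors so the pipeline runs without OpenCV, and all names, sizes, and parameters are illustrative assumptions.

```python
# Bag-of-words encoding + SVM classification sketch (synthetic descriptors
# stand in for real FAST+BRIEF output; all parameters are illustrative).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
VOCAB_SIZE = 16  # number of "visual words" in the vocabulary dictionary

def fake_descriptors(n, centre):
    # Stand-in for FAST keypoint detection + BRIEF description:
    # n descriptors drawn near a class-specific centre in [0, 255]^32.
    return np.clip(centre + rng.normal(0, 20, size=(n, 32)), 0, 255)

# 1) Collect descriptors from a training set (interior = 1, exterior = 0).
interior_centre = rng.uniform(0, 255, 32)
exterior_centre = rng.uniform(0, 255, 32)
train_imgs = [(fake_descriptors(60, interior_centre), 1) for _ in range(20)] + \
             [(fake_descriptors(60, exterior_centre), 0) for _ in range(20)]

# 2) Build the visual vocabulary by clustering all training descriptors.
all_desc = np.vstack([d for d, _ in train_imgs])
vocab = KMeans(n_clusters=VOCAB_SIZE, n_init=4, random_state=0).fit(all_desc)

def encode(desc):
    # Histogram of nearest visual words = the image's feature vector.
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=VOCAB_SIZE).astype(float)
    return hist / hist.sum()

X = np.array([encode(d) for d, _ in train_imgs])
y = np.array([lbl for _, lbl in train_imgs])

# 3) Train the SVM that decides interior (1) vs. exterior (0).
clf = LinearSVC(C=1.0).fit(X, y)

# Classify a new synthetic "image" drawn from the interior class.
pred = clf.predict(encode(fake_descriptors(60, interior_centre))[None, :])[0]
print("interior" if pred == 1 else "exterior")
```

In the thesis's actual setting, `fake_descriptors` would be replaced by running FAST keypoint detection and BRIEF description on real interior and exterior frames.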
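The viewing-angle estimation step can likewise be sketched. The following is a simplified geometric illustration under an assumption the thesis does not necessarily make: that the dashboard camera and the FPV camera are related by a pure rotation, so that matched feature bearings can be aligned with a Kabsch (SVD-based) fit. The synthetic matched rays and the 25-degree ground-truth yaw are fabricated for the demo; the thesis's full two-camera feature-matching pipeline is not reproduced here.

```python
# Estimate the wearer's viewing direction as the rotation between the
# dashboard-camera frame and the FPV-camera frame, assuming matched
# feature bearings and a pure rotation between the two cameras.
import numpy as np

rng = np.random.default_rng(1)

def rot_y(deg):
    # Rotation about the camera's vertical (y) axis, i.e. a yaw turn.
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

true_yaw = 25.0          # ground-truth viewing angle we hope to recover
R_true = rot_y(true_yaw)

# Unit bearing vectors of matched scene points in the dashboard frame,
# and the same rays as seen from the rotated FPV camera.
v_ref = rng.normal(size=(50, 3))
v_ref /= np.linalg.norm(v_ref, axis=1, keepdims=True)
v_fpv = v_ref @ R_true.T

# Kabsch alignment: find R minimising ||v_fpv - v_ref @ R.T|| via SVD
# of the cross-covariance, with a sign fix to keep det(R) = +1.
H = v_ref.T @ v_fpv
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))
R_est = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# Read the yaw (horizontal viewing angle) back out of the rotation.
yaw_est = np.degrees(np.arctan2(R_est[0, 2], R_est[0, 0]))
print(f"estimated viewing yaw: {yaw_est:.1f} degrees")
```

With real data the bearing vectors would come from calibrated feature matches between the two cameras, and translation between the cameras would also have to be handled (e.g., via essential-matrix decomposition rather than pure-rotation alignment).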