
車載型視覺式駕駛者疲倦昏睡偵測系統

An In-Vehicle Vision-Based Driver's Drowsiness Detection System

Advisor: 陳世旺

Abstract


Reports indicate that many traffic accidents can be attributed to driver fatigue or drowsiness: fatigue impairs a driver's visibility, alertness, and decision-making ability, degrading driving performance. In this study, we develop a vision-based drowsiness detection and warning system that estimates a driver's potential drowsiness and issues appropriate warnings. When the driver is judged to be only mildly drowsy, the system can take less intrusive actions, such as turning on the air-conditioning system, releasing a fragrance, or switching on the radio for entertainment; when the driver is highly drowsy, the system can activate navigation aids or warn others that the driver is in a highly drowsy state.

The drowsiness level is estimated from facial images captured by a camera mounted at the front of the vehicle. The system consists of five major steps: preprocessing, facial feature extraction, face tracking, state-parameter estimation, and drowsiness inference.

In preprocessing, we first reduce the resolution of the input image to speed up the system, then apply lighting compensation to reduce the influence of ambient illumination. Finally, we compute chrominance values for each pixel in several color spaces, to be used in the subsequent facial feature extraction step.

Facial feature extraction comprises four sub-steps: skin detection, face localization, eye and mouth detection, and feature confirmation. Skin regions are detected by applying a skin-color model to the chrominance values obtained in preprocessing. For face localization, we search for the largest skin region; however, the resulting face region is usually incomplete, so searching for facial features only within it is unreliable. We therefore search for facial features over the entire image and use the face region to confirm the detected features.

Once the facial features are confirmed, they are used for face tracking until tracking fails, at which point facial feature extraction is performed again; this is because feature extraction is relatively time-consuming, whereas tracking is faster and more reliable. During tracking, facial state parameters are computed, including the percentage of eye closure over time, blink frequency, eye-closure duration, gaze state, mouth-opening duration, and head rotation.

Finally, a fuzzy integral fuses these different parameters to infer the driver's drowsiness level. We tested the system on many drivers under various illumination conditions, and the results show that it operates properly in daytime. In the future, we will extend the system to nighttime use by incorporating an infrared camera.
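The eye-state parameters described above (percentage of eye closure over time, blink frequency, and closure duration) can be sketched as follows. The 30 fps frame rate, the binary per-frame eye-state encoding, and the function name are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of the eye-state parameters described above: PERCLOS
# (percentage of eye closure over time), blink frequency, and the
# longest closure duration. The 30 fps rate and the binary eye-state
# encoding (1 = eyes closed, 0 = eyes open) are illustrative
# assumptions.

def eye_parameters(eye_closed, fps=30):
    """Compute drowsiness-related statistics from per-frame eye states."""
    n = len(eye_closed)
    perclos = sum(eye_closed) / n  # fraction of frames with eyes closed

    blinks = 0
    longest_run = run = 0
    prev = 0
    for state in eye_closed:
        if state:
            run += 1
            longest_run = max(longest_run, run)
        else:
            if prev:  # closed -> open transition completes one blink
                blinks += 1
            run = 0
        prev = state
    if prev:  # sequence ended with the eyes still closed
        blinks += 1

    duration_s = n / fps
    return {
        "perclos": perclos,
        "blink_freq_hz": blinks / duration_s,
        "longest_closure_s": longest_run / fps,
    }

# 10 frames (1/3 s at 30 fps): two blinks, longest closure 3 frames.
states = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
params = eye_parameters(states, fps=30)
```

In a real system the per-frame eye states would come from the eye-detection and tracking steps, and the statistics would be computed over a sliding window rather than the whole sequence.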

Abstract (English)


Many traffic accidents have been attributed to driver drowsiness and fatigue. Drowsiness degrades driving performance through declines in visibility, situational awareness, and decision-making capability. In this study, a vision-based drowsiness detection and warning system is presented, which attempts to bring a driver's potential drowsiness to his/her own attention. The information provided by the system can also be utilized by adaptive systems to manage noncritical operations, such as starting a ventilator, spreading fragrance, turning on a radio, or providing entertainment options. In high-drowsiness situations, the system may initiate navigation aids and alert others to the drowsiness of the driver.

The system estimates the fatigue level of a driver based on his/her facial images, acquired by a video camera mounted in the front of the vehicle. There are five major steps in the system process: preprocessing, facial feature extraction, face tracking, parameter estimation, and reasoning.

In the preprocessing step, the input image is sub-sampled to reduce the image size and, in turn, the processing time. A lighting compensation process is then applied to the reduced image to remove the influence of ambient illumination variations. Afterwards, a number of chrominance values are calculated for each pixel, to be used in the next step for detecting facial features.

Four sub-steps constitute the feature extraction step: skin detection, face localization, eye and mouth detection, and feature confirmation. To begin, skin areas are located in the image based on the chrominance values calculated in the previous step and a predefined skin model. We next search for the face region within the largest skin area. However, the detected face is typically imperfect, and facial feature detection within an imperfect face region is unreliable. We therefore look for facial features throughout the entire image; the face region is later used to confirm the detected features.

Once the facial features are located, they are tracked over the video sequence until they are lost in a video frame, at which point the facial feature detection process is invoked again. This design is efficient because facial feature detection is time-consuming, whereas facial feature tracking is fast and reliable. During tracking, parameters of facial expression are estimated, including the percentage of eye closure over time, eye blinking frequency, durations of eye closure, gaze, and mouth opening, as well as head orientation.

The estimated parameters are then utilized in the reasoning step to determine the driver's drowsiness level. A fuzzy integral technique is employed, which integrates the various parameter values to arrive at a decision about the driver's drowsiness level. A number of video sequences of different drivers and illumination conditions have been tested. The results show that our system works reasonably well in daytime. In future work, we plan to extend the system to nighttime operation, for which infrared sensors should be included.
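As one concrete reading of the reasoning step, a Sugeno fuzzy integral combines per-parameter drowsiness scores with a fuzzy measure of the joint importance of parameter subsets. The scores and measure values below are made-up illustrative numbers; the thesis does not specify this exact formulation.

```python
# Sketch of the reasoning step using a Sugeno fuzzy integral, which
# fuses per-parameter drowsiness scores through a fuzzy measure of the
# importance of each parameter subset. The scores and measure values
# below are made-up illustrative numbers.

def sugeno_integral(scores, measure):
    """scores: {source: evidence value in [0, 1]}.
    measure: maps frozensets of sources to importances in [0, 1],
    monotone, with the whole set measuring 1; only the nested
    subsets visited below need to be listed."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    subset = frozenset()
    best = 0.0
    for source, h in ranked:  # evidence values in descending order
        subset = subset | {source}
        best = max(best, min(h, measure[subset]))
    return best

scores = {"perclos": 0.8, "blink": 0.5, "gaze": 0.3}
measure = {
    frozenset({"perclos"}): 0.6,
    frozenset({"perclos", "blink"}): 0.8,
    frozenset({"perclos", "blink", "gaze"}): 1.0,  # whole set measures 1
}
drowsiness = sugeno_integral(scores, measure)
```

The integral picks the best compromise between how strong the evidence is and how much weight the contributing sources jointly carry, so a single noisy parameter cannot dominate the decision.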


Cited By


林漢威 (2009). An Infant Accident Monitoring System Based on Facial Expression Recognition [Master's thesis, National Taiwan Normal University]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0021-1610201315161643
黃亭凱 (2009). Intelligent Parking Lot System: Vehicle Tracking Subsystem [Master's thesis, National Taiwan Normal University]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0021-1610201315162300
