
以隨機森林架構結合臉部動作單元辨識之臉部表情分類技術

Using Random Forest Combined with Action Unit Recognition for Facial Expression Classification

Advisors: 黃仲陵, 鐘太郎

Abstract


Facial expression analysis and recognition has long been a challenging topic in image processing research. Expressions play an important role in interpersonal communication: they are used mainly to convey emotional information and are, apart from spoken language, one of the most important channels of communication. The difficulty of expression recognition lies in the fact that each person's facial features and personality differ, so the way an expression is displayed also varies; these differences produce subtle facial changes that make the analysis more complex. An expression is essentially a continuous facial change, so earlier work divides an expression into four phases for analysis: (1) Neutral, (2) Onset, (3) Apex, and (4) Offset. The states follow the order Neutral -> Onset -> Apex -> Offset -> Neutral. In this study we work on static images, recognizing the face in a single image and analyzing only the Apex state among the four phases. We use Gabor responses as training features: in the first stage we recognize several subtle facial components, called Action Units (AUs), and then use these AUs as the features for expression recognition, since different expressions are clearly conveyed by different combinations of AUs. Through Random Forest training, we finally classify six typical expressions: happiness, anger, sadness, surprise, disgust, and fear.
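As a rough illustration of the Gabor feature step described above, the sketch below builds a Gabor filter bank with OpenCV and concatenates the downsampled magnitude responses of a face crop. It is a minimal sketch under assumed parameters (five wavelengths, eight orientations, a 31x31 kernel, 16x16 downsampling), not the filter bank actually used in the thesis.

import cv2
import numpy as np

def gabor_features(gray_face, ksize=31, wavelengths=(4, 6, 8, 10, 12), n_orient=8):
    """Convolve a grayscale face crop with a Gabor filter bank and return
    the concatenated, downsampled magnitude responses as one feature vector."""
    feats = []
    for lambd in wavelengths:              # wavelength of the sinusoidal factor
        for k in range(n_orient):          # evenly spaced orientations in [0, pi)
            theta = k * np.pi / n_orient
            # arguments: ksize, sigma, theta, lambda, gamma, psi
            kernel = cv2.getGaborKernel((ksize, ksize), lambd / 2.0,
                                        theta, lambd, 0.5, 0)
            resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kernel)
            # downsample each response map to keep the feature vector compact
            feats.append(cv2.resize(np.abs(resp), (16, 16)).ravel())
    return np.concatenate(feats)

# example usage on a face crop loaded with OpenCV (hypothetical file name)
# face = cv2.cvtColor(cv2.imread("face.png"), cv2.COLOR_BGR2GRAY)
# x = gabor_features(face)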

Abstract (English)


Facial expression recognition has been one of the most challenging research topics in computer vision. Second only to verbal language, facial expression is an important form of body-language communication. The technical difficulty of facial expression recognition lies in the fact that every individual expresses emotion in a unique way; even for the same person, the facial expression for the same emotion varies slightly from one instance to the next. A facial expression results from a continuous series of physical changes of the facial muscles, usually divided into four phases: Neutral, Onset, Apex, and Offset, with the face returning to Neutral at the end of the cycle. In this thesis, we apply image processing techniques to recognize the Apex phase in static face images. We use Gabor filters to obtain facial features and then identify several minute features on the face, which serve as Action Units (AUs); different combinations of AUs represent different expressions. To recognize facial expressions, we learn the mapping from AU combinations to expressions through Random Forest training, classifying six typical facial expressions: anger, disgust, fear, happiness, sadness, and surprise.
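To make the second stage concrete, the following is a minimal sketch assuming the AU recognition stage has already produced binary AU-occurrence vectors; a Random Forest (here scikit-learn's RandomForestClassifier, standing in for the thesis's own implementation) maps those vectors to one of the six basic expressions. The AU columns, the tiny training set, and the number of trees are placeholder assumptions for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

EXPRESSIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# X: one row per image, one column per tracked AU (1 = AU detected).
# y: index into EXPRESSIONS. Both are illustrative stand-ins for the
# AU labels produced by the first-stage recognizer.
X_train = np.array([
    [1, 0, 1, 0, 0, 1],   # an AU combination labeled as anger
    [0, 1, 0, 1, 0, 0],   # an AU combination labeled as happiness
])
y_train = np.array([0, 3])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# classify the AU vector of a new test image
test_aus = np.array([[1, 0, 1, 0, 0, 1]])
print(EXPRESSIONS[clf.predict(test_aus)[0]])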
