  • Thesis

Multi-Modal Emotion Recognition for Human-Robot Interaction

Advisor: 黃漢邦

Abstract


With the rapid development of robotics, many industries have already introduced robots to assist human work, and the coexistence of robots and humans will be an important issue in the future. In industrial settings, however, the commands given to robots and the complexity of the environment are relatively simple; in home environments and public spaces, a service robot must learn and understand human social behavior before it can integrate into human society and offer assistance. Therefore, to achieve natural interaction and harmonious coexistence between robots and humans, human-robot interaction is the key focus of this development. This thesis combines facial expressions, body movements, and voice for multi-modal emotion recognition, using recurrent neural networks and the fuzzy integral to learn and fuse predictions of human emotion, improving the robot's ability to recognize human behavior. Emotion is a form of high-level cognition that influences human behavior and decision-making. The proposed multi-modal emotion recognition system lets a robot understand human emotional responses robustly: with sufficient information to respond to environmental changes, it learns from time-series data and makes decisions with dynamic adjustment, so that the robot not only performs emotion recognition but also gains human-like cognitive ability.
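As a rough illustration of the per-modality learning step described above, the following is a minimal sketch of a vanilla recurrent network's forward pass over one time-series feature sequence. All dimensions, weights, and class counts here are made-up assumptions for illustration, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 10-frame sequence of 48-dim features,
# hidden size 32, 7 emotion classes (illustrative values only).
T, D, H, C = 10, 48, 32, 7
Wx = rng.normal(0, 0.1, (H, D))   # input-to-hidden weights
Wh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden (recurrent) weights
Wy = rng.normal(0, 0.1, (C, H))   # hidden-to-output weights
b, by = np.zeros(H), np.zeros(C)

def rnn_predict(seq):
    """Forward pass of a vanilla RNN over one feature sequence."""
    h = np.zeros(H)
    for x_t in seq:                   # unroll over time steps
        h = np.tanh(Wx @ x_t + Wh @ h + b)
    logits = Wy @ h + by              # classify from the final hidden state
    e = np.exp(logits - logits.max())
    return e / e.sum()                # softmax over emotion classes

probs = rnn_predict(rng.normal(size=(T, D)))
```

In a full system, one such network would be trained per modality (face, body, voice), each producing a confidence vector over the emotion classes for later fusion.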

Parallel Abstract


Robotics has seen much development over the last few decades, and the way in which robots live with humans has become an important issue. Robots need to understand human social cues and rules to interact correctly with humans in home and public environments. Therefore, in order to reach a natural and harmonious interaction between humans and robots, human-robot interaction is a key issue. This thesis integrates facial expression, body movement, and speech tone to conduct multi-modal emotion recognition. We use a recurrent neural network as the learning model and the fuzzy integral for multi-modal fusion to enhance the cognitive ability of robots to understand human behaviors and emotions. Emotion is a kind of high-level cognition that heavily affects humans' behaviors and decisions. We propose a multi-modal emotion recognition system that allows robots to predict emotion robustly. Multi-modal information provides more complete evidence for emotion recognition and copes with different environments. Each uni-modal model is trained on time-sequence data, and the models are fused with dynamic adjustment. With the proposed method, robots not only have the ability to predict emotion, but also a human-like cognitive ability.
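The fuzzy-integral fusion step can be sketched as follows. This is a minimal Sugeno fuzzy integral over three per-modality confidence scores, with a λ-fuzzy measure solved by bisection; the density values are illustrative assumptions, not the measures actually learned in the thesis:

```python
import numpy as np

def lambda_measure(densities, tol=1e-9):
    """Solve prod(1 + lam*g_i) = 1 + lam for the nonzero root lam > -1."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < tol:
        return 0.0                                # additive measure
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    # lam > 0 when sum(g) < 1; -1 < lam < 0 when sum(g) > 1
    lo, hi = (tol, 1e6) if g.sum() < 1 else (-1 + tol, -tol)
    for _ in range(200):                          # bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_fuse(scores, densities):
    """Sugeno fuzzy integral of per-modality confidences for one class."""
    lam = lambda_measure(densities)
    order = np.argsort(scores)[::-1]              # confidences, descending
    g, best = 0.0, 0.0
    for idx in order:
        gi = densities[idx]
        g = g + gi + lam * g * gi                 # measure of growing coalition
        best = max(best, min(scores[idx], g))     # max-min aggregation
    return best

# Face, body, and voice confidences for one emotion class (made-up numbers),
# with made-up modality densities g_i.
fused = sugeno_fuse(np.array([0.9, 0.6, 0.2]), np.array([0.5, 0.3, 0.2]))
```

For a full classifier, this fused score would be computed per emotion class and the class with the highest fused confidence selected; the "dynamic adjustment" in the thesis presumably corresponds to adapting the densities over time.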
