Facial expression analysis with graphical models

Abstract


Facial expression recognition has become an active research topic in recent years due to its applications in human-computer interfaces and data-driven animation. In this thesis, we focus on the problem of how to effectively use domain, temporal and categorical information of facial expressions to help computers understand human emotions. Over the past decades, many techniques (such as neural networks, Gaussian processes, support vector machines, etc.) have been applied to facial expression analysis. Recently, graphical models have emerged as a general framework for applying probabilistic models. They provide a natural framework for describing the generative process of facial expressions. However, these models often suffer from too many latent variables or overly complex model structures, which makes learning and inference difficult. In this thesis, we analyze the deformation of facial expressions by introducing some recently developed graphical models (e.g. the latent topic model) and by improving the recognition ability of some widely used models (e.g. the HMM).

We develop three different graphical models with different representational assumptions: categories represented by prototypes, by sets of exemplars, and by topics in between. Our first model incorporates an exemplar-based representation into graphical models. To further improve the computational efficiency of the proposed model, we build it in a local linear subspace constructed by principal component analysis.

The second model is an extension of the recently developed topic model, introducing temporal and categorical information into the Latent Dirichlet Allocation model. In our discriminative temporal topic model (DTTM), temporal information is integrated by placing an asymmetric Dirichlet prior over document-topic distributions, and discriminative ability is improved by a supervised term-weighting scheme. We describe the resulting DTTM in detail and show how it can be applied to facial expression recognition.

Our third model is a nonparametric discriminative variation of the HMM. An HMM can be viewed as a prototype model, with the transition parameters acting as the prototype for one category. To increase the discrimination ability of the HMM at both the class level and the state level, we introduce linear interpolation with maximum entropy (LIME) and membership coefficients into the HMM. Furthermore, we present a general formula for output probability estimation, which provides a way to develop new HMMs. Experimental results show that the performance of some existing HMMs can be improved by integrating the proposed nonparametric kernel method and parameter adaptation formula.

In conclusion, this thesis develops three different graphical models by (i) combining an exemplar-based model with graphical models, (ii) introducing temporal and categorical information into the Latent Dirichlet Allocation (LDA) topic model, and (iii) increasing the discrimination ability of the HMM at both the hidden state level and the class level.
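The first model described in the abstract combines an exemplar-based representation with a PCA subspace for efficiency. The sketch below is a minimal illustration of that general idea only, not the thesis implementation: the scikit-learn dependency, the nearest-exemplar scoring rule, and all function and variable names are assumptions introduced here for clarity.

```python
# Hypothetical sketch: exemplar-based classification in a PCA subspace.
# The distance-based class scoring is illustrative, not the thesis model.
import numpy as np
from sklearn.decomposition import PCA

def fit_exemplar_model(X_train, y_train, n_components=20):
    """Project training features into a local linear (PCA) subspace and
    keep each class's projected samples as its exemplar set."""
    pca = PCA(n_components=n_components).fit(X_train)
    Z = pca.transform(X_train)
    exemplars = {c: Z[y_train == c] for c in np.unique(y_train)}
    return pca, exemplars

def classify(x, pca, exemplars, k=5):
    """Score each class by the mean distance to its k nearest exemplars
    in the subspace and return the closest class."""
    z = pca.transform(x.reshape(1, -1))
    scores = {}
    for c, E in exemplars.items():
        d = np.linalg.norm(E - z, axis=1)
        scores[c] = np.sort(d)[:k].mean()
    return min(scores, key=scores.get)
```

Working in the reduced subspace keeps the per-exemplar distance computations cheap, which is the efficiency motivation the abstract mentions.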
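For the DTTM, the abstract states that temporal information enters through an asymmetric Dirichlet prior over document-topic distributions. The following sketch shows only the standard collapsed Gibbs sampler for LDA with a vector-valued (asymmetric) alpha; the `temporal_alpha` helper is one plausible way, assumed here rather than taken from the thesis, to bias a frame's prior toward the previous frame's topic mixture. The supervised term-weighting scheme is not reproduced.

```python
# Sketch: LDA with an asymmetric Dirichlet prior (collapsed Gibbs sampling).
# The temporal biasing of alpha below is an assumption for illustration.
import numpy as np

def lda_gibbs(docs, V, K, alpha, beta=0.01, iters=200, seed=0):
    """docs: list of word-id lists; alpha: length-K asymmetric prior."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, K))   # document-topic counts
    nkw = np.zeros((K, V))   # topic-word counts
    nk = np.zeros(K)         # topic totals
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    theta = (ndk + alpha) / (ndk + alpha).sum(axis=1, keepdims=True)
    return theta, z

def temporal_alpha(theta_prev, base=0.1, weight=1.0):
    """Hypothetical helper: make the prior asymmetric by biasing a frame's
    alpha toward the previous frame's estimated topic proportions."""
    return base + weight * theta_prev
```

Because alpha is a vector rather than a scalar, each topic can receive a different prior mass, which is what makes the Dirichlet prior asymmetric.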
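The third model interpolates HMM output probabilities with membership coefficients. The sketch below pairs a standard scaled forward algorithm with a generic convex combination of candidate emission estimates; it is not the thesis' LIME estimator, and the array layouts and function names are assumptions made for this illustration.

```python
# Sketch: HMM log-likelihood with interpolated emission probabilities.
import numpy as np

def forward_loglik(obs_probs, pi, A):
    """Scaled forward algorithm. obs_probs[t, s] = P(o_t | state s),
    pi = initial distribution, A[i, j] = P(j | i)."""
    T, S = obs_probs.shape
    alpha = pi * obs_probs[0]
    loglik = 0.0
    for t in range(1, T):
        c = alpha.sum()
        loglik += np.log(c)
        alpha = (alpha / c) @ A * obs_probs[t]
    return loglik + np.log(alpha.sum())

def interpolated_emissions(component_probs, weights):
    """Convex combination of M candidate emission estimates with
    membership coefficients (nonnegative, summing to one).
    component_probs: (M, T, S); weights: (M,) or (M, S)."""
    w = np.asarray(weights)
    w = w[:, None, None] if w.ndim == 1 else w[:, None, :]
    return (w * component_probs).sum(axis=0)
```

Allowing the membership coefficients to vary per state, as in the `(M, S)` case, is one way to sharpen discrimination at the state level as well as the class level, in the spirit of the abstract's description.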