
Emotion Prediction from User-Generated Videos by Emotion Wheel Guided Deep Learning

Advisor: 吳家麟 (Ja-Ling Wu)

Abstract


Automatically detecting the emotions people express in a video provides useful information for many applications. Recently, with the growth of the Internet, the rise of social media, and the wide availability of video-capturing devices, people can easily share the videos they shoot online. In contrast to earlier emotion recognition work that focused on facial analysis, such user-generated videos vary greatly in both content and quality, which makes robust recognition considerably harder. To address this problem, our system adopts deep convolutional neural networks, which have recently achieved remarkable results in many visual recognition competitions, as feature extractors. In addition, we introduce the emotion wheel to refine the feature extraction pipeline and further improve the effectiveness of the deep CNN features. We evaluate the proposed system on a video dataset collected from YouTube and Flickr, raising the prediction accuracy from the previous 46.1% to 54.2%.
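As a concrete illustration of the frame-level feature extraction step described above, here is a minimal sketch in PyTorch. It assumes a generic ImageNet-pretrained backbone (torchvision's ResNet-50) and simple mean pooling over sampled frames; the thesis's actual network, layers, and frame-sampling scheme are not specified here, so all of these choices are illustrative only.

import torch
import torchvision.models as models
import torchvision.transforms as T

# Standard ImageNet preprocessing applied to each sampled frame.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Illustrative backbone: an ImageNet-pretrained ResNet-50 used purely as a
# feature extractor (its classifier layer is replaced with the identity).
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def video_feature(frames):
    """Mean-pool per-frame CNN features into one video-level vector.

    frames: a list of PIL images sampled from one video (the sampling
    scheme is an assumption; the thesis's own scheme may differ).
    """
    batch = torch.stack([preprocess(f) for f in frames])  # (N, 3, 224, 224)
    feats = backbone(batch)                               # (N, 2048)
    return feats.mean(dim=0)                              # (2048,)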

Parallel Abstract (English)


Predicting emotions in videos is important for many applications that depend on user reactions. Recently, the growing number of web services on the Internet allows users to upload and share videos very conveniently. Building a robust system for predicting emotions in such user-generated videos is a challenging problem, due to the diversity of contents and the high-level abstraction of human emotions. Motivated by the success of Convolutional Neural Networks (CNN) in several visual competitions, they are a promising tool for bridging this affective gap. In this thesis, we propose a multimodal framework to predict emotions in user-generated videos based on CNN-extracted features. The psychological emotion wheel is incorporated to learn better representations than a plain transfer-learning counterpart. We also show through experiments that traditional encoding methods for local features help improve prediction performance. Experiments conducted on a real-world dataset collected from YouTube and Flickr demonstrate that our proposed framework outperforms the previous related work, raising the prediction accuracy from 46.1% to 54.2%.
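One plausible reading of "emotion wheel guided" learning, sketched below, is to use the wheel's geometry as auxiliary supervision: fine-grained emotion classes that sit in the same sector of a psychological wheel (Plutchik-style) share a coarse target, and a second head is trained on that coarse label alongside the fine one. This is a hedged illustration only; the class inventory, the sector grouping, and the loss weighting are all assumptions, not the thesis's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical 8-way emotion inventory laid out on a wheel; adjacent classes
# are grouped into 4 coarse sectors. Both lists are illustrative assumptions.
EMOTIONS = ["joy", "trust", "fear", "surprise",
            "sadness", "disgust", "anger", "anticipation"]
SECTOR = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])  # fine label -> wheel sector

class WheelGuidedHead(nn.Module):
    """Two linear heads on top of CNN features: fine emotions + coarse sector."""
    def __init__(self, feat_dim=2048, n_fine=8, n_coarse=4):
        super().__init__()
        self.fine = nn.Linear(feat_dim, n_fine)
        self.coarse = nn.Linear(feat_dim, n_coarse)

    def forward(self, x):
        return self.fine(x), self.coarse(x)

def wheel_guided_loss(fine_logits, coarse_logits, fine_labels, alpha=0.5):
    # Coarse targets are derived from fine labels via the wheel layout, so the
    # sector structure regularizes the shared features (alpha is arbitrary).
    coarse_labels = SECTOR[fine_labels]
    return (F.cross_entropy(fine_logits, fine_labels)
            + alpha * F.cross_entropy(coarse_logits, coarse_labels))

# Usage on pooled video features such as those from the extractor sketched above:
head = WheelGuidedHead()
feats = torch.randn(4, 2048)
labels = torch.tensor([0, 3, 5, 7])
fine, coarse = head(feats)
wheel_guided_loss(fine, coarse, labels).backward()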
