Title

遠距教學情境中自然學習表情之判讀與分析

Translated Titles

The Judgment and Analysis of Spontaneous Emotion on Distance Learning

DOI

10.6845/NCHU.2012.00370

Authors

陳思如

Key Words

Affective computing ; immune memory clone algorithm ; support vector machine ; synchronous distance learning ; facial expressions

PublicationName

Master's thesis, Department of Information Management, National Chung Hsing University

Volume or Term/Year and Month of Publication

2012

Academic Degree Category

Master's

Advisor

林冠成

Content Language

Traditional Chinese

Chinese Abstract

As an emerging teaching model, distance learning extends the traditional classroom with information technology and electronic devices; applications such as interactive whiteboards and classroom response systems have gradually matured. Even as teaching methods diversify, distance learning has its limits: learners receiving instruction through a remote computer or device cannot sustain the same active learning attitude as in face-to-face teaching. Because conventional electronic devices cannot respond to different learning states, learners facing a machine lose concentration more easily than in face-to-face instruction, teachers cannot tell how well learners are following, and course planning struggles to reflect learners' actual progress.

In response, affective computing has recently been introduced into teaching so that machines can detect learners' inner emotions, or even respond to their learning state, strengthening human-computer interaction and keeping learners engaged throughout the learning process. Most current affective-computing work, however, targets learners' degree of attention, or classifies the six basic psychological expressions such as happiness, anger, and sadness; such expression categories are ill-suited to the emotions that arise during learning. If learners' degree of comprehension of the course could be captured and fed back to the teacher, it would help teachers improve their teaching methods and enhance teacher-student interaction.

This study applies affective computing to analyze the natural variation in learners' facial expressions in distance-learning settings, reading the learning emotions students produce during distance learning and classifying them into two categories, understanding and not understanding; the understanding value serves as teaching feedback intended to improve instruction. To collect spontaneous expressions, the study designed several distance-learning scenarios and conducted different types of distance teaching, both synchronous and asynchronous. Observation confirmed that under live synchronous distance teaching, learners' facial expressions can be read by the human eye as understanding or not understanding, and can therefore serve as a practical affective-computing classification target.

The study applies the immune memory clonal algorithm for feature selection and a support vector machine for classification, combining the two to build a learning-emotion classification model that is trained and tested on labeled scenario images. The classification model was validated against the JAFFE database, which collects facial expressions of Japanese women; five of its labeled emotions were used for comparison: disgust, happiness, sadness, anger, and surprise. The JAFFE labels were converted to binary values to reduce confounding factors, and the training-and-prediction model-construction procedure was applied to each expression, yielding five classification models. Across 213 images of the five expressions, the five emotion models achieved an average accuracy of 98.78%, showing that the model can read the basic expressions of different subjects. Applied to the 495 understanding-emotion images collected in this study, the same model-construction method produced a classification model with an average accuracy of 85.3%. The experiments demonstrate that affective computing can be used to recognize learners' degree of understanding from their spontaneous expressions.
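The JAFFE comparison described above converts multi-class emotion labels into five separate binary (one-vs-rest) tasks before training one model per emotion. A minimal sketch of that relabeling step, with illustrative label names and data rather than the thesis's actual code:

```python
# Hypothetical sketch: turning multi-class expression labels into five
# one-vs-rest binary tasks, as done for the JAFFE control experiment.
# The label strings and sample list below are illustrative assumptions.

EMOTIONS = ["disgust", "happiness", "sadness", "anger", "surprise"]

def to_binary_labels(labels, target):
    """Map emotion labels to 1 (the target emotion) or 0 (all other emotions)."""
    return [1 if label == target else 0 for label in labels]

labels = ["happiness", "anger", "happiness", "surprise", "sadness"]
for emotion in EMOTIONS:
    binary = to_binary_labels(labels, emotion)
    # one classification model would then be trained per emotion
    print(emotion, binary)
```

Reducing each task to a binary decision mirrors the study's own two-class (understanding / not understanding) setup, so the same model-construction procedure applies unchanged to both datasets.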

English Abstract

In recent years, with rapid developments in information and communication technology, distance learning has emerged as a new teaching model. In distance learning, how to gauge learners' status in an isolated environment without face-to-face communication has become an important topic. Most studies focus on analyzing students' degree of attention in class, in order to attract students' attention and keep them focused. This study goes further, analyzing students' cognitive status and feeding it back to the teacher so that an appropriate teaching method can be applied. A quick response from the teacher can foster an active learning attitude, and matching different cognitive states to different teaching programs can achieve individualized teaching. This paper presents a learning-state recognition model that analyzes a learner's spontaneous expressions in a synchronous distance-learning environment, classifies the learner's learning state into two understanding labels, 0 and 1, and presents the result to the teacher for adjusting the teaching program. In the face-detection phase, 66 feature points are detected and transformed into 66 eigenvalues, which are passed to a feature-selection algorithm, Immune Memory Clone Feature Selection (IMCFS), combined with a Support Vector Machine (SVM); the resulting classifier recognizes learning expressions with 85.3% accuracy under 10-fold cross-validation on 495 records. By putting learners' facial expressions through this affective-computing process, the teacher can learn the learners' status after classification by the computer and use that information to improve the teaching mechanism, achieving the goal of adaptive teaching.
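The pipeline above reduces the 66 eigenvalues with a feature-selection step and scores candidates by 10-fold cross-validation. IMCFS is not a standard library routine, so the sketch below shows only the two mechanical pieces any such pipeline needs: applying a binary feature mask (the "antibody" a clonal algorithm evolves) to the feature columns, and partitioning the 495 records into ten folds. Function names and data shapes are assumptions for illustration, not the thesis's implementation:

```python
# Illustrative sketch (not the thesis's code): selecting feature columns with a
# binary mask and forming 10-fold cross-validation splits over 495 records.
import random

def apply_mask(samples, mask):
    """Keep only the feature columns where mask[i] == 1."""
    return [[x for x, keep in zip(row, mask) if keep] for row in samples]

def ten_fold_indices(n, seed=0):
    """Shuffle sample indices and split them into 10 roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::10] for i in range(10)]

# 495 labeled records of 66 eigenvalues each, as in the experiment
samples = [[0.0] * 66 for _ in range(495)]
mask = [1, 0, 1] + [0] * 63          # hypothetical mask keeping features 0 and 2
reduced = apply_mask(samples, mask)

folds = ten_fold_indices(len(samples))
print(len(reduced[0]))               # 2 columns survive this mask
print(sum(len(f) for f in folds))    # 495: each record lands in exactly one fold
```

In the full method, each candidate mask would be scored by training an SVM (e.g. via LIBSVM, cited as [30]) on nine folds and testing on the tenth, with the clonal-selection step cloning and mutating the best-scoring masks.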

Topic Category College of Management > Department of Information Management
Social Sciences > Management
Reference
  1. [3] 潘奕安, "An Automated Facial Expression Recognition System for Low-Resolution Image Sequences," M.S. thesis, Dept. of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan, 2004.
  2. [8] National Center for High-performance Computing, Taiwan. (2012, Apr. 4). Colife Online Meeting System [Online]. Available: http://meeting.colife.org.tw/index.aspx
  3. [11] R. W. Picard, Affective Computing. London, UK: MIT Press, 1997.
  4. [13] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, USA: John Wiley & Sons, 1991.
  5. [18] F. Tsalakanidou and S. Malassiotis, "Real-time 2D+3D facial action and expression recognition," Pattern Recognition, vol. 43, no. 5, pp. 1763-1775, May, 2010.
  6. [19] G. Yang and T. S. Huang, "Human Face Detection in Complex Background," Pattern Recognition, vol. 27, pp. 53-63, Jan., 1994.
  7. [20] Z. Zeng et al., "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, 2009.
  8. [21] R. Brunelli and T. Poggio, "Face recognition: Features versus templates," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, pp. 1042-1052, 1993.
  9. [23] I. Cohen et al., "Facial expression recognition from video sequences: temporal and static modeling," Computer Vision and Image Understanding, vol. 91, no. 1-2, pp. 160-187, 2003.
  10. [24] M. Turk and A. Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol. 3, pp. 71-86, 1991.
  11. [25] M. L. Markus, "Electronic mail as the medium of managerial choice," Organization Science, vol. 5, no. 4, pp. 502-527, 1994.
  12. [28] R. W. Picard, "Emotion research by the people, for the people," Emotion Review, vol. 2, no. 3, pp. 250-254, 2010.
  13. [29] B. T. Lau, "Portable real time emotion detection system for the disabled," Expert Systems with Applications, vol. 37, no. 9, pp. 6561-6566, 2010.
  14. [30] C. C. Chang and C. J. Lin, "LIBSVM: a library for support vector machines," ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 3, pp. 1-27, Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm, 2011.
  15. [31] H. A. Elfenbein and N. Ambady, "Universals and cultural differences in understanding emotions," Current Directions in Psychological Science, vol. 12, no. 5, pp. 159-164, 2003.
  16. [34] J. F. Cohn et al., "Automatic analysis and recognition of brow actions and head motion in spontaneous facial behavior," Proc. 2004 IEEE Int. Conf. on Systems, Man and Cybernetics, vol. 1, 2004, pp. 610-616.
  17. [35] K. C. Lin et al., "The classroom response system based on affective computing," Proc. 3rd IEEE Int. Conf. on Ubi-media Computing (U-Media), 2010, pp. 190-197.
  18. [38] A. Ray and A. Chakrabarti, "Design and Implementation of Affective E-learning Strategy Based on Facial Emotion Recognition," Proc. Int. Conf. on Information Systems Design and Intelligent Applications (INDIA 2012), vol. 132, 2012, pp. 613-622.
  19. Bibliography
  20. [1] 王志良 and 祝長生, Artificial Emotion (人工情感). Beijing, China: China Machine Press, pp. 153-177, 2009.
  21. [2] P. Ekman, Emotions Revealed. London, UK: Weidenfeld & Nicolson, 2003 (Chinese translation by 楊旭, 情緒的解析. Sanhe, China: Nanhai Publishing, 2008).
  22. [4] 蘇信宏, "Image Processing for Detecting Concentration Level in Affective E-Learning," M.S. thesis, Graduate Institute of Mechatronic Engineering, Northern Taiwan Institute of Science and Technology, Taipei, Taiwan, 2007.
  23. [5] 林冠成 et al., "A Classroom Real-Time Response System Based on Affective Computing," 4th Information Systems Development Project, Dept. of Information Management, National Chung Hsing University, 2009.
  24. [6] 陳育民 and 彭剛毅, "Emotion Spectacle - An Interaction Design Study on Integrating Affective Computing into Eyeglass Design," 亞東學報 (Journal of Oriental Institute of Technology), no. 27, pp. 125-132, Jun., 2007.
  25. [7] 朱虎明 and 焦李成, "Feature Selection Based on Immune Memory Clone," Journal of Xi'an Jiaotong University, vol. 42, no. 6, pp. 679-683, 2008.
  26. [9] 星火視頻教程 (Spark Video Tutorials). (2010, Dec. 14). High School Trigonometric Functions Teaching Video Series [Online]. Available: http://www.21edu8.com/highschool/high3/19668/
  27. [10] 李政軒. (2010, Dec. 14). Hard-Margin Support Vector Machines [Online]. Available: http://www.powercam.cc/slide/6555
  28. [12] L. R. Gay and P. W. Airasian, Educational Research: Competencies for Analysis and Applications, 7th ed. Upper Saddle River, New Jersey, USA: Merrill/Prentice Hall, 2003.
  29. [14] M. Hasegawa and Y. Nasu, "A Method of Face Images by Using Multiplex Resolution Image," Institute of Electronics, Information and Communication Engineers, Japan, Tech. Rep. PRU 89-26, pp. 57-60, 1989.
  30. [15] Y. Li, "Towards A Systematic Pedagogy-Oriented Model of CRS Research: Efficacy of Classroom Response System-Facilitated Peer Instruction in Psychology Lecture Classes," M.S. thesis, Concordia University, Quebec, Canada, 2011.
  31. [16] J. S. Kissinger, "A Collective Case Study of Mobile E-Book Learning Experiences," Ph.D. dissertation, Education and Human Services, University of North Florida, USA, 2011.
  32. [17] Y. Y. Lin, "The Study of Learning Effects and Attitude of Using Interactive Whiteboard into Angle Unit of Elementary Mathematics for Fourth Graders with Different Academic Achievements," M.S. thesis, Dept. of E-Learning, National Pingtung University of Education, Taiwan, 2011.
  33. [22] A. C., M. K. Venkatesha, and B. S. Adiga, "A survey on facial expression databases," Int. Journal of Engineering Science and Technology (IJEST), vol. 2, no. 10, pp. 5158-5174, 2010.
  34. [26] L. Shen, M. Wang, and R. Shen, "Affective e-learning: using 'emotional' data to improve learning in pervasive learning environment," Educational Technology & Society, vol. 12, no. 2, pp. 176-189, 2009.
  35. [27] C. Akbiyik, "Can affective computing lead to more effective use of ICT in education?" Revista de Educacion, vol. 352, pp. 179-202, 2010.
  36. [32] P. S. Inventado et al., "Predicting student's appraisal of feedback in an ITS using previous affective states and continuous affect labels from EEG data," Proc. 18th Int. Conf. on Computers in Education (ICCE 2010), Putrajaya, Malaysia, 2010, pp. 71-75.
  37. [33] A. V. Nefian and M. H. Hayes, III, "Hidden Markov Models for Face Recognition," Proc. 1998 IEEE Int. Conf. on Acoustics, Speech and Signal Processing, vol. 5, pp. 2721-2724, 1998.
  38. [36] M. J. Lyons et al., "Coding Facial Expressions with Gabor Wavelets," Proc. 3rd IEEE Int. Conf. on Automatic Face and Gesture Recognition, 1998, pp. 200-205.
  39. [37] T. Zhang et al., "Children's emotion recognition in an intelligent tutoring scenario," Proc. 8th European Conf. on Spoken Language Processing, Jeju Island, Korea, 2004, pp. 1441-1444.
  40. [39] Luxand Inc. (2011, Dec. 30). Detect and Recognize Faces with Luxand FaceSDK [Online]. Available: http://www.luxand.com/facesdk/
  41. [40] P. Ekman. (2012, May 18). Micro Expressions [Online]. Available: https://face.paulekman.com
Times Cited
  1. 曾華薇 (2013). A Study on Applying Affective Computing and Animated Picture-Book Instruction to Children's Emotional Education. Master's thesis, Dept. of Information Management, National Chung Hsing University, 2013, pp. 1-58.
  2. 蘇莉鈞 (2017). A Study on Recognizing Learning Emotions from Facial Expression Features. Master's thesis, Dept. of Information Management, National Chung Hsing University, 2017, pp. 1-61.