
Emotion Recognition Based on Physiological Signals of Photoplethysmographic Signals and Galvanic Skin Response

Advisor: 李建誠

Abstract


This study realizes physiological-signal emotion recognition using only a small number of signal types, together with more efficient analysis methods developed for them. The thesis performs emotion recognition with two physiological signals, photoplethysmographic (PPG) signals and galvanic skin response (GSR), and proposes two methods, asymmetric multibandwidth mean-shift extremum seeking and regression-difference bend-point detection, for locating the characteristic points of the PPG waveform: the systolic peak, diastolic trough, dicrotic notch, and dicrotic peak. The asymmetric multibandwidth mean-shift extremum-seeking method detects the maxima and minima of a continuous time-series signal; it not only lifts the restriction of mean-shift to searching for modes of a density function, but also extends the search range into later portions of the signal. The regression-difference bend-point detection method applies the idea of linear regression to detect the dicrotic notch and dicrotic peak, replacing cumbersome processing such as the design of numerous thresholds or wavelet transforms. Using these two analysis methods together with multiscale entropy (multiscale entropy) analysis, classification features are extracted from the PPG and GSR signals. In the experiments, seven emotions (neutral, love, joy, surprise, sadness, anger, fear) were elicited in each of ten subjects, emotional features were extracted from their physiological signals, and a support vector machine was used for classification. Training and testing the classifier on each subject's own emotional features yields an average recognition rate of 98%.
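To make the extremum-seeking step concrete, the sketch below illustrates the general idea in Python; it is a minimal illustration under stated assumptions, not the thesis's implementation. The (shifted, non-negative) signal amplitude stands in for a sample density, and the one-dimensional mean-shift update is iterated with unequal left/right Gaussian bandwidths so the search is biased toward later samples. The function name, bandwidth scheme, and all parameter values here are hypothetical.

import numpy as np

def asymmetric_mean_shift_extremum(x, t0, bw_left, bw_right,
                                   max_iter=100, tol=1e-2):
    """Seek a local maximum of a 1-D signal by mean-shift.

    The amplitude (shifted to be non-negative; assumes a non-constant
    signal) acts as the sample density, and each step moves to the
    kernel-weighted mean of the time axis.  Unequal left/right bandwidths
    bias the search toward later samples, so the fixed point sits near,
    though slightly after, the local maximum of the smoothed signal.
    Returns a sample position (float).
    """
    idx = np.arange(len(x), dtype=float)
    density = x - x.min()                        # non-negative "density"
    t = float(t0)
    for _ in range(max_iter):
        d = idx - t
        bw = np.where(d < 0, bw_left, bw_right)  # asymmetric bandwidths
        w = np.exp(-0.5 * (d / bw) ** 2) * density
        t_new = float(np.dot(w, idx) / w.sum())  # mean-shift update
        if abs(t_new - t) < tol:                 # converged to a mode
            break
        t = t_new
    return t_new

# Toy example: one synthetic "beat"; minima are found by negating the signal.
fs = 100.0
ts = np.arange(100) / fs
pulse = np.sin(2 * np.pi * 1.2 * ts)
peak = asymmetric_mean_shift_extremum(pulse, t0=5, bw_left=3, bw_right=8)
trough = asymmetric_mean_shift_extremum(-pulse, t0=peak, bw_left=3, bw_right=20)

With bw_left == bw_right this reduces to the textbook mean-shift fixed-point iteration; the unequal bandwidths merely tilt the window forward in time, in the spirit of the forward-extended, asymmetric search the abstract describes.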

Parallel Abstract


This paper presents a system for emotion recognition using two physiological signals: photoplethysmographic (PPG) signals and galvanic skin response (GSR). We propose two novel methods for detecting the significant points of the PPG waveform (diastolic trough, systolic peak, dicrotic notch, and dicrotic peak). First, asymmetric multibandwidth mean-shift extremum seeking detects maximum and minimum modes in time-series signals. Second, regression difference bend-point detection provides a fast and simple way to locate the dicrotic notch and dicrotic peak. In addition, multiscale entropy analysis is adopted to extract features from the GSR signals. Using fewer physiological signals, together with features that respond significantly to emotion, is the central idea of our recognition system. Ten subjects participated in the experiment, and 29 features were obtained from the two bio-signals for each subject. A support vector machine was used for classification, and the recognition rate reached 98%.
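Multiscale entropy, used here for the GSR features, is a published procedure (coarse-grain the series, then compute sample entropy at each scale), so a compact sketch can illustrate this feature-extraction step. The implementation below follows the standard definition; the parameter values (m = 2, r = 0.2, ten scales) are conventional defaults rather than necessarily the thesis's choices, and the synthetic gsr series is a stand-in for a real recording.

import numpy as np

def sample_entropy(x, m, tol):
    """SampEn = -ln(A/B), where B (resp. A) counts pairs of length-m
    (resp. length-(m+1)) templates whose Chebyshev distance is below tol."""
    n = len(x)

    def matches(length):
        # n - m templates for both lengths, per the standard definition
        tpl = np.lib.stride_tricks.sliding_window_view(x, length)[: n - m]
        total = 0
        for i in range(len(tpl) - 1):
            dist = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            total += int(np.sum(dist < tol))
        return total

    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, max_scale=10, m=2, r=0.2):
    """Coarse-grain x by non-overlapping averaging at scales 1..max_scale,
    then take the sample entropy of each coarse-grained series.  The
    tolerance is fixed from the original series, per the usual convention."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    mse = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        coarse = x[: n * tau].reshape(n, tau).mean(axis=1)
        mse.append(sample_entropy(coarse, m, tol))
    return mse

# e.g. ten multiscale-entropy values as GSR features for the classifier:
rng = np.random.default_rng(0)
gsr = np.cumsum(rng.standard_normal(3000))   # stand-in for a GSR recording
features = multiscale_entropy(gsr, max_scale=10)

The resulting per-scale entropies would join the PPG waveform features to form each subject's feature vector (29 features per subject in the thesis) for the support vector machine.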

