
Static Hand Posture Recognition Based on an Implicit Shape Model

Advisor: 顏嗣鈞

Abstract


Hand gesture recognition is a popular research topic in human-computer interaction, because gestures are a natural way for humans to communicate. Previous work has mainly focused on evaluating fixed-size image windows for the presence of posture-specific features, most commonly by using the AdaBoost machine-learning algorithm to select the most discriminative features of a posture. In recent years, local-feature algorithms have received increasing attention because of their robustness to important factors such as illumination, scale, and orientation. This thesis therefore improves the local-feature-based Implicit Shape Model (ISM) approach and applies it to the static hand posture recognition problem. We find that recognition accuracy improves over previous methods in the literature, and our method additionally detects hand orientation and recognizes postures at different rotation angles. Finally, the algorithm runs in near real time, so it can be used in general static posture recognition applications or serve as a building block for dynamic gesture recognition.

Parallel Abstract


Hand gesture recognition has become increasingly popular in Human-Computer Interaction (HCI) research, as gestures provide a natural way of communication. Previous research has largely focused on searching fixed-size sub-windows and evaluating a feature subspace selected by machine-learning algorithms such as AdaBoost. In recent years, however, local features have become increasingly popular because they offer robustness to environmental illumination as well as invariance to the scale and rotation of the hand itself. In this thesis, we describe a method for static hand posture recognition based on an Implicit Shape Model (ISM) built from local features, and we observe improved recognition accuracy over previous methods. In addition, our algorithm goes beyond the sliding-window paradigm by providing useful information such as hand orientation and by handling rotated hands. The execution time of the algorithm is also reported to assess its potential for use in a near real-time posture recognition application or as a module in a hand gesture system.
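
To make the ISM-based approach described above more concrete, the following Python sketch illustrates the general voting scheme under stated assumptions: SIFT keypoints as the local features, a k-means visual-word codebook, and training images annotated with a hand-center point. It is only an illustration of generic ISM voting, not the thesis' implementation; all function names are hypothetical, and the orientation estimation and rotation handling described in the abstract are omitted for brevity.

# Minimal ISM-style sketch (not the author's exact implementation).
# Assumptions: OpenCV with SIFT available, scikit-learn, NumPy; each training
# sample is a (grayscale image, annotated hand-center point) pair.

import cv2
import numpy as np
from sklearn.cluster import KMeans


def extract_features(gray):
    """Detect SIFT keypoints and descriptors on a grayscale image."""
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors


def train_ism(training_samples, n_words=200):
    """Build a visual-word codebook and, for each word, record the offsets
    from keypoint locations to the annotated hand center (the 'implicit shape')."""
    all_desc, offsets = [], []
    for gray, center in training_samples:          # center = (cx, cy)
        kps, desc = extract_features(gray)
        if desc is None:
            continue
        for kp, d in zip(kps, desc):
            all_desc.append(d)
            offsets.append(np.array(center) - np.array(kp.pt))
    all_desc = np.array(all_desc, dtype=np.float32)
    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0).fit(all_desc)
    words = codebook.predict(all_desc)
    # Group the center offsets by visual word.
    offsets_per_word = {w: [] for w in range(n_words)}
    for w, off in zip(words, offsets):
        offsets_per_word[w].append(off)
    return codebook, offsets_per_word


def detect(gray, codebook, offsets_per_word, cell=8):
    """Cast generalized-Hough votes for the hand center and return the
    strongest hypothesis (x, y) together with its vote count."""
    kps, desc = extract_features(gray)
    if desc is None:
        return None, 0
    votes = np.zeros((gray.shape[0] // cell + 1, gray.shape[1] // cell + 1))
    words = codebook.predict(desc.astype(np.float32))
    for kp, w in zip(kps, words):
        for off in offsets_per_word.get(w, []):
            x, y = np.array(kp.pt) + off
            if 0 <= y < gray.shape[0] and 0 <= x < gray.shape[1]:
                votes[int(y) // cell, int(x) // cell] += 1
    iy, ix = np.unravel_index(np.argmax(votes), votes.shape)
    return (ix * cell, iy * cell), votes[iy, ix]

At recognition time one would threshold the vote count of the strongest hypothesis; to obtain the orientation and rotation properties mentioned in the abstract, the votes could additionally be parameterized by the keypoints' dominant orientations, which this sketch does not do.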


Cited By


陳品翰 (2011). 基於自適應性膚色偵測與輪廓匹配之即時性手勢辨識 [Master's thesis, National Taiwan University]. Airiti Library. https://doi.org/10.6342/NTU.2011.01469
