
基於視覺之手勢通用控制界面

A Generic Framework for the Design of Visual-based Gesture Control Interface

Advisor: 陳文輝

Abstract


Most current vision-based gesture control systems are confined to specific applications: users are forced to memorize predefined gestures and the mapping between gestures and commands, and each new application imposes yet another control scheme, offering little flexibility. We therefore developed a generic gesture control interface that can be integrated effectively with other applications. The system consists of two main technical modules: a hand-posture detection module and a gesture recognition module. First, the detection module captures hand images from a video device, uses rectangle features as the features for object classification, and applies a posture detector trained with the AdaBoost learning algorithm to detect specific hand postures. The recognition module then examines the sequence of postures detected in the previous stage, determines the relationships among consecutive postures, and uses a finite state machine to verify the semantic validity of each sequence; valid sequences are finally translated into control commands the target application can accept. Experiments and tests confirmed that the AdaBoost learning algorithm with rectangle features detects the defined hand postures quickly and accurately. We also designed a gesture-command configuration window that lets users define the mapping between gestures and input devices (mouse, keyboard) themselves, so the mapping can be adjusted flexibly for different applications. Test results confirm that the system achieves the research goal of adapting a single gesture interface to multiple applications.
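The speed of rectangle (Haar-like) features comes from the integral image, which lets any rectangle sum be evaluated in four lookups. A minimal pure-Python sketch of this idea follows; the function names are illustrative and do not come from the thesis implementation:

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Pixel sum inside rectangle (x, y, w, h) via four table lookups."""
    return ii[y + h][x + w] - ii[y][x + w] - ii[y + h][x] + ii[y][x]

def two_rect_feature(ii, x, y, w, h):
    """Two-rectangle Haar-like edge feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

An AdaBoost-trained detector thresholds many such feature values; each weak classifier is one feature plus a threshold, and the cascade evaluates them on the integral image at every candidate window.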

Parallel Abstract


Because current vision-based gesture systems are bound to a few specific applications, most users are forced to familiarize themselves with predefined postures in order to issue commands. We therefore developed a generic framework for vision-based gesture control that enables effective integration with other applications. The system is composed of two main modules: a hand-posture detection module and a gesture recognition module. The detection module processes images captured from a low-cost webcam to segment the hand region from the background. The AdaBoost algorithm with a set of Haar-like features is adopted to train a hand detector that achieves real-time performance and a high recognition rate. The recognition module then takes the hand postures found in the detection stage as input and feeds them into a finite state machine to analyze the corresponding commands. Careful experiments and tests confirmed that training a set of simple classifiers with AdaBoost and Haar-like features yields performance that is both fast and accurate. The system also provides an interface that lets users define whatever gesture-command mapping they prefer.
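The recognition module's idea, validating a posture sequence with a finite state machine before emitting a command, can be sketched as follows. The posture names, transition table, and emitted command are hypothetical examples, not the thesis implementation:

```python
# Hypothetical FSM: accept the sequence "fist -> open -> fist" as a click.
# Any undefined transition resets the machine to the start state.
TRANSITIONS = {
    ("start", "fist"): "armed",
    ("armed", "open"): "released",
    ("released", "fist"): "click",
}

def recognize(postures):
    """Run detected postures through the FSM; return a command or None."""
    state = "start"
    for p in postures:
        state = TRANSITIONS.get((state, p), "start")
        if state == "click":
            # In the thesis, this mapping is user-configurable
            # (gesture -> mouse/keyboard event).
            return "mouse_click"
    return None
```

Invalid sequences simply fall back to the start state, which is how the FSM rejects semantically illegal posture combinations before any command is issued.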

