With advances in human-machine interface technology, motion capture has gradually shifted from body-worn sensing devices to markerless sensing techniques. In particular, since Microsoft released the low-cost Kinect sensor, which captures user movements accurately through visible light and infrared, such technology has been increasingly applied in fields such as augmented reality, medical rehabilitation, and e-learning. An action is composed of a series of static postures and exhibits high variability in both space and time, so recognizing a continuous action effectively is an important step toward a natural user interface. This research explores the analysis of continuous motions based on an RGB-D camera. To obtain posture types from video data reliably, we first design a relational feature description method, and then use action parsing to match the described continuous postures one by one. To demonstrate the effectiveness of the approach, we develop an RGB-D video analysis test platform for evaluation.
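The pipeline described above (relational posture features, then parsing a continuous motion into a posture sequence) can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's actual method: joint names, the choice of joint pairs, the torso-length normalization, and the nearest-neighbor template matching are all hypothetical placeholders for whatever relational features and matching scheme the research defines.

```python
import math

# Hypothetical skeleton format: a posture is a dict of joint name -> (x, y, z),
# as produced by an RGB-D sensor such as the Kinect.
JOINT_PAIRS = [("hand_r", "shoulder_r"), ("hand_l", "shoulder_l"), ("head", "spine")]

def distance(p, q):
    """Euclidean distance between two coordinate (or feature) vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def relational_features(posture):
    """Describe a posture by joint-pair distances normalized by torso length,
    so the description is invariant to the user's body size (assumed scheme)."""
    torso = distance(posture["head"], posture["spine"]) or 1.0
    return [distance(posture[a], posture[b]) / torso for a, b in JOINT_PAIRS]

def classify_posture(features, templates):
    """Match a feature vector against labeled posture templates (nearest neighbor)."""
    return min(templates, key=lambda name: distance(features, templates[name]))

def parse_action(frames, templates):
    """Parse a continuous motion: classify each frame's posture, then collapse
    consecutive repeats so the action becomes a sequence of posture labels."""
    labels = [classify_posture(relational_features(f), templates) for f in frames]
    return [lab for i, lab in enumerate(labels) if i == 0 or lab != labels[i - 1]]
```

For example, a short clip whose frames match the "arms_up" template followed by frames matching "arms_down" would be parsed into the label sequence `["arms_up", "arms_down"]`, which can then be compared against stored action definitions.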