
Automatic Human Pose Estimation and Behavior Matching System Using Video Sequences

Advisor: 張元翔

Abstract


This study investigates the meaning implicit in human movement, applying human behavior recognition techniques to evaluate body language or specific behaviors and to provide quantitative data and useful information that directly or indirectly helps people learn or correct various poses and movements. We developed an automatic human pose estimation and behavior matching system using video sequences, designed for a basic video capture environment and automatic operation, which serves as a reference for judging the differences between the poses and behaviors of a subject in a video. The pipeline comprises preprocessing, pose estimation, and behavior matching. The core techniques consist of two parts: (1) fitting curves to the human skeleton; and (2) evaluating differences between poses and behaviors. The system can evaluate both single-pose differences and whole-behavior differences in a video. In experiments over multiple pose sets, the system consistently provided reference scores for inter-pose differences, with highly similar poses scoring above 90% similarity; comparing two standard behavior videos against four test behavior videos yielded an average score of 92.7%, and the system effectively presented the frame correspondences. We expect to develop the system into an auxiliary tutoring system that helps beginners learn new behaviors, or into an evaluation system for athlete training and pose correction, using the system output to judge whether a pose meets the standard and thereby reduce sports injuries.
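The single-pose difference evaluation described above can be illustrated with a minimal sketch that scores the similarity between two poses. Representing each pose as a vector of joint angles and normalizing angular differences is an assumption for illustration only; the thesis's actual skeleton-curve descriptor is not reproduced here.

```python
import math

def pose_similarity(angles_a, angles_b):
    """Similarity in [0, 1] between two poses given as joint-angle lists (radians).

    Hypothetical scoring: 1 minus the mean wrapped angular difference
    normalized by pi. The joint-angle representation is an illustrative
    assumption, not the thesis's exact pose descriptor.
    """
    if len(angles_a) != len(angles_b):
        raise ValueError("poses must have the same number of joints")
    diffs = []
    for a, b in zip(angles_a, angles_b):
        d = abs(a - b) % (2 * math.pi)
        d = min(d, 2 * math.pi - d)   # wrap into [0, pi]
        diffs.append(d / math.pi)     # normalize to [0, 1]
    return 1.0 - sum(diffs) / len(diffs)

# Identical poses score 1.0 (i.e., 100% similarity).
score = pose_similarity([0.1, 1.2, 2.0], [0.1, 1.2, 2.0])
```

A score above 0.9 would correspond to the "high similarity" poses (90%+) reported in the experiments.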

English Abstract


Human behavior analysis is an emerging research subject in the field of computer vision and pattern recognition. In this thesis, we propose an "automatic human pose estimation and behavior matching system" using video sequences. Our system design can be divided into three phases: (1) preprocessing, which detects the human silhouette and extracts silhouette descriptors; (2) pose estimation, which quantitatively characterizes and localizes human limbs (as "skeleton curves") in each frame; and (3) behavior matching, which determines the similarity between a standard and a test behavior over whole video sequences. Results were demonstrated with preliminary success on a "hand exercise behavior" and a "limb exercise behavior". In summary, our system could ultimately be used as an auxiliary tutoring system that helps beginners learn new behaviors, or as an athletes' training system that helps athletes rectify their poses and movements, thereby reducing sports injuries.
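The whole-sequence behavior matching phase, including the frame correspondences the system reports, could be sketched with dynamic time warping. DTW is an assumption here for illustration; the abstract does not name the thesis's actual alignment algorithm.

```python
def dtw_match(seq_a, seq_b, dist):
    """Align two behavior sequences (lists of per-frame pose features).

    Returns (total alignment cost, list of (i, j) frame correspondences).
    DTW is one plausible choice for whole-sequence matching; it is an
    illustrative assumption, not the thesis's stated method.
    """
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a standard frame
                                 cost[i][j - 1],      # skip a test frame
                                 cost[i - 1][j - 1])  # match the two frames
    # Backtrack to recover the frame-to-frame correspondences.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min((cost[i - 1][j - 1], i - 1, j - 1),
                   (cost[i - 1][j],     i - 1, j),
                   (cost[i][j - 1],     i,     j - 1))
        i, j = step[1], step[2]
    path.reverse()
    return cost[n][m], path

# Toy example: 1-D "frames" with absolute difference as the pose distance.
cost, path = dtw_match([0, 1, 2], [0, 1, 1, 2], lambda a, b: abs(a - b))
# `path` pairs each standard frame with its best-matching test frame(s).
```

Turning the alignment cost into a percentage score (as in the 92.7% average reported) would require a normalization step not shown here.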

