
Action Recognition Using Space-Time Optical-flow Matching

Advisor: Soo-Chang Pei (貝蘇章)

Abstract


Video is everywhere today, and one key to understanding video automatically is to analyze the motion within it. Over the past two decades, a great deal of research has been devoted to this topic. First, the general procedure of motion analysis in prior work is reviewed, covering motion detection, motion segmentation, and object classification. Next, applications of motion analysis such as video surveillance and personal identification are discussed, and the processing techniques associated with them, such as object tracking and behavior understanding, are summarized. A method that directly interprets the behavior contained in a video is then introduced. We focus on an optical-flow based approach to recognizing actions in video sequences. The method uses space-time volumes, formed by stacking frames over time, to compute the correlation between the videos under comparison. All processing starts from the smallest basic unit, the space-time patch; by establishing the properties of these patches, the correlation between the larger space-time templates built from them can be computed. Experimental results show that the method is effective. Its underlying concept is direct and simple, and it is insensitive to the appearance of the moving objects, but its high computational complexity must be taken into account when experiments are designed.

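The abstract describes matching space-time volumes of optical flow, built up from small space-time patches. The record itself contains no code, so the following is only a rough illustrative sketch of that idea, not the thesis implementation: it assumes OpenCV's Farnebäck dense optical flow as the flow estimator, and the function names (flow_volume, correlate) and their parameters are invented for illustration. The brute-force sliding correlation also makes the high computational complexity noted in the abstract concrete, since one normalized inner product is computed for every candidate space-time offset.

# Illustrative sketch only (assumed names and parameters), not the thesis code.
import cv2
import numpy as np

def flow_volume(frames):
    # Stack dense Farneback optical flow between consecutive grayscale frames
    # into a (T-1, H, W, 2) space-time volume of (u, v) motion vectors.
    flows = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flows.append(cv2.calcOpticalFlowFarneback(
            prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0))
    return np.stack(flows)

def correlate(template, test, step=2):
    # Slide the smaller template flow volume over the test flow volume and
    # record a normalized correlation score at every space-time offset;
    # higher scores mean more similar motion, regardless of object appearance.
    tT, tH, tW, _ = template.shape
    T, H, W, _ = test.shape
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-8)
    scores = np.zeros(((T - tT) // step + 1,
                       (H - tH) // step + 1,
                       (W - tW) // step + 1), dtype=np.float32)
    for i, z in enumerate(range(0, T - tT + 1, step)):
        for j, y in enumerate(range(0, H - tH + 1, step)):
            for k, x in enumerate(range(0, W - tW + 1, step)):
                patch = test[z:z + tT, y:y + tH, x:x + tW]
                p = patch - patch.mean()
                p = p / (np.linalg.norm(p) + 1e-8)
                scores[i, j, k] = (t * p).sum()
    return scores

In such a sketch, the template volume would be built from a short clip of the action of interest, and peaks in the returned score map would indicate candidate space-time locations of similar motion in the test video.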

Keywords

Action Recognition

