We have built a real-time object tracking system based on fast matching of image keypoints. To achieve real-time tracking performance, we adopt the ferns architecture to build the classifiers. The ferns architecture is an improvement over the randomized trees architecture; it not only simplifies classifier training but also yields higher classification accuracy. The experiment consists of two phases: an off-line training phase and a real-time tracking phase. In the off-line training phase, we train an independent-feature ferns classifier and a joint-feature ferns classifier separately, using training data generated by different kinds of transformation matrices. We use Prim's algorithm to build a minimum spanning tree over the keypoints and, together with our proposed joint-feature ferns classifier, model the spatial relations between neighboring keypoints, thereby improving the accuracy of keypoint recognition. In the real-time tracking phase, for each newly captured video frame we first detect keypoints and then attempt to rebuild the minimum spanning tree. To handle occlusion, we allow rebuilding only partial subsets of the minimum spanning tree; in this way, by combining independent-feature ferns recognition with joint-feature ferns recognition, we can efficiently improve the overall recognition accuracy. To speed up the originally time-consuming training process, we also propose a new scheme based on the concept of large feature sets that avoids excessive image transformation computations. With this new training scheme, the time required to train the ferns classifiers can be reduced to about 10% of the original, greatly improving training efficiency.
This thesis describes our exploration of constructing a visual tracking system based on fast keypoint matching. To achieve real-time performance, we adopt the ferns architecture, an improvement of the randomized trees architecture that is simpler and faster to train, test, and implement, for building the classifiers. We separate the experiment into two phases: i) an off-line training phase and ii) a tracking phase. In the off-line training phase, we train the independent-feature and joint-feature classifiers separately on their respective, virtually infinite training sets, which are generated from different kinds of transformation matrices. To enhance recognition accuracy, we incorporate the co-occurrence information of keypoints and use a graph-based representation to model the spatial relations between them. More specifically, we build a minimum spanning tree on the keypoints using Prim's algorithm and introduce joint-feature ferns classifiers that take the spatial relations between keypoints into account, thus improving the accuracy of keypoint recognition. In the tracking phase, for each incoming frame we detect keypoints and sequentially rebuild the minimum spanning tree. To cope with occlusion, we allow rebuilding only partial subsets of the minimum spanning tree. As a result, we can efficiently increase the overall recognition accuracy by combining independent-feature recognition with joint-feature recognition. Furthermore, to speed up the time-consuming off-line training process, we present a novel scheme called large-feature-set training, which avoids the need for intensive image transformations. With this scheme, the time required for training the ferns classifiers can be reduced to about 10% of the original.
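To make the ferns classification mechanism concrete, the following is a minimal sketch of a ferns-style keypoint classifier in the spirit of the randomized-ferns literature: each fern is a small group of binary pixel-intensity tests whose outcomes form an index into a per-fern posterior table, and the per-fern class posteriors are combined semi-naively by summing their logarithms. The class and parameter names here (FernsClassifier, num_ferns, tests_per_fern, and so on) are illustrative assumptions and not the implementation used in the thesis.

```python
import numpy as np

class FernsClassifier:
    """Sketch of ferns-based keypoint classification (illustrative only).

    Each fern applies a fixed set of random binary pixel comparisons to an
    image patch; the resulting bit string indexes a per-fern posterior table.
    Ferns are treated as independent, so their log-posteriors are summed.
    """

    def __init__(self, num_classes, num_ferns=30, tests_per_fern=10,
                 patch_size=32, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.num_classes = num_classes
        self.num_ferns = num_ferns
        self.tests_per_fern = tests_per_fern
        # Random pixel pairs (y1, x1, y2, x2) for each binary test of each fern.
        self.tests = self.rng.integers(0, patch_size,
                                       size=(num_ferns, tests_per_fern, 4))
        # Posterior counts P(fern outcome | class), Dirichlet-smoothed with +1.
        self.counts = np.ones((num_ferns, 2 ** tests_per_fern, num_classes))

    def _fern_indices(self, patch):
        """Evaluate all binary tests on one patch; return one index per fern."""
        y1, x1, y2, x2 = (self.tests[..., k] for k in range(4))
        bits = (patch[y1, x1] < patch[y2, x2]).astype(np.int64)   # (ferns, tests)
        return bits @ (1 << np.arange(self.tests_per_fern))       # (ferns,)

    def train(self, patch, class_id):
        """Accumulate one (warped) training patch for a known keypoint class."""
        idx = self._fern_indices(patch)
        self.counts[np.arange(self.num_ferns), idx, class_id] += 1

    def classify(self, patch):
        """Return the most likely keypoint class by summing log-posteriors."""
        idx = self._fern_indices(patch)
        probs = self.counts / self.counts.sum(axis=2, keepdims=True)
        log_post = np.log(probs[np.arange(self.num_ferns), idx]).sum(axis=0)
        return int(np.argmax(log_post))
```

In this sketch, the off-line training phase corresponds to repeatedly calling train with patches warped by different transformation matrices, and the tracking phase corresponds to calling classify on patches extracted around detected keypoints.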
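The graph-construction step can likewise be sketched with a textbook Prim's algorithm over 2D keypoint coordinates. The function name keypoint_mst_prim and the use of plain Euclidean edge weights are assumptions made for illustration rather than the thesis code; the resulting tree edges are the neighboring-keypoint pairs whose co-occurrence a joint-feature classifier would model.

```python
import numpy as np

def keypoint_mst_prim(points):
    """Build a minimum spanning tree over 2D keypoint locations with Prim's
    algorithm, using Euclidean distance as the edge weight.

    Returns the tree as a list of (parent_index, child_index) edges.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    in_tree = np.zeros(n, dtype=bool)
    best_dist = np.linalg.norm(pts - pts[0], axis=1)  # cheapest cost to the tree
    best_parent = np.zeros(n, dtype=int)              # tree vertex giving that cost
    in_tree[0] = True

    edges = []
    for _ in range(n - 1):
        # Pick the cheapest vertex not yet in the tree.
        candidates = np.where(~in_tree, best_dist, np.inf)
        v = int(np.argmin(candidates))
        in_tree[v] = True
        edges.append((int(best_parent[v]), v))
        # Relax connection costs through the newly added vertex.
        d = np.linalg.norm(pts - pts[v], axis=1)
        closer = (~in_tree) & (d < best_dist)
        best_dist[closer] = d[closer]
        best_parent[closer] = v
    return edges
```

During tracking, rebuilding only the connected fragments of this tree that survive occlusion corresponds to the partial-subset strategy described above.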