
Object Extraction Using a Hybrid Codebook for Single-Image Face Verification and Vehicle Verification with a Boost K-SVD Dictionary

Object Extraction using Hybrid Codebook for SSPP-based Face Identification and Boost KSVD-based Vehicle Verification

Advisors: 黃仲陵, 鐘太郎

Abstract


This thesis investigates three main topics: object extraction, face recognition, and vehicle verification. First, the object extraction method adopts model-based background subtraction. Unlike previous approaches, we use a hybrid codebook background subtraction method that combines the mixture of Gaussians with the codebook model. For dynamic backgrounds containing shadows and highlights, we propose an ellipsoid codebook and design a modified shadow/highlight removal method to cope with illumination changes. The proposed method avoids extracting false foreground pixels (e.g., dark background regions that turn into foreground under illumination changes) and avoids missing true foreground pixels. Finally, two experiments on the CVPR 2011 change detection benchmark dataset compare our method with other methods.

Second, we propose an appearance-based manifold face recognition method. Most methods of this type are trained with multiple images of the same person; however, sufficient training samples per person are usually unavailable, and such methods cannot achieve good recognition results with too few samples. Therefore, we apply the Discriminative Multi-Manifold Analysis (DMMA) method and propose a way to accelerate it. Our fast DMMA method has three parts. First, we gather the training samples of multiple persons, one sample per person, and use a modified K-means clustering method to split the set into two groups. Second, every sample in the two groups is partitioned into non-overlapping patches, and each patch serves as a training instance for DMMA. Third, we repeat the previous two steps to build a binary tree, which yields the fast DMMA. Experiments show that our method reduces the computation time of DMMA with only a slight drop in accuracy.

The third topic is verifying whether the vehicles passing through two scenes are the same vehicle. This verification is not a simple problem and cannot be solved by feature matching. Here we propose a new sparse representation (SR), a Boost K-SVD dictionary, to represent vehicles and perform vehicle verification; it represents objects more effectively. First, we use a particle filter to select the initial atoms. Then we make the dictionary nearly orthonormal, a property similar to that of matrices satisfying the Restricted Isometry Property (RIP). Finally, we use a discrimination criterion to determine the number of atoms and raise the verification accuracy. Each pair of vehicle samples from the two scenes is represented with the dictionary, and the two feature vectors are concatenated into a single input vector; pairs from the same vehicle are positive samples and pairs from different vehicles are negative samples, so the verification process reduces to a binary classification problem. The proposed Boost K-SVD dictionary therefore (1) produces a suitable sparse representation dictionary, (2) finds the initial atoms more quickly, and (3) improves vehicle verification accuracy.
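To make the hybrid background-subtraction idea above concrete, the following is a minimal Python sketch of the per-pixel decision, combining a MOG match test with a codebook brightness/color-distortion test. All function names, thresholds, and the OR-combination rule are illustrative assumptions; the thesis's ellipsoid codebook and modified shadow/highlight removal are not reproduced here.

```python
# Toy per-pixel decision of a hybrid MOG + codebook background model.
# Everything here (names, thresholds, the OR combination of the two tests)
# is a hypothetical illustration, not the thesis's actual algorithm.
import numpy as np

def mog_is_background(pixel, means, variances, weights, k_sigma=2.5, min_weight=0.1):
    """Pixel matches the MOG background if it lies within k_sigma of a strong mode."""
    for mean, var, w in zip(means, variances, weights):
        if w < min_weight:
            continue
        if np.sum((pixel - mean) ** 2) / var < k_sigma ** 2:
            return True
    return False

def codebook_is_background(pixel, codewords, eps=10.0, alpha=0.6, beta=1.3):
    """Pixel matches a codeword if its color distortion is small and its
    brightness stays inside the shadow/highlight range [alpha*I, beta*I]."""
    norm = np.linalg.norm(pixel) + 1e-6
    for cw in codewords:                      # cw = {"rgb": np.array, "brightness": float}
        axis = cw["rgb"]
        proj = np.dot(pixel, axis) / (np.linalg.norm(axis) + 1e-6)
        color_dist = np.sqrt(max(norm ** 2 - proj ** 2, 0.0))
        if color_dist < eps and alpha * cw["brightness"] <= norm <= beta * cw["brightness"]:
            return True
    return False

def is_foreground(pixel, mog_model, codewords):
    """Label a pixel as foreground only when both models reject it, which helps
    suppress false foreground caused by shadows and highlights."""
    means, variances, weights = mog_model
    return not (mog_is_background(pixel, means, variances, weights)
                or codebook_is_background(pixel, codewords))

# Example: one dark-gray codeword, one MOG mode, and a bright red pixel.
cb = [{"rgb": np.array([60.0, 60.0, 60.0]), "brightness": 104.0}]
mog = ([np.array([60.0, 60.0, 60.0])], [100.0], [0.8])
print(is_foreground(np.array([200.0, 30.0, 30.0]), mog, cb))   # True -> foreground
```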

Parallel Abstract


This thesis investigates three research topics: object extraction, face recognition, and vehicle verification. First, we develop an object extraction method based on model-based background subtraction. Different from previous methods, we introduce a hybrid codebook-based background subtraction method that combines the mixture of Gaussians (MOG) with the codebook (CB) model. For dynamic backgrounds containing highlights and shadows, we propose an ellipsoid CB model and design a modified shadow/highlight removal method that overcomes the influence of illumination changes. The method avoids extracting false foreground pixels (e.g., dark background) and missing real foreground pixels (e.g., bright foreground). Finally, we present two experiments comparing our method with others on the change detection benchmark dataset provided at CVPR 2011.

Second, we propose an appearance-based face recognition method. Most appearance-based methods use multiple samples per person for training; however, sufficient training samples for each person are usually unavailable, and these methods may fail with insufficient training samples. Therefore, we modify the Discriminative Multi-Manifold Analysis (DMMA) method and propose an acceleration scheme. Our fast DMMA method can be divided into three modules. First, given the training samples of multiple persons, one training sample per person, we use a modified K-means method to split the samples into two groups. Second, the faces in these two groups are divided into non-overlapping local patches that serve as the training instances for DMMA. Third, we repeat the previous two steps to obtain the binary-tree projection matrices of the fast DMMA. The accelerated DMMA shows only a very small loss of accuracy.

Third, verifying that the same vehicle appears in two scenes is a nontrivial problem that cannot be solved by corresponding feature matching. Here, we propose a new sparse representation (SR) for vehicle verification using the Boost K-SVD method, which offers a more effective object representation. First, we use particle filtering to find the initial atoms. Next, we generate a dictionary satisfying a nearly orthonormal property similar to the Restricted Isometry Property (RIP). Finally, we use a discrimination criterion to determine the number of atoms for enhanced verification accuracy. The vehicles in the two views are subsequently combined and represented as a feature pair, which can be either a positive pair (same vehicle) or a negative pair (different vehicles), so the verification is simplified into a binary classification problem. The contributions of the proposed Boost K-SVD method are (1) generating a proper SR dictionary, (2) finding the initial atoms more quickly, and (3) improving verification accuracy.
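As a rough illustration of the verification stage described above, the sketch below sparse-codes the two views of a vehicle over a dictionary, concatenates the two codes into one feature pair, and trains a binary classifier on positive/negative pairs. The random stand-in dictionary, the OMP coder, the LinearSVC classifier, and the toy data are all assumptions made for illustration; the thesis learns the dictionary with the proposed Boost K-SVD, which is not implemented here.

```python
# Hypothetical verification pipeline: sparse-code each view, concatenate the
# codes, and treat same/different-vehicle pairs as a binary classification task.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 128, 8

# Stand-in dictionary with unit-norm atoms (Boost K-SVD would be learned here).
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

def sparse_code(x, D, n_nonzero=sparsity):
    """Sparse representation of signal x over dictionary D via OMP."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    omp.fit(D, x)
    return omp.coef_

def pair_feature(x_view1, x_view2, D):
    """Concatenate the sparse codes of the two views into one input vector."""
    return np.concatenate([sparse_code(x_view1, D), sparse_code(x_view2, D)])

# Toy training data: 100 positive pairs (same vehicle) and 100 negative pairs.
X, y = [], []
for _ in range(100):
    v = rng.standard_normal(n_features)
    X.append(pair_feature(v, v + 0.05 * rng.standard_normal(n_features), D)); y.append(1)
    X.append(pair_feature(v, rng.standard_normal(n_features), D)); y.append(0)

# Verification reduced to binary classification over the concatenated codes.
clf = LinearSVC(C=1.0).fit(np.array(X), np.array(y))
```

The choice of a linear SVM here is only one convenient option for the final binary classifier; any discriminative classifier over the concatenated sparse codes would fit the same formulation.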

Keywords

Codebook, Shadow Removal, DMMA, Fast DMMA, K-SVD, Boost K-SVD
