
Multimodal Biometric Recognition: Methods and Applications

Advisor: 陳文雄

Abstract


Unimodal biometric systems face a variety of challenges, such as noisy data, intra-class variation, restricted degrees of freedom, non-universality, spoof attacks, and unacceptable error rates. Some of these problems can be addressed by multimodal biometric systems, which exploit the evidence presented by multiple sources of information. Aiming to improve the reliability of biometric authentication, this thesis presents a feature-level fusion approach: a two-stage transformation that produces an efficiently coded feature amalgamation in which the variance of each bit is maximized and the bits are pairwise uncorrelated. We combine two contactless biometric modalities, face and iris, and extract both global and local features for fusion; the two feature types provide complementary information, allowing the combined system to outperform either single modality.

Experiments are conducted on two datasets: dataset 1 is CASIA-Distance-Iris, and dataset 2 combines the extended Yale B face database with the UBIRIS v1 eye database. The recognition system comprises four modules: (i) preprocessing, (ii) feature extraction, (iii) feature fusion, and (iv) classification and learning. The preprocessing module detects and segments the face and iris regions of interest in a noisy image. The feature extraction module introduces a novel real local binary pattern (RLBP) histogram for global statistical features and a sharpening convolutional neural network (SCNN) for local iris structure representation. The fusion module applies the two-stage transformation to analyze the features and perform the amalgamation. Finally, a classifier built from bagged decision trees completes the classification.

Compared with several state-of-the-art multimodal biometric systems, our system achieves an equal error rate below 1% in verification. In identification, the proposed system attains an error rate below 10% while using only 10% of the fused feature vector. The experimental results show that the proposed feature amalgamation outperforms existing multimodal fusion schemes such as serial/parallel feature fusion and the weighted sum rule.
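The global statistical feature builds on the local binary pattern (LBP) histogram. The thesis's real-valued RLBP variant is not detailed in the abstract, so the following only sketches the classical 8-neighbour LBP baseline it extends; the function name and parameters are illustrative, not the thesis's own.

```python
import numpy as np

def lbp_histogram(image, bins=256):
    """Classical 8-neighbour LBP histogram (a sketch of the baseline
    that the thesis's RLBP histogram generalises)."""
    img = np.asarray(image, dtype=np.float64)
    center = img[1:-1, 1:-1]
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        # Each neighbour contributes one bit: 1 if it is >= the centre pixel.
        codes |= (neighbour >= center).astype(np.int64) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalised so images of any size compare
```

Because the histogram pools codes over the whole region, it is a global statistical descriptor, which is why it pairs naturally with the local structure captured by the convolutional branch.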
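Maximal per-component variance with pairwise-uncorrelated components is exactly what a PCA rotation delivers, so one plausible reading of the two-stage transformation is a decorrelating projection followed by sign binarisation. The sketch below assumes that reading; the thesis's actual transform may differ, and all names are hypothetical.

```python
import numpy as np

def two_stage_fusion(face_feats, iris_feats, n_bits=32):
    """Sketch of a two-stage feature amalgamation.

    Stage 1: concatenate the modalities and project onto the leading
    principal axes, yielding components that are pairwise uncorrelated
    with maximal variance. Stage 2: binarise each component by sign,
    producing one fused bit per retained axis."""
    X = np.hstack([face_feats, iris_feats])   # (n_samples, d_face + d_iris)
    Xc = X - X.mean(axis=0)                   # centre before PCA
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_bits]        # keep the largest-variance axes
    projected = Xc @ top                      # stage 1: decorrelated reals
    return (projected > 0).astype(np.uint8)   # stage 2: one bit per component
```

Retaining only `n_bits` axes is also consistent with the abstract's identification result, where a small fraction of the fused feature vector suffices.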
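The final classifier is built by bagging decision trees: each tree is trained on a bootstrap resample of the training set and the ensemble predicts by majority vote. As a self-contained illustration of that principle only, the sketch below bags depth-1 "stump" trees rather than the full trees the thesis would use; all names are illustrative.

```python
import numpy as np

def fit_stump(X, y):
    """Pick the single-feature threshold split with best training accuracy."""
    best = (0, 0.0, 1, 0.0)  # (feature, threshold, polarity, accuracy)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            pred = (X[:, f] > t).astype(int)
            for pol in (1, -1):
                p = pred if pol == 1 else 1 - pred
                acc = (p == y).mean()
                if acc > best[3]:
                    best = (f, t, pol, acc)
    return best[:3]

def bagged_stumps(X, y, n_estimators=25, seed=0):
    """Bagging: every stump sees its own bootstrap resample of (X, y)."""
    rng = np.random.default_rng(seed)
    return [fit_stump(*(lambda i: (X[i], y[i]))(
                rng.integers(0, len(X), size=len(X))))
            for _ in range(n_estimators)]

def predict(stumps, X):
    votes = np.zeros(len(X))
    for f, t, pol in stumps:
        p = (X[:, f] > t).astype(int)
        votes += p if pol == 1 else 1 - p
    return (votes * 2 > len(stumps)).astype(int)  # majority vote
```

The bootstrap resampling decorrelates the individual learners, so the vote averages away much of their variance without extra labelled data.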
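Verification performance is reported as the equal error rate (EER): the operating point where the false accept rate (impostors accepted) equals the false reject rate (genuine users rejected). A straightforward way to estimate it from genuine and impostor match scores, assuming higher scores mean a better match:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Return (EER, threshold) where FAR and FRR cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))       # closest crossing on the sampled grid
    return (far[i] + frr[i]) / 2, thresholds[i]
```

Sweeping only the observed scores as thresholds is enough, because FAR and FRR are step functions that change value only at those points.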

