
Two-step supervised dimension reduction with application to face recognition

Advisor: 陳定立

Abstract


Face recognition has been an important area of machine learning for many years, and feature extraction is one of the most studied problems in this field. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most important feature extraction methods, yet in practice both have drawbacks or limitations. In this thesis, we first discuss the primary limitation of LDA and review several well-known algorithms for handling it, and then we present two new ideas for feature extraction. One is a new method for extracting features from the data; the other is a two-step supervised dimension reduction strategy. First, we propose a method for finding the LDA subspace, based on partial least squares regression and sliced inverse regression. For categorical data, sliced inverse regression can be used to find the linear discriminant subspace once the response variable is coded appropriately (Li, 2000); however, when the dimension of the data exceeds the sample size, it still suffers from the small sample size problem. For this situation, Li (2007) proposed partial sliced inverse regression to estimate the column space of the estimator. We use this estimated column space as the linear discriminant subspace (the features) on which the data are classified. The other contribution is the proposed two-step supervised dimension reduction: before searching for the linear discriminant subspace, we suggest first using multilinear principal component analysis (MPCA) to find a tensor subspace that reduces the dimension of the data, and then applying different supervised dimension reduction algorithms to find the LDA subspace. In addition, we provide a method for choosing the dimension of the tensor subspace so that the two-step supervised dimension reduction can effectively improve classification accuracy.
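As an illustration of the first idea, the sketch below (Python, with hypothetical function and variable names) carries out sliced inverse regression with the class labels serving as slices, which yields the discriminant directions when the sample covariance is invertible. When the data dimension exceeds the sample size this covariance is singular, which is precisely the small sample size problem; the thesis instead uses the column space of the partial sliced inverse regression estimator of Li (2007), which is not reproduced here.

```python
import numpy as np

def sir_discriminant_directions(X, y, n_dirs):
    """Sliced inverse regression with class labels as slices (a sketch).

    X : (n, p) data matrix, y : (n,) integer class labels.
    Returns the leading eigenvectors of Cov(X)^{-1} Cov(E[X | y]),
    which span the estimated discriminant subspace.  Requires n > p;
    otherwise Cov(X) is singular (the small sample size problem).
    """
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = (X - mu).T @ (X - mu) / n          # overall covariance
    M = np.zeros((p, p))                       # between-slice covariance
    for k in np.unique(y):
        idx = (y == k)
        diff = X[idx].mean(axis=0) - mu
        M += idx.mean() * np.outer(diff, diff)
    # Generalized eigenproblem  Sigma^{-1} M v = lambda v
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sigma, M))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs.real[:, order[:n_dirs]]

# Example usage: with C classes, keep at most C - 1 directions and
# project the data onto them before running a classifier.
# B = sir_discriminant_directions(X_train, y_train, n_dirs=C - 1)
# features = X_train @ B
```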

Abstract (English)


Face recognition has been viewed as an important part of the human perception system for many years, and many researchers have put a great deal of effort into this field. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most popular feature extraction techniques in this area, but both suffer from drawbacks or limitations. In this thesis, we first discuss the primary limitation of LDA, the small sample size (SSS) problem, and review some well-known algorithms for overcoming it. We then present two approaches. One is a novel way to find a subspace in which the data can be classified accurately, based on the ideas of partial least squares regression (Helland, 1988, 1990, 2000) and sliced inverse regression (Li, 1991). Li et al. (2007) used partial least squares regression to overcome the SSS problem encountered by sliced inverse regression, and we use the column space spanned by their estimator as the discriminative subspace. The other is a two-step supervised dimension reduction strategy, built on multilinear principal component analysis (MPCA) and existing linear discriminant analysis methods. We also provide a strategy to determine how many dimensions to keep in the first step so that the recognition accuracy can be improved.
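The following sketch, with hypothetical helper names and a simplified alternating MPCA for 2-D images, shows the shape of the two-step strategy: project the image tensors onto a small tensor subspace first, then run a supervised method (here ordinary LDA from scikit-learn, standing in for the supervised dimension reduction algorithms discussed above) on the vectorized projections. It is only an illustration under these assumptions and does not include the dimension-selection rule proposed in the thesis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mpca_2d(images, d1, d2, n_iter=5):
    """Simplified MPCA for 2-D images (a sketch, not the full algorithm).

    images : (n, h, w) array.  Returns projection matrices U1 (h, d1)
    and U2 (w, d2), obtained by alternating mode-1 / mode-2
    eigendecompositions of the scatter matrices.
    """
    X = images - images.mean(axis=0)
    n, h, w = X.shape
    U2 = np.eye(w)[:, :d2]                     # initial mode-2 projection
    for _ in range(n_iter):
        # mode-1 scatter given the current U2
        S1 = sum((x @ U2) @ (x @ U2).T for x in X)
        U1 = np.linalg.eigh(S1)[1][:, ::-1][:, :d1]
        # mode-2 scatter given the current U1
        S2 = sum((U1.T @ x).T @ (U1.T @ x) for x in X)
        U2 = np.linalg.eigh(S2)[1][:, ::-1][:, :d2]
    return U1, U2

def two_step_features(images, labels, d1, d2):
    """Step 1: MPCA onto a (d1, d2) tensor subspace.
       Step 2: LDA on the vectorized projections."""
    U1, U2 = mpca_2d(images, d1, d2)
    mean_img = images.mean(axis=0)
    Z = np.stack([U1.T @ (x - mean_img) @ U2 for x in images])
    Z = Z.reshape(len(images), -1)
    lda = LinearDiscriminantAnalysis().fit(Z, labels)
    return lda.transform(Z), lda
```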

