Over the past decade, developing face recognition systems has remained a difficult challenge. In two-dimensional face recognition, variation in illumination and uncertainty in head pose typically reduce recognition accuracy; with the advent of depth cameras, three-dimensional information carrying coordinate values can lower the misclassification rate. This thesis uses such three-dimensional features to implement face recognition and presents the system architecture in three phases. First, we use the open-source Point Cloud Library to acquire facial depth information, from which we compute surface normal vectors and representative surface curvatures, converting the data into feature values that are easier to analyze. Next, the converted features are fed into our deep belief network for training to obtain a discriminative model. Finally, we use the trained model parameters and the deep belief network's computation to determine whether a person newly entering the scene is the specified target. Experimental results show that our system achieves a recognition accuracy of 95%.
Developing face recognition systems has been a challenging task for decades. Variation in illumination and head pose can decrease the accuracy of two-dimensional face recognition; with the advent of depth sensors, three-dimensional data can be processed to mitigate these problems in face verification. This paper describes our three-dimensional face verification approach in three phases. First, the Point Cloud Library (PCL) is applied to estimate the normal vector and principal curvatures of every point in a human-face point cloud acquired from a three-dimensional depth sensor. Next, we train an identification model on the estimated features using a deep belief network. Finally, face verification is accomplished by using the pre-trained deep belief network to decide whether the features of a newly captured face point cloud match the specified target. Experimental results demonstrate that the proposed system achieves a verification accuracy of 95%.
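The first phase, estimating per-point normals and curvature, can be illustrated with local principal component analysis. The thesis uses the Point Cloud Library; the following is a minimal standalone NumPy sketch of the same idea, with function names, the neighbourhood size `k`, and the test data all illustrative rather than taken from the thesis.

```python
# Sketch (assumption-laden): per-point normal and curvature estimation via
# local PCA, mirroring what PCL's NormalEstimation does internally.
import numpy as np

def estimate_normals_and_curvature(points, k=8):
    """For each point, fit a local plane to its k nearest neighbours.

    The eigenvector of the smallest eigenvalue of the neighbourhood's
    covariance matrix approximates the surface normal; the ratio of the
    smallest eigenvalue to the eigenvalue sum is a common curvature proxy.
    """
    n = len(points)
    normals = np.zeros((n, 3))
    curvature = np.zeros(n)
    for i, p in enumerate(points):
        # k nearest neighbours by Euclidean distance (brute force)
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        evals, evecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        normals[i] = evecs[:, 0]            # smallest-eigenvalue direction
        curvature[i] = evals[0] / evals.sum()
    return normals, curvature

# Sanity check: points sampled on a flat plane should yield near-zero
# curvature and normals aligned with the z-axis.
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, size=(50, 2))
plane = np.c_[xy, np.zeros(50)]
normals, curv = estimate_normals_and_curvature(plane)
```

In practice PCL organizes the neighbour search with a k-d tree instead of the brute-force loop above, which matters once the face cloud holds tens of thousands of points.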
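For the training phase, a deep belief network is built by greedily stacking restricted Boltzmann machines (RBMs), each pre-trained with contrastive divergence. Below is a hedged NumPy sketch of a single CD-1 update for one RBM layer; the layer sizes, learning rate, and toy data are illustrative assumptions, not the thesis's actual configuration.

```python
# Sketch (illustrative hyperparameters): one CD-1 update for an RBM,
# the building block a deep belief network stacks layer by layer.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One contrastive-divergence (CD-1) update on a batch of visible vectors."""
    # Positive phase: hidden probabilities and a binary sample from the data
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Approximate gradient: data statistics minus reconstruction statistics
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    return pv1  # reconstruction, useful for monitoring training

# Toy data: a fixed batch of binary feature vectors (stand-ins for the
# normal/curvature features after binarization or discretization).
v = (rng.random((32, 6)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((6, 4))
b_h, b_v = np.zeros(4), np.zeros(6)
errs = []
for _ in range(200):
    recon = cd1_step(v, W, b_h, b_v)
    errs.append(np.mean((v - recon) ** 2))
```

A full DBN repeats this pre-training per layer (feeding each layer's hidden activations to the next) and then fine-tunes the stack for the verification decision.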