
Appearance-based Gaze Estimation Using YOLO Architecture

Advisor: 蘇志文

Abstract


Gaze estimation is an important technology for human-machine interfaces and for analyzing user attention. In this thesis, we propose an appearance-based deep learning method for gaze estimation. Whereas previous approaches first extract and normalize the eye or face region before feeding it into a convolutional neural network for further analysis, we process the raw camera frames directly, using the YOLOv3-tiny architecture to simultaneously detect the eye regions in the image and predict the corresponding gaze point locations. We train and test on the public MPIIGaze dataset and compare our method with the main previous approaches. Experimental results show that the proposed method achieves the lowest gaze point error on MPIIGaze.
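
The central design choice in the abstract, a single detector that both localizes the eyes and regresses the on-screen gaze point, can be illustrated with a minimal sketch. The snippet below is an illustration under assumed names and sizes, not the thesis implementation: the class name EyeGazeHead, the channel count, and the anchor count are invented for the example. It simply extends a YOLO-style 1x1 prediction head with two extra per-anchor outputs for the gaze point.

# Minimal sketch (not the thesis code): a YOLO-style detection head whose
# per-anchor outputs are extended with two extra channels (gx, gy) so that
# each detected eye box also carries a regressed on-screen gaze point.
import torch
import torch.nn as nn

class EyeGazeHead(nn.Module):
    """1x1 conv head producing, per anchor: box (4) + objectness (1) + gaze (2)."""
    def __init__(self, in_channels: int = 256, num_anchors: int = 3):
        super().__init__()
        self.num_anchors = num_anchors
        self.out_per_anchor = 4 + 1 + 2          # tx, ty, tw, th, obj, gx, gy
        self.pred = nn.Conv2d(in_channels, num_anchors * self.out_per_anchor, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, _, h, w = feat.shape
        out = self.pred(feat)
        # reshape to (batch, anchors, grid_h, grid_w, outputs)
        return out.view(b, self.num_anchors, self.out_per_anchor, h, w).permute(0, 1, 3, 4, 2)

if __name__ == "__main__":
    head = EyeGazeHead()
    feat = torch.randn(1, 256, 13, 13)           # a YOLOv3-tiny-sized feature map
    pred = head(feat)
    print(pred.shape)                            # torch.Size([1, 3, 13, 13, 7])

Attaching the gaze outputs to the same prediction head is what allows a single forward pass over the raw frame to yield both the eye boxes and the gaze estimate, which is the property the abstract emphasizes.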

Parallel Abstract


Gaze estimation is one of the key technologies for human-machine interfaces and concentration analysis. Most existing methods first detect the face or eye regions and then use CNNs to predict the gaze location from the normalized input image(s). In this study, we propose a new appearance-based gaze estimation method that directly processes the image sequence captured by a camera. We adopt YOLOv3-tiny to detect the eye regions in the image and estimate the corresponding gaze location at the same time. The public MPIIGaze dataset is used to evaluate the proposed method. The experimental results show that the proposed method achieves the lowest average error compared to previous methods.
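
The reported metric is the average gaze point error on MPIIGaze. As a small sketch, not the thesis evaluation script, the mean Euclidean distance between predicted and ground-truth gaze points could be computed as follows; the array names and units are assumptions.

# Sketch of the evaluation metric implied by the abstract: mean Euclidean
# distance between predicted and ground-truth on-screen gaze points.
import numpy as np

def mean_gaze_point_error(pred_xy: np.ndarray, true_xy: np.ndarray) -> float:
    """pred_xy, true_xy: (N, 2) arrays of gaze point coordinates."""
    return float(np.linalg.norm(pred_xy - true_xy, axis=1).mean())

if __name__ == "__main__":
    # Hypothetical predictions and ground truth (units, e.g. pixels, are assumed).
    pred = np.array([[100.0, 200.0], [310.0, 405.0]])
    true = np.array([[110.0, 205.0], [300.0, 400.0]])
    print(f"average gaze point error: {mean_gaze_point_error(pred, true):.2f}")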

