
智慧眼鏡眼神目標選取技術 (Gaze-based Target Selection Technique for Smart Glasses)

EyeLasso: Real-World Object Selection using Gaze-based Gestures

Advisor: 陳彥仰

Abstract


Not available.

Keywords

Gaze interaction, Eye tracking

Abstract (English)


Selecting objects in real-world settings is currently difficult to automate and requires significant manual effort. We propose a gaze-based gesture approach using wearable eye trackers. However, effective gaze-based selection of real-world objects faces several challenges, such as the Double Role and Midas Touch problems. Prior studies required explicit manual activation/deactivation to confirm the user's intention, which impedes fast and continuous interaction. We present EyeLasso, a fast gaze-based selection technique that allows users to select the target they see with only a single Lasso gaze gesture, without requiring additional manual input. EyeLasso uses a Random Forest classifier for gesture detection and OpenCV's GrabCut to improve the accuracy of target selection. Results from our 6-user experiment with 10 object-selection tasks, evaluating both gesture detection and item selection, show that EyeLasso selected the target with 90% accuracy without requiring manual input (0.17 unintended selections per two minutes, 10% false negative rate).
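The abstract names two components: a Random Forest that detects the lasso gaze gesture, and GrabCut (via OpenCV) that refines which pixels belong to the selected object. The sketch below illustrates how such a two-stage pipeline could be wired together; the gaze features, window handling, and parameters are assumptions for illustration, not the thesis's actual implementation.

```python
"""Illustrative sketch of a gaze-gesture selection pipeline: a Random Forest
flags lasso-like gaze windows, then OpenCV's GrabCut segments the object
inside the lassoed region. Feature choices and parameters are assumptions."""
import numpy as np
import cv2
from sklearn.ensemble import RandomForestClassifier


def gesture_features(gaze_window):
    """Summarize a window of (x, y) gaze samples with simple shape features.

    Placeholder features: path length, bounding-box width/height, and the
    start-to-end distance (small when the lasso closes on itself).
    """
    pts = np.asarray(gaze_window, dtype=np.float32)
    steps = np.diff(pts, axis=0)
    path_len = float(np.sum(np.linalg.norm(steps, axis=1)))
    bbox_w, bbox_h = (pts.max(axis=0) - pts.min(axis=0)).tolist()
    closure = float(np.linalg.norm(pts[-1] - pts[0]))
    return [path_len, bbox_w, bbox_h, closure]


def train_gesture_detector(windows, labels):
    """Fit a Random Forest on labeled gaze windows (1 = lasso, 0 = other)."""
    X = np.array([gesture_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf


def select_object(frame_bgr, lasso_pts, iters=5):
    """Run GrabCut seeded by the bounding rectangle of the lasso gaze trace."""
    pts = np.asarray(lasso_pts, dtype=np.int32)
    rect = cv2.boundingRect(pts)  # (x, y, w, h) around the gaze trace
    mask = np.zeros(frame_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, rect, bgd_model, fgd_model, iters,
                cv2.GC_INIT_WITH_RECT)
    # Pixels labeled foreground or probable foreground form the object mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```

In use, the detector would run continuously over a sliding window of gaze samples; only when it reports a lasso gesture would the segmentation step fire, which is one way to avoid the Midas Touch problem without a manual trigger.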
