In this paper, we propose an automatic eye-tracker calibration mechanism built on top of a visual selection method, allowing the operator to recalibrate without interruption. The method updates the system's internal regression model using gaze data produced during ordinary eye-tracker operation, so that the user can maintain good calibration quality over long-term use. The underlying selection method takes the form of "Multiple Confirm": a two-step selection procedure that prevents accidental activation of unintended targets. The user must first dwell on the object to be selected and then, in a second step, move the gaze onto a confirmation target to complete the selection. We use the confirmation target in this second step as the region for collecting gaze data, so the automatic calibration mechanism is triggered every time the user performs a confirmation. Beyond the concept of folding recalibration into ordinary operation, this work also proposes partitioning the eye-tracker's regression model into sectors according to the data-collection regions, and converting the model into a weighted regression model to balance the gaze deviation at different positions. We implemented a proof-of-concept interface for participants to use, and on top of it analyzed the usage scenarios after the automatic calibration system was added.
In this paper, we present a novel approach for recalibrating head-mounted eye-tracking systems at runtime without interrupting the user's working tasks. By gathering data points from the user's ordinary interactions (e.g., selecting a target), we update the regression model that maps the device's pupil positions to the eye-tracker's gaze points. This approach eliminates the need for an explicit recalibration process: because the stability of the regression model is maintained continuously, the user can carry on with his/her tasks without interruption even when the mapping quality drops. Our method is built on a known dwell-based gaze selection method, "Multiple Confirm", which performs a two-step validation when selecting targets: the user must dwell on the target and then on a separate confirmation target to complete the selection. We modify this confirmation target and collect gaze data near it, which replace the outdated data in the calibration. A robust model is then developed to update the regression model's different sectors in sequential order. To evaluate the proposed method, we compared it against the use of an eye-tracker without runtime recalibration. The results indicate that the accuracy bias of the model can be kept within a certain visual angle even under long-term usage.
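To make the idea concrete, the sketch below illustrates one plausible realization of the described scheme: a second-order polynomial gaze mapping fit by weighted least squares, with per-sector sample buffers in which newly collected confirmation-target samples replace the oldest ones. The class name, buffer sizes, and polynomial basis are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

class SectorWeightedCalibrator:
    """Illustrative sketch (assumed design, not the paper's code):
    maps pupil coordinates to screen gaze points with a 2nd-order
    polynomial fit by weighted least squares, refit whenever a new
    sample replaces an outdated one in its screen sector."""

    def __init__(self, n_sectors=4, samples_per_sector=10):
        # one FIFO buffer of (pupil, screen, weight) per screen sector
        self.buffers = {s: [] for s in range(n_sectors)}
        self.samples_per_sector = samples_per_sector
        self.coeffs = None  # shape (6, 2): one column per screen axis

    @staticmethod
    def _features(pupil):
        x, y = pupil
        # 2nd-order polynomial basis, a common eye-tracker mapping
        return np.array([1.0, x, y, x * y, x * x, y * y])

    def add_sample(self, sector, pupil_xy, screen_xy, weight=1.0):
        buf = self.buffers[sector]
        buf.append((np.asarray(pupil_xy, float),
                    np.asarray(screen_xy, float), weight))
        if len(buf) > self.samples_per_sector:
            buf.pop(0)  # discard the outdated sample in this sector
        self._refit()

    def _refit(self):
        rows, targets, weights = [], [], []
        for buf in self.buffers.values():
            for pupil, screen, w in buf:
                rows.append(self._features(pupil))
                targets.append(screen)
                weights.append(w)
        if len(rows) < 6:
            return  # not enough data to determine 6 coefficients
        A, b = np.array(rows), np.array(targets)
        sw = np.sqrt(np.array(weights))[:, None]
        # weighted least squares via sqrt-weight row scaling
        self.coeffs, *_ = np.linalg.lstsq(sw * A, sw * b, rcond=None)

    def predict(self, pupil_xy):
        return self._features(pupil_xy) @ self.coeffs
```

Keeping a fixed-size buffer per sector is one way to realize the paper's sequential per-sector updates: each confirmation step overwrites only the stale data in the sector it lands in, while the weights let samples from different regions contribute unequally to the fit.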