Understanding Human Gaze as a Nonverbal Communication Cue in Human-Robot Interaction

Advisor: Han-Pang Huang (黃漢邦)

Abstract

Human eyes are a powerful non-verbal communication tool: eye gaze not only gives cues about people's interest, attention, and intention, but also regulates several kinds of face-to-face social interaction. Moreover, people unconsciously yet rigorously follow specific unwritten rules when directing their gaze during social interactions. When it comes to robots, however, only a few Human-Robot Interaction (HRI) applications take human gaze into account, and those that do address only specific aspects or scenarios. This thesis develops a comprehensive intelligent system that automatically senses, understands, and reacts to human eye gaze, both to improve the smoothness of HRI and to make robots behave in a more human-like way. The online system, mounted on a mobile robot developed in the laboratory, detects and tracks human gaze in 2D images with a Convolutional Neural Network (CNN), then uses a novel incremental variant of the Hidden Markov Model (iCHMM) to estimate the intention of the person with whom the interaction is taking place. Finally, the robot reacts according to the estimated intention. The system estimates people's intentions with an overall accuracy above 80%; it increased the success rate of establishing interaction as a person approached, decreased turn-taking mistakes in conversations, and was shown to be effective in raising the overall quality of the user experience during HRI.
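The abstract does not reproduce any code, but the gaze-detection stage it describes can be illustrated with a minimal sketch: a small CNN that regresses a gaze direction (yaw, pitch) from a cropped 2D eye image. The architecture, input resolution, and layer sizes below are illustrative assumptions, not the network actually used in the thesis.

```python
# Hypothetical sketch of the gaze-estimation stage: a small CNN that
# regresses a 2D gaze direction (yaw, pitch) from a grayscale eye patch.
# Input size and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers


def build_gaze_cnn(input_shape=(36, 60, 1)):
    """Return a CNN mapping a grayscale eye patch to (yaw, pitch) angles."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(20, 5, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(50, 5, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(500, activation="relu"),
        layers.Dense(2),  # regressed gaze angles (yaw, pitch)
    ])


model = build_gaze_cnn()
model.compile(optimizer="adam", loss="mse")  # train as angle regression
```

At run time, such a network would be applied to eye patches cropped from each camera frame, and the predicted angles discretized into gaze cues (e.g., looking at the robot or away) for the intention estimator.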
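The intention-estimation stage can likewise be sketched as an incremental Bayesian filter over discretized gaze observations, which is the core forward recursion that any HMM variant builds on. The hidden states, observation symbols, and probability values below are invented for illustration; the thesis's iCHMM adds its own incremental structure on top of this basic update.

```python
# Hypothetical sketch of intention estimation as an HMM forward filter.
# States, observations, and all probabilities are illustrative assumptions,
# not the iCHMM parameters from the thesis.
import numpy as np

STATES = ["wants_interaction", "no_interaction"]   # hidden intentions
OBS = {"gaze_at_robot": 0, "gaze_away": 1}         # discretized gaze cues

pi = np.array([0.5, 0.5])          # initial belief over intentions
A = np.array([[0.9, 0.1],          # intention transition probabilities
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],          # P(observation | intention)
              [0.3, 0.7]])


def forward_step(belief, obs_idx):
    """One incremental Bayes update: predict with A, correct with B."""
    predicted = belief @ A                  # propagate intention dynamics
    updated = predicted * B[:, obs_idx]     # weight by observation likelihood
    return updated / updated.sum()          # renormalize to a distribution


belief = pi
for obs in ["gaze_at_robot", "gaze_at_robot", "gaze_away", "gaze_at_robot"]:
    belief = forward_step(belief, OBS[obs])

print(dict(zip(STATES, belief.round(3))))   # current intention estimate
```

In a pipeline like the one described, the robot could act (e.g., initiate or yield a conversational turn) once the belief in a given intention crosses a decision threshold.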

