
Person Identification and Tracking under Occlusion for Accompanist Robots

Advisors: 曾煜棋, 林靖茹

Abstract


Person identification and tracking (PIT) is an essential problem in computer vision and robotics. It has long been studied and addressed with technologies such as RFID and face, fingerprint, and iris recognition. These approaches, however, are limited by environmental constraints (such as lighting and obstacles) or require close contact with specific devices, so their recognition rates depend heavily on the usage scenario. In this work, we consider an accompanist robot that provides follow-me and guide-me services. Such a robot needs to identify each person in front of it and stay at a pre-defined distance from the target person. We further study a more challenging scenario in which the target person may be occluded from time to time. To enable robust PIT, we present EOY (Eye On You), a data fusion technique that integrates two common types of sensors: an RGB-D camera and a wearable inertial sensor. Since the data generated by these sensors share common features, we can fuse them to perform PIT. We integrate our scheme into a robotic platform and show that it can follow the target person even when the RGB-D camera captures no biometric features. Practical issues, such as time synchronization and coordinate-system calibration, are also addressed. Our experimental evaluation shows that the system achieves a high and stable recognition rate as well as following rate.
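
To make the fusion idea above concrete, the following is a minimal sketch (not the thesis's actual implementation) of binding wearable users to depth-camera skeletons by correlating their motion: a skeleton's wrist trajectory is differentiated into an acceleration-magnitude signal and compared against each wearable accelerometer stream. All function and variable names here are illustrative assumptions.

    import numpy as np

    def accel_magnitude_from_positions(positions, dt):
        # Differentiate a (T, 3) wrist-position track twice to obtain an
        # acceleration-magnitude signal comparable to an accelerometer's.
        velocity = np.gradient(positions, dt, axis=0)
        acceleration = np.gradient(velocity, dt, axis=0)
        return np.linalg.norm(acceleration, axis=1)

    def match_identities(skeleton_tracks, imu_streams, dt):
        # skeleton_tracks: dict track_id -> (T, 3) wrist positions (RGB-D camera)
        # imu_streams:     dict user_id  -> (T,) acceleration magnitudes (wearable)
        # Returns, for each wearable user, the best-matching skeleton track.
        matches = {}
        for user_id, imu_acc in imu_streams.items():
            best_track, best_score = None, -1.0
            for track_id, positions in skeleton_tracks.items():
                cam_acc = accel_magnitude_from_positions(positions, dt)
                n = min(len(cam_acc), len(imu_acc))
                score = np.corrcoef(cam_acc[:n], imu_acc[:n])[0, 1]
                if score > best_score:
                    best_track, best_score = track_id, score
            matches[user_id] = (best_track, best_score)
        return matches

Once a wearable is bound to a skeleton track, the robot can keep following that track while biometric features are occluded, re-running the correlation check whenever tracks are lost and re-acquired.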

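The abstract also names time synchronization between the two sensor streams as a practical issue. One standard way to handle it (offered here only as an assumed illustration, not necessarily the thesis's method) is to estimate the clock offset by cross-correlating the two acceleration signals after resampling them to a common rate:

    import numpy as np

    def estimate_time_offset(cam_acc, imu_acc, fs):
        # Estimate the clock offset (in seconds) between two acceleration
        # signals resampled to a common rate fs, via cross-correlation.
        cam = (cam_acc - cam_acc.mean()) / (cam_acc.std() + 1e-9)
        imu = (imu_acc - imu_acc.mean()) / (imu_acc.std() + 1e-9)
        corr = np.correlate(cam, imu, mode="full")
        lag = int(np.argmax(corr)) - (len(imu) - 1)
        return lag / fs  # positive: camera stream lags the wearable stream

The estimated offset can then be applied before the correlation-based matching sketched above, so that both signals describe the same instants in time.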
