
Deep Learning Based Human Front Following and Its Application to Autonomous Vehicle

Advisor: 吳炳飛

Abstract


In the society of the future, interaction and cooperation between humans and robots will be very important. Robotic automation can greatly reduce the burden on workers such as nurses and elder caregivers, letting them concentrate on what truly requires their attention. Based on observations of modern users, this thesis proposes an automation mode rarely recorded in the prior literature: active front-following, in which the robot follows its companion from the front. Front-following consists of two parts. The first is human orientation detection: a camera observes the companion, and through deep learning training we extract from the depth image feature coefficients that represent the human body's orientation angle. Feeding these coefficients into our model classifies the orientation into discrete categories, and from the classified category together with the body's spatial coordinates we estimate the exact orientation angle. The second part is fuzzy Q-learning logic control: we add Q-learning to the inputs of the fuzzy controller, so the robot can adjust its route at any time based on environment information. Moreover, because the robot actively detects the companion's orientation angle and relative position, the companion can indicate the robot's turning direction simply by rotating his or her body, without any additional lateral displacement. Combined with terrain information from a laser range finder, the robot can follow in front of the companion while avoiding obstacles.
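The orientation-detection step above (classify the body orientation, then refine the angle from spatial coordinates) can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the shoulder-coordinate geometry and the number of orientation classes (`N_CLASSES = 8`) are assumptions introduced here for clarity.

```python
import math

# Assumed number of discrete orientation classes the classifier outputs.
N_CLASSES = 8

def orientation_from_shoulders(left, right):
    """Continuous body-orientation angle (degrees) in the ground plane.

    `left` and `right` are (x, z) ground-plane coordinates of the two
    shoulders (e.g. from a depth camera); the facing direction is taken
    perpendicular to the shoulder line, normalized to [0, 360).
    """
    dx = right[0] - left[0]
    dz = right[1] - left[1]
    return (math.degrees(math.atan2(dz, dx)) + 90.0) % 360.0

def orientation_class(angle_deg, n=N_CLASSES):
    """Quantize a continuous angle into one of n equal sectors."""
    sector = 360.0 / n
    return int(((angle_deg + sector / 2) % 360.0) // sector)
```

A classifier trained on depth-image features would predict the class directly; the continuous estimate then pins down the exact angle within that class.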

Parallel Abstract (English)


In future society, interaction and cooperation between humans and robots will become extremely important. Because robotic automation saves a large amount of labor, caregivers and nurses can focus on the people they really care about. This paper proposes a novel strategy for automatic human-following control, front-following, in which the robot follows in front of its companion, a mode rarely described in the literature. The front-following process is divided into two parts. The first is human orientation detection: a deep learning model is trained on features of human body orientation captured from cameras and then classifies the orientation into discrete classes; the exact orientation is estimated from the human body coordinates and the classification result. The second is fuzzy Q-learning logic control: to let the robot adjust its route immediately using environment information, we add Q-learning to the inputs of the fuzzy logic controller. In addition, the robot spontaneously detects the companion's orientation and relative position, so the companion only needs to turn his or her body to instruct the robot to make a turn; no additional lateral displacement is required. Combined with laser range finder (LRF) information about the environment, the robot can perform front-following and obstacle avoidance simultaneously without collision.
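The Q-learning component of the fuzzy controller can be sketched as a standard tabular update over a coarsely discretized state. The state bins, action set, and learning constants below are illustrative assumptions, not the thesis's exact design; they show only how a Q-value update would tune steering decisions from environment feedback.

```python
import random

# Illustrative hyperparameters (assumed, not from the thesis).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = [-1, 0, 1]  # steer left / go straight / steer right

def discretize(angle_err_deg, dist_m):
    """Map continuous (heading error, follow distance) to a coarse state."""
    a = min(2, max(0, int((angle_err_deg + 45) // 30)))  # 3 angle bins
    d = 0 if dist_m < 1.0 else 1                         # near / far
    return (a, d)

def choose_action(Q, s):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

def q_update(Q, s, a, reward, s_next):
    """One Q-learning step: Q <- Q + alpha * (r + gamma * max Q' - Q)."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In the thesis the learned values feed the fuzzy controller's inputs rather than driving the wheels directly; this sketch covers only the Q-learning update itself.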

