
機器人的人類群體行為與人類移動認知地圖建構

Construction of Behavior Cognitive Map with Group Activities and Human Movement for Robots

Advisor: Han-Pang Huang (黃漢邦)

Abstract


With advances in science and technology, the development and application of robots have become increasingly widespread. To integrate robots more smoothly into daily life, they must understand their environment and be aware of abstract social norms. Building an environment model to capture the relation between human behavior and space is an essential part of robot cognition; in particular, human–human interaction and human movement are closely tied to the environment. This thesis is devoted to constructing environment models of group activities and human movement. For the group-activity model, it proposes a deep learning network that describes interpersonal relations with vectors to recognize group activities, together with a mathematical model that dynamically revises the map to adapt to changes in the environment. For the human-movement model, a Markov chain is used to describe the movement intent at each location. With these two environment models, a robot can understand which behaviors are appropriate at each location and respond suitably to people performing prohibited behaviors.

Parallel Abstract (English)


With the progress of science and technology, the development and application of robots has become more extensive. To share the same environment and cooperate with humans, robots need to understand the external environment and the abstract social rules of human society. Building an environment model to understand the relation between human behavior and the environment is extremely important in robot perception. Human–human interaction behavior and human movement behavior are closely related to the environment. As a result, this thesis is devoted to constructing an environment model composed of human–human interaction information and human movement information. For the model of human–human interaction, we propose a deep learning network, which uses a vector to describe the relation between two people, to recognize group activities, together with a mathematical model that dynamically adjusts its parameters to adapt to changes in the environment. For the model of human movement, we use a Markov chain to describe the tendency of movement. With these two environment models, the robot can determine whether its behavior is appropriate or prohibited at a given location, and thus select an appropriate strategy for interacting with people.
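The Markov-chain movement model described above can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the locations, transition probabilities, and the `movement_tendency` function are all hypothetical assumptions chosen for the example.

```python
# Hypothetical 4-location environment: P[i][j] is the probability that a
# person at location i moves to location j in the next time step.
locations = ["hallway", "desk", "door", "lounge"]
P = [
    [0.2, 0.3, 0.3, 0.2],  # from hallway
    [0.1, 0.7, 0.1, 0.1],  # from desk: people tend to stay put
    [0.4, 0.1, 0.3, 0.2],  # from door
    [0.2, 0.2, 0.1, 0.5],  # from lounge
]

def movement_tendency(start, steps):
    """Distribution over locations after `steps` transitions from `start`."""
    n = len(locations)
    dist = [0.0] * n
    dist[locations.index(start)] = 1.0
    for _ in range(steps):
        # One Markov step: new_dist[j] = sum_i dist[i] * P[i][j]
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist

dist = movement_tendency("door", 3)
```

Each row of `P` sums to 1, so `dist` remains a valid probability distribution after any number of steps; a robot could read the resulting distribution as the movement tendency at that location.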

