
以視覺為基礎之示範學習與協作機器人系統

Vision-Based Learning from Demonstration and Collaborative Robotic Systems

Advisor: 許陳鑑
The full text of this dissertation will be available for download on 2026/07/01.


Abstract


Robot arms have been widely used in automated factories over the past decades. However, most conventional robots operate on the basis of pre-defined programs, which limits their responsiveness and adaptability to changes in the environment. When new tasks are deployed, weeks of reprogramming by robotic engineers/operators are often inevitable, resulting in costly downtime and lost production time. To address this problem, this dissertation proposes a more intuitive way for robots to perform tasks through learning from demonstration (LfD), built on two major components: understanding human behavior and reproducing the task with a robot. For understanding human behavior and intent, two approaches are presented. The first performs multi-action recognition with an inflated 3D network (I3D), followed by a proposed statistically fragmented approach that refines the recognition results. The second is a vision-based spatial-temporal action detection method that detects human actions, with a focus on fine-grained hand movements, in real time to establish an action base. For robot reproduction, the sequence of actions in the action base is integrated with the key path derived by an object trajectory inductive method for motion planning, allowing the robot to reproduce the task demonstrated by the human user.

In addition to static industrial robot arms, collaborative robots (cobots) intended for human-robot interaction are playing an increasingly important role in intelligent manufacturing. Though promising for many industrial and home-service applications, cobots still face open issues, including understanding human intention in a natural way, adapting task execution when the environment changes, and navigating the working environment. This dissertation therefore proposes a modularized solution for mobile collaborative robot systems, in which a cobot equipped with a multi-camera self-localization scheme understands human intention naturally via voice commands and executes the tasks instructed by the human operator in unseen scenarios when the environment changes. To validate the proposed approaches, comprehensive experiments are conducted and presented in this dissertation.
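The LfD pipeline summarized above (recognize demonstrated actions clip by clip, smooth the predictions statistically, store them as an action base, and replay them through motion planning) can be illustrated with the minimal Python sketch below. It is an illustration only, not the dissertation's implementation: the I3D recognizer, object detector, trajectory-induction step, and motion planner are stood in for by hypothetical callables (recognize, object_of, path_of, move_along), and the fragment-based majority vote is a simple stand-in for the proposed statistically fragmented approach.

```python
from collections import Counter
from dataclasses import dataclass
from typing import Callable, List, Sequence, Tuple


@dataclass
class ActionStep:
    """One entry of the action base: a recognized action, the object it acts
    on, and the key path (waypoints) used to reproduce the motion."""
    action: str
    target_object: str
    key_path: List[Tuple[float, float, float]]


def fragment_vote(clip_labels: Sequence[str], fragment_size: int = 5) -> List[str]:
    """Split the per-clip label stream into fragments and keep the majority
    label of each fragment, suppressing spurious single-clip errors."""
    fragments = [list(clip_labels[i:i + fragment_size])
                 for i in range(0, len(clip_labels), fragment_size)]
    return [Counter(frag).most_common(1)[0][0] for frag in fragments]


def build_action_base(demo_clips: Sequence,
                      recognize: Callable[[object], str],
                      object_of: Callable[[str], str],
                      path_of: Callable[[str], List[Tuple[float, float, float]]]
                      ) -> List[ActionStep]:
    """Recognize each demonstration clip, smooth the label stream, merge
    consecutive repeats, and attach the object and induced key path."""
    labels = fragment_vote([recognize(clip) for clip in demo_clips])
    steps: List[ActionStep] = []
    for label in labels:
        if steps and steps[-1].action == label:
            continue  # consecutive identical labels belong to the same step
        steps.append(ActionStep(label, object_of(label), path_of(label)))
    return steps


def reproduce(steps: List[ActionStep],
              move_along: Callable[[List[Tuple[float, float, float]]], None]) -> None:
    """Replay the demonstrated task by moving the robot along each key path."""
    for step in steps:
        print(f"Executing '{step.action}' on '{step.target_object}'")
        move_along(step.key_path)


if __name__ == "__main__":
    # Toy label stream with one spurious misclassification ("place" in clip 5).
    toy_clips = ["reach"] * 4 + ["place"] + ["grasp"] * 5 + ["place"] * 5
    action_base = build_action_base(
        toy_clips,
        recognize=lambda clip: clip,           # stand-in for the I3D recognizer
        object_of=lambda action: "workpiece",  # stand-in for object detection
        path_of=lambda action: [(0.0, 0.0, 0.1), (0.2, 0.1, 0.1)],  # induced key path
    )
    reproduce(action_base, move_along=lambda path: None)  # stand-in for motion planning
```

Running the toy example yields three merged steps (reach, grasp, place); the fragment-level majority vote absorbs the single mislabeled clip before the action base is built.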

