Mapless Lidar Navigation Control of Two-Wheeled Mobile Robots Based on Deep Imitation Learning

Advisor: 周永山

Abstract


Navigation control is one of the core functions of an autonomous mobile robot: it allows the robot to move through its working environment while avoiding obstacles. Most existing navigation control techniques rely on a known environment map; in an unknown environment, or when no up-to-date map is available, the robot must first run a map-building procedure before it can perform navigation tasks. To overcome this limitation, this thesis proposes a mapless LiDAR navigation control system for two-wheeled mobile robots based on deep imitation learning, which performs data-driven navigation control directly from LiDAR sensor readings and the coordinates of the target point. The proposed deep convolutional neural network model outputs motion control commands without requiring an environment map or any tuning of navigation-algorithm parameters, enabling navigation control of the mobile robot in dynamic or unknown environments. To collect the training dataset, we manually operated the two-wheeled mobile robot to avoid obstacles while recording the LiDAR sensor readings, the relative coordinates of the target point, and the corresponding motion control commands; we then applied data augmentation to the recorded samples to enlarge the dataset. The proposed CNN model consists of a LiDAR signal convolution module and a movement prediction module, which extract features from the LiDAR readings and predict the robot's motion behavior, respectively. During training, the model learns through end-to-end imitation learning to map the input LiDAR readings and target point coordinates to the motion control commands given by our expert policy. Experimental results show that the proposed system navigates safely in the known environment and also reaches the target point with an 80% success rate in an unknown environment without an environment map, with navigation behavior close to that of the expert policy. These results confirm that the proposed mapless LiDAR navigation control system overcomes the map-dependence limitation of existing navigation control methods.
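The abstract states that data augmentation was used to enlarge the recorded dataset but does not name the transform. One common choice for planar LiDAR demonstrations is left-right mirroring: the beam order of the scan is reversed, the lateral offset of the target is negated, and left/right motion commands are swapped. The sketch below is hypothetical; the command indices and the assumption of a discrete command set are illustrative and not taken from the thesis.

```python
import numpy as np

# Hypothetical discrete command indices; the thesis's actual command
# set is not given in the abstract.
LEFT, RIGHT = 1, 2
MIRROR_CMD = {LEFT: RIGHT, RIGHT: LEFT}

def mirror_sample(scan: np.ndarray, target_xy: np.ndarray, command: int):
    """Reflect one recorded demonstration about the robot's forward axis:
    reverse the beam order of the scan, negate the lateral (y) component
    of the relative target coordinates, and swap left/right commands.
    Applied to every sample, this doubles the dataset size."""
    mirrored_scan = scan[::-1].copy()
    mirrored_target = np.array([target_xy[0], -target_xy[1]])
    return mirrored_scan, mirrored_target, MIRROR_CMD.get(command, command)
```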
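The abstract describes the network only as a LiDAR signal convolution module followed by a movement prediction module, without layer-level detail. The following PyTorch sketch shows one plausible realization under stated assumptions: a 360-beam scan, 1-D convolutions over the scan, and a small discrete command set; none of the layer sizes are taken from the thesis.

```python
import torch
import torch.nn as nn

class MaplessNavCNN(nn.Module):
    """Illustrative two-module network: a 1-D convolution block extracts
    features from the LiDAR scan, and a fully connected block fuses them
    with the relative target coordinates to predict a motion command.
    All layer sizes here are assumptions, not the thesis's architecture."""

    def __init__(self, n_beams: int = 360, n_commands: int = 5):
        super().__init__()
        # LiDAR signal convolution module (assumed layout).
        self.lidar_conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened feature size with a dummy scan.
        feat_dim = self.lidar_conv(torch.zeros(1, 1, n_beams)).shape[1]
        # Movement prediction module: scan features + (dx, dy) to target.
        self.predict = nn.Sequential(
            nn.Linear(feat_dim + 2, 128), nn.ReLU(),
            nn.Linear(128, n_commands),  # logits over discrete commands
        )

    def forward(self, scan: torch.Tensor, target_xy: torch.Tensor) -> torch.Tensor:
        # scan: (B, n_beams) range readings; target_xy: (B, 2) relative goal.
        feats = self.lidar_conv(scan.unsqueeze(1))
        return self.predict(torch.cat([feats, target_xy], dim=1))
```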
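End-to-end imitation learning from recorded expert demonstrations is typically implemented as behavior cloning, i.e., supervised learning against the operator's commands. A minimal sketch along those lines, assuming the dataset yields (scan, target_xy, command) tuples with discrete command labels:

```python
import torch
from torch.utils.data import DataLoader, Dataset

def train_behavior_cloning(model: torch.nn.Module, dataset: Dataset,
                           epochs: int = 20, lr: float = 1e-3) -> torch.nn.Module:
    """Fit the network to the expert's recorded commands with a
    cross-entropy loss. Assumes `dataset` yields (scan, target_xy,
    command) tuples; all hyperparameters are illustrative."""
    loader = DataLoader(dataset, batch_size=64, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for scan, target_xy, command in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(scan, target_xy), command)
            loss.backward()
            optimizer.step()
    return model
```

At run time the trained model would be queried every control cycle with the current scan and relative target coordinates, and the highest-scoring command sent to the wheel controller; whether the thesis actually uses discrete or continuous commands is not stated in the abstract.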

