
Mobile Robot Environment Mapping and Localization Fused with DNN Object Recognition and Its Application to Path Planning

Vision-based Path Planning and Control of a Mobile Robot Based on DNN Object Recognition and ORB-SLAM2

Advisor: 宋開泰

Abstract


This thesis proposes a visual simultaneous localization and mapping (vSLAM) algorithm based on deep neural network (DNN) object recognition and applies it to robot path planning and control. The design targets robotic vacuum cleaners, which require a small, low-power embedded system as the computing platform. Accordingly, this thesis uses an RGB-D camera to sense the environment and develops a lightweight ORB-SLAM2 system architecture that can run on a CPU-only computing platform. Streaming SIMD Extensions (SSE) are applied to reduce the computation time of feature point extraction, and DNN object recognition is fused into the pipeline to reduce the computation time of feature point matching, so that the system can execute on an embedded platform. A 2D grid map is built with the ORB-SLAM2 algorithm, allowing the robotic vacuum cleaner to plan its motion path from the environment map. A path-tracking controller is designed so that the robot navigates along the planned trajectory. The developed methods are verified experimentally on an iRobot Create robot. Experimental results show that, running on the embedded system, the feature point matching time of the vSLAM algorithm fused with DNN object recognition is 45.16% of that of the conventional ORB-SLAM2. In a square-trajectory localization experiment using the DNN-fused vSLAM, the robot traveled 19.8 m and its localization error remained within 20 mm.
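The abstract does not detail how the DNN detections are fused into feature point matching. As an illustration only, the following C++ sketch (the Detection struct, the helper names, and the class-grouping strategy are assumptions, not the thesis implementation) uses OpenCV's BFMatcher and restricts ORB descriptor matching to keypoints lying inside detected bounding boxes of the same object class, which shrinks the brute-force search and hence the matching time.

    // Illustrative sketch: restrict ORB matching to keypoints inside
    // DNN-detected bounding boxes of the same object class (assumed
    // fusion strategy; not taken from the thesis).
    #include <map>
    #include <vector>
    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>

    struct Detection {
        int classId;   // object class predicted by the DNN detector
        cv::Rect box;  // bounding box in image coordinates
    };

    // Group keypoint indices by the class of the detection containing them.
    static std::map<int, std::vector<int>> GroupByClass(
        const std::vector<cv::KeyPoint>& kps,
        const std::vector<Detection>& dets)
    {
        std::map<int, std::vector<int>> groups;
        for (int i = 0; i < static_cast<int>(kps.size()); ++i)
            for (const auto& d : dets)
                if (d.box.contains(cv::Point(kps[i].pt))) {
                    groups[d.classId].push_back(i);
                    break;
                }
        return groups;
    }

    // Brute-force Hamming matching, but only within the same object class.
    std::vector<cv::DMatch> MatchByObjectClass(
        const std::vector<cv::KeyPoint>& kps1, const cv::Mat& desc1,
        const std::vector<Detection>& dets1,
        const std::vector<cv::KeyPoint>& kps2, const cv::Mat& desc2,
        const std::vector<Detection>& dets2)
    {
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        const auto g1 = GroupByClass(kps1, dets1);
        const auto g2 = GroupByClass(kps2, dets2);

        std::vector<cv::DMatch> result;
        for (const auto& [cls, idx1] : g1) {
            const auto it = g2.find(cls);
            if (it == g2.end()) continue;
            cv::Mat d1, d2;                      // per-class descriptor subsets
            for (int i : idx1) d1.push_back(desc1.row(i));
            for (int j : it->second) d2.push_back(desc2.row(j));
            std::vector<cv::DMatch> m;
            matcher.match(d1, d2, m);
            for (auto& mm : m) {                 // map back to original indices
                mm.queryIdx = idx1[mm.queryIdx];
                mm.trainIdx = it->second[mm.trainIdx];
                result.push_back(mm);
            }
        }
        return result;
    }

Because each class group is much smaller than the full keypoint set, the number of Hamming-distance comparisons drops sharply, which is consistent with the reported reduction in matching time.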

Parallel Abstract


This thesis proposes vision-based path planning and control of a mobile robot based on Deep Neural Network (DNN) object recognition and ORB-SLAM2. For robotic vacuum cleaner applications, an embedded system with small size and low power consumption is required. An RGB-D camera is utilized to realize the ORB-SLAM2 algorithm, with the advantage that it can be executed on a CPU-only embedded computing platform. This thesis proposes an ORB feature extraction algorithm accelerated with SSE (Streaming SIMD Extensions) technology to reduce the computation time. Furthermore, a feature matching method based on DNN object recognition is developed to further reduce the computation time. By creating a 2D grid map from the feature map of the ORB-SLAM2 algorithm, the path planner can plan a zigzag path for the robot vacuum cleaner to complete its task more efficiently. A path-following controller is also developed so that the robot navigates along the desired trajectory. Experiments on an iRobot Create mobile robot validate the proposed vision-based path planning and control method. The DNN-based object recognition algorithm was implemented on a Movidius Neural Compute Stick. The results show that the computation time for feature matching is reduced to 45.16% of that of the original ORB-SLAM2. A positioning experiment along a square trajectory of about 19.8 m shows that the robot localization error is within 20 mm.
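The abstract states that a zigzag path is planned on the 2D grid map but does not describe the planner itself. A minimal boustrophedon-style sweep is sketched below as one plausible reading; the grid layout, the Cell type, and the function name are illustrative assumptions rather than the thesis implementation.

    // Illustrative sketch: boustrophedon (zigzag) sweep over a 2D occupancy
    // grid, alternating sweep direction per row and skipping occupied cells.
    #include <vector>

    struct Cell { int row; int col; };

    // grid[r][c] == true means the cell is occupied by an obstacle.
    std::vector<Cell> ZigzagCoveragePath(const std::vector<std::vector<bool>>& grid)
    {
        std::vector<Cell> path;
        for (int r = 0; r < static_cast<int>(grid.size()); ++r) {
            const int cols = static_cast<int>(grid[r].size());
            if (r % 2 == 0) {
                for (int c = 0; c < cols; ++c)        // sweep left to right
                    if (!grid[r][c]) path.push_back({r, c});
            } else {
                for (int c = cols - 1; c >= 0; --c)   // sweep right to left
                    if (!grid[r][c]) path.push_back({r, c});
            }
        }
        return path;
    }

In practice each cell index would be converted to a metric waypoint using the grid resolution and map origin before being handed to the path-following controller.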

Parallel Keywords

Deep vSLAM, SLAM, Path Planning
