
Autonomous Mobile Industrial Robot with Multi-Sensor Fusion Based Simultaneous Localization and Mapping for Intelligent Service Applications

Advisor: 羅仁權

Abstract


Recent studies and applications show that the Autonomous Mobile Industrial Robot (AMIR) remains a popular robotics platform because of its versatility: with a mobile base, a dexterous arm, and an eye-in-hand RGB-D camera, it can carry out autonomous mobile manipulation, the most fundamental and important task and an indispensable capability across a range of intelligent services. An autonomous mobile robot with only 2D localization, mapping, and navigation is no longer sufficient for spatial obstacle avoidance once a six-degree-of-freedom industrial robot arm is mounted on it; 3D perception must be added to achieve collision-free motion planning. This thesis uses 3D simultaneous localization and mapping (SLAM) to reach that goal.

To this end, we design a 3D SLAM system tailored to autonomous mobile industrial robots. It fuses the robot's sensor data, such as 2D LiDAR, inertial measurement unit (IMU), RGB-D camera, odometric data, and arm kinematic data, to compute an optimized pose trajectory together with 2D and 3D occupancy grid maps. We further integrate this 3D SLAM system with a motion planning system so that the robot can complete autonomous mobile manipulation free of collisions.

In this thesis, the platform carrying the system is an autonomous mobile industrial robot designed and developed by our NTU Intelligent Robotics and Automation Laboratory. Our 3D SLAM system extends 2D Cartographer SLAM. Based on the pose array maintained by Cartographer and the arm kinematic data, the point clouds registered by the depth camera are filtered and transformed back into the local submap coordinate frame, where Iterative Closest Point (ICP) stacks the accumulated point clouds into submaps. To achieve a globally consistent map, we apply proximity detection, computing correspondences between submaps whose centroids are close and inserting them as new constraints into the pose graph, which is then optimized together with Cartographer's original constraints by Ceres-based non-linear optimization. Finally, the submaps are recomposed according to the optimized poses into global 2D and 3D occupancy grid maps.

In the experiments, we first run an ablation comparison on our proposed 3D SLAM system to observe the gain contributed by each step. Second, we compare it against other 3D SLAM methods applicable to autonomous mobile industrial robots on both our own recorded dataset and public SLAM datasets; the results show that our method outperforms the others both quantitatively and qualitatively. Third, we demonstrate an autonomous mobile manipulation solution based on our 3D SLAM system and compare it experimentally with a solution that does not use it, showing how our system runs successfully and enhances the mobile manipulation task with more comprehensive obstacle avoidance and easier setup. Finally, we demonstrate a complete intelligent-service scenario with our autonomous mobile industrial robot, combining our 3D SLAM system with autonomous mobile manipulation and obstacle avoidance to carry out multi-station material delivery in one seamless run.
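The ICP step that stacks accumulated point clouds into a submap can be sketched as follows. This is a minimal illustration of the general technique, not the thesis implementation: it uses brute-force nearest-neighbour matching and the SVD (Kabsch) closed-form rigid alignment, whereas a real system would use a KD-tree and robust outlier rejection.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) aligning src to dst given
    point correspondences, via the SVD (Kabsch) solution. This is the
    core alignment step inside each ICP iteration."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(src, dst, iters=20):
    """Naive ICP: alternate nearest-neighbour matching and rigid update."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (a KD-tree would be used in practice).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

Given a reasonable initial guess (here, the Cartographer pose prediction would play that role), the alignment converges in a few iterations.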

Parallel Abstract


Recent studies and applications have shown that autonomous mobile industrial robots, or mobile manipulators for short, remain a popular robotics platform because of their versatility. With an eye-in-hand RGB-D camera, such a robot can carry out ubiquitous and adaptive mobile manipulation across a range of intelligent services. To perform autonomous mobile manipulation, 3D perception of the environment is necessary for the motion planning algorithm to compute collision-free trajectories, and 3D simultaneous localization and mapping (SLAM) techniques make this possible. We specifically design a 3D SLAM architecture for mobile manipulators. The system fuses data from sensors on the robot, such as 2D laser scans, inertial measurement unit (IMU) data, RGB-D camera data, odometric data, and kinematic data, to compute an optimized pose, a 2D occupancy grid map, and a 3D occupancy grid map built upon an octree. Moreover, we integrate our system with the motion planning system, enabling the robot to avoid obstacles autonomously during mobile manipulation. The robotic platform of our system is an Autonomous Mobile Industrial Robot (AMIR), which combines an autonomous mobile robot (AMR) and an industrial robot (IR), designed and developed by our NTU Intelligent Robotics and Automation Lab. The SLAM architecture is based on 2D Cartographer SLAM and extended with 3D capability. According to the optimized footprint poses maintained by the Cartographer pose graph, we use the kinematic data to transform filtered RGB-D point clouds, building local submaps that consist of several point clouds aligned by the Iterative Closest Point (ICP) technique. To achieve global consistency, proximity detection is applied to insert a new constraint between two nearby 3D submaps into the pose graph. Non-linear optimization is then performed on the pose graph, with constraints from both the 3D submaps and Cartographer, using the Ceres library.

Finally, we compose all the submaps by transforming them into the world coordinate frame according to the newly optimized poses, obtaining global 2D and 3D occupancy grid maps. In the experiments, we first perform an ablation comparison on our architecture, showing how each mechanism improves the result. Second, we compare against state-of-the-art methods on a public SLAM dataset as well as a dataset collected in our own experiments. The results show that our approach generates more accurate and more robust maps than other methods available for a mobile manipulator. Third, we demonstrate an autonomous mobile manipulation solution based on our SLAM system, compared with available alternatives that lack it. The results show that our system works successfully and enhances motion planning with obstacle avoidance in a more comprehensive and convenient way. Last but not least, we successfully demonstrate an intelligent-service scenario of multi-station robotic delivery with our approach, from SLAM to autonomous mobile manipulation.
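The loop-closure optimization described above (new submap constraints added to the pose graph and solved jointly with the odometric constraints) can be illustrated with a deliberately tiny 1-D analogue. The thesis performs this on SE(3) poses with the Ceres library; the sketch below is only a linear least-squares stand-in with NumPy, with made-up constraint values, to show how a loop-closure constraint redistributes accumulated drift.

```python
import numpy as np

# 1-D pose-graph analogue: poses x0..x3 along a corridor.
# Each constraint (i, j, z) says "x_j - x_i should equal z".
# Odometry claims three 1.0 m steps, but a loop closure says
# the robot returned to its starting point (x3 - x0 = 0).
constraints = [
    (0, 1, 1.0),   # odometry
    (1, 2, 1.0),   # odometry
    (2, 3, 1.0),   # odometry (drift accumulates)
    (0, 3, 0.0),   # loop closure
]

n = 4
A = np.zeros((len(constraints) + 1, n))
b = np.zeros(len(constraints) + 1)
for r, (i, j, z) in enumerate(constraints):
    A[r, i], A[r, j], b[r] = -1.0, 1.0, z
A[-1, 0] = 1.0                         # gauge constraint: pin x0 near 0
x, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With equal weights, the optimizer spreads the 3 m of conflicting measurement evenly: the result is x = [0, 0.25, 0.5, 0.75], each constraint absorbing a residual of 0.75. In the real system, constraint weights come from sensor covariances and the residuals are non-linear functions of 6-DoF poses.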
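The 2D and 3D occupancy grid maps mentioned above store, per cell (or per octree voxel, as in octree-based maps), the probability that the cell is occupied, conventionally updated with a log-odds Bayes filter. A minimal single-cell sketch of that standard update rule, with illustrative increment values not taken from the thesis:

```python
import numpy as np

# Standard log-odds occupancy update for one grid cell / voxel.
# The increments below are illustrative; real systems derive them
# from the sensor's inverse measurement model.
L_OCC, L_FREE = 0.85, -0.4     # log-odds added per hit / per miss

def logodds(p):
    """Probability -> log-odds."""
    return np.log(p / (1.0 - p))

def prob(l):
    """Log-odds -> probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))

cell = logodds(0.5)                      # start unknown: log-odds 0
for hit in [True, True, False, True]:    # simulated sensor returns
    cell += L_OCC if hit else L_FREE
p = prob(cell)                           # fused occupancy probability
```

Because the update is a simple addition per observation, it is cheap enough to run over every cell traversed by every range ray, which is what makes large 2D and 3D occupancy maps practical.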
