
融合視覺影像與雷射測距感測資訊強化室內定位系統於智慧服務機器人之應用

Enhanced Indoor Localization System Using Complementary Visual and Laser Range Sensory Fusion for Intelligent Service Robotics

Advisor: Ren C. Luo (羅仁權)

Abstract


For a fully autonomous mobile robot, localization is one of the most important capabilities. If the robot does not correctly know where it is in the environment, serious accidents can occur while it carries out its tasks; if localization is not precise enough while the robot is moving, it may end up navigating to the wrong place. Overall, localization is an essential capability for today's fully autonomous mobile robots, and as the robotics industry grows, a good and stable localization capability will be a key indicator of whether the robot market can expand into home environments.

Indoor localization methods are mainly laser-based or vision-based. In general, laser localization achieves very accurate results thanks to the high resolution of the laser range finder. However, indoor environments tend to be structurally simple and similar, and lack geometric structure that is easy to distinguish: the walls on both sides of a corridor, or offices with similar layouts, produce highly repetitive patterns for a laser that captures only 2D structural information, so it is difficult to obtain an effective localization result by matching against the map. This creates a problem: if the robot loses its pose while moving, it is quite difficult for it to relocalize automatically. In contrast, although visual localization cannot match the accuracy of laser localization, images contain many more feature points, which makes matching easier. For the relocalization problem, vision can exploit these rich features to perform high-dimensional matching; in other words, with visual localization a mobile robot can quickly and effectively recover an approximate estimate of its position in the environment. Nevertheless, traditional visual localization mainly extracts feature points from images, and those features are easily affected by external factors such as lighting changes or motion blur. In this work we therefore adopt a deep learning approach to visual localization in order to obtain more stable localization results.

In this thesis we propose a system that can detect when the robot has lost its pose and automatically relocalize it, using visual localization to complement conventional laser localization. In our experiments we tested the system in an indoor environment of about 100 square meters and collected roughly 1000 images of the environment as training data for the visual localization model. The experimental results show that our system outperforms the conventional laser localization method in both relocalization success rate and the time required.
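The abstract does not specify which deep learning model is used for visual localization. The following is a minimal sketch of a PoseNet-style pose-regression network in PyTorch, under the assumption that a CNN backbone regresses a planar robot pose directly from a single camera image; the backbone choice, layer sizes, and loss weighting are illustrative assumptions, not the author's implementation.

```python
# A minimal sketch of a CNN-based visual pose regressor in the spirit of
# PoseNet-style camera relocalization. Backbone, layer sizes, and loss
# weighting are illustrative assumptions, not the thesis implementation.
import torch
import torch.nn as nn
import torchvision.models as models


class VisualPoseRegressor(nn.Module):
    """Regress a planar robot pose (x, y, cos(theta), sin(theta)) from one image."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights are optional
        backbone.fc = nn.Identity()               # expose the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 4),                    # (x, y, cos(theta), sin(theta))
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) normalized RGB tensor
        return self.head(self.backbone(image))


def pose_loss(pred: torch.Tensor, target: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Weighted sum of translation and orientation errors; beta is a tunable weight."""
    trans_err = nn.functional.mse_loss(pred[:, :2], target[:, :2])
    rot_err = nn.functional.mse_loss(pred[:, 2:], target[:, 2:])
    return trans_err + beta * rot_err
```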

Abstract (English)


Localization is an essential capability for mobile robots operating in indoor environments. If localization is not precise, the mobile robot may suffer an unexpected collision or even get lost. Overall, localization is a key factor in the performance of an autonomous mobile robot, and as the robot industry grows and expands into home environments, a good and stable localization capability becomes an important indicator. In general, there are two main approaches to localization: laser range finders and camera vision. Laser range finder methods offer high precision thanks to accurate range measurements, and in most settings they perform well and can relocalize when the robot loses its pose. In indoor environments, however, there are insufficient geometric landmarks: structures such as corridors and offices are too simple and too similar when observed as 2D laser patterns, which makes relocalization difficult for laser-based algorithms. Camera vision methods, on the other hand, are less accurate but provide rich features that help an autonomous mobile robot relocalize after losing its pose; with camera vision, the robot can relocalize in the indoor environment more easily. The main problem with traditional camera vision methods is that they are sensitive to environmental changes such as lighting. To address this problem, we use a state-of-the-art deep learning model to perform visual localization more robustly. In this research work, we propose a novel relocalization system for autonomous mobile robots that automatically detects when the robot has lost its pose and relocalizes it using complementary vision information. We test our system in an indoor environment of approximately 100 square meters and build a dataset of about 1000 images as training data for the visual localization model. The experimental results demonstrate that our relocalization system achieves a higher relocalization success rate and a shorter convergence time than pure laser localization.
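The system described above detects when the laser-based localizer has lost the robot's pose and uses the vision-based estimate to recover. The sketch below illustrates one plausible detect-and-relocalize loop, assuming a particle-filter (Monte Carlo) laser localizer whose pose covariance signals failure and a visual regressor that returns a coarse pose; the LaserLocalizer/VisualPoseRegressor interfaces, thresholds, and reseeding covariance are hypothetical placeholders, not the thesis implementation.

```python
# A minimal sketch of the detect-and-relocalize loop, assuming a particle-filter
# (Monte Carlo) laser localizer whose pose covariance signals failure and a
# visual pose regressor that supplies a coarse (x, y, yaw) estimate. The
# localizer/regressor interfaces, thresholds, and reseeding covariance are
# hypothetical placeholders, not the thesis implementation.
import numpy as np


def is_pose_lost(covariance: np.ndarray,
                 xy_var_thresh: float = 1.0,
                 yaw_var_thresh: float = 0.5) -> bool:
    """Declare the pose lost when the (x, y, yaw) variances exceed the thresholds."""
    return (covariance[0, 0] > xy_var_thresh or
            covariance[1, 1] > xy_var_thresh or
            covariance[2, 2] > yaw_var_thresh)


def relocalize_step(laser_localizer, visual_regressor, scan, image):
    """One cycle: run laser localization, fall back to vision when the pose is lost."""
    pose, cov = laser_localizer.update(scan)           # hypothetical particle-filter API
    if is_pose_lost(cov):
        coarse_pose = visual_regressor.predict(image)  # coarse (x, y, yaw) from the CNN
        # Reseed the particle filter around the vision estimate so that laser
        # scan matching can converge back to a precise pose.
        laser_localizer.reinitialize(mean=coarse_pose,
                                     cov=np.diag([0.5, 0.5, 0.3]))
        pose, cov = laser_localizer.update(scan)
    return pose
```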

