
Image Feature-Based Omni-directional Visual Odometry Design

Advisor: Ching-Chang Wong (翁慶昌)

Abstract


This thesis proposes the design and implementation of an image-feature-based omni-directional visual odometry system. It consists of four parts: (1) distance model building, (2) environmental feature extraction, (3) feature matching, and (4) visual odometry output. For distance model building, this thesis replaces the traditional method with rational-function interpolation, which reduces the number of sampling points required by the omni-directional vision system while still yielding a pixel-accurate omni-directional distance model. For feature extraction, because the Speeded-Up Robust Features (SURF) algorithm can detect a large number of robust feature points, this thesis uses SURF to obtain the environmental features of each image frame. For feature matching, this thesis proposes a Main Dimensional Priority Search method to replace the traditional k-dimensional (k-d) tree. For the visual odometry output, a motion estimation step computes the robot's motion and relative displacement, producing the odometry output. Experimental results show that the proposed visual odometry outperforms wheeled odometry in overall translation and rotation performance, and that it is insensitive to terrain and friction; the proposed visual odometry can therefore replace wheeled odometry as a source of sensing information for sensor fusion.
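The rational-function interpolation idea above can be sketched as follows. This is a minimal illustration, not the thesis's actual calibration: the sample radii, the ground distances, and the degree-(2,2) rational form are all hypothetical, and the coefficients are fitted by plain linear least squares.

```python
import numpy as np

def fit_rational(r, d, num_deg=2, den_deg=2):
    """Fit d ~ (a0 + a1*r + ...) / (1 + b1*r + ...) by linear least squares.

    Rearranging num(r) - d*(b1*r + ...) = d makes the unknowns linear.
    """
    r = np.asarray(r, float)
    d = np.asarray(d, float)
    A = np.hstack([
        r[:, None] ** np.arange(num_deg + 1),                    # numerator terms
        -d[:, None] * r[:, None] ** np.arange(1, den_deg + 1),   # denominator terms
    ])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)
    a, b = coef[:num_deg + 1], coef[num_deg + 1:]

    def model(x):
        x = np.asarray(x, float)
        num = sum(a[i] * x ** i for i in range(num_deg + 1))
        den = 1.0 + sum(b[j - 1] * x ** j for j in range(1, den_deg + 1))
        return num / den

    return model

# Hypothetical calibration samples: pixel radius -> ground distance,
# generated here from a known rational function so the fit can be checked.
true_model = lambda x: (5.0 * x + 0.02 * x ** 2) / (1.0 + 0.01 * x)
radii = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
dists = true_model(radii)
model = fit_rational(radii, dists)
```

Once fitted, the model can be evaluated at every pixel radius between the sparse calibration points, which is what lets a small number of samples still yield a pixel-accurate distance table.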

Abstract (English)


An image-feature-based omni-directional visual odometry system is designed and implemented in this thesis. There are four parts: (1) distance model building, (2) feature extraction, (3) feature matching, and (4) output of the visual odometry. In distance model building, a rational-function interpolation method is used to build the distance model. Compared with the traditional calibration method, it reduces the number of sampling points needed while still producing a pixel-accurate omni-directional distance model. In feature extraction, the SURF (Speeded-Up Robust Features) algorithm is used to obtain the environmental features of each frame, because it detects a large number of features and these features are robust. In feature matching, a new matching method called the main dimensional priority search is proposed. Compared with the k-dimensional tree search, it removes the tree-building step, so the matching speed increases by 450%. In the output of the visual odometry, a motion estimation method is used to calculate the relative movement of the robot. Experimental results show that the overall performance of the proposed visual odometry is better than that of traditional wheeled odometry.
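The motion-estimation step can be illustrated with a standard least-squares rigid alignment (the Kabsch/Procrustes method) between matched feature positions in consecutive frames. This is a generic sketch under that assumption, not the thesis's exact formulation; the matched point sets below are synthetic.

```python
import numpy as np

def estimate_rigid_2d(prev_pts, curr_pts):
    """Least-squares rotation R and translation t such that curr ~ R @ prev + t."""
    p_mean = prev_pts.mean(axis=0)
    c_mean = curr_pts.mean(axis=0)
    P = prev_pts - p_mean
    C = curr_pts - c_mean
    U, _, Vt = np.linalg.svd(P.T @ C)                        # cross-covariance SVD
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_mean - R @ p_mean
    return R, t

# Synthetic matched features: the "robot" rotated 10 degrees and translated.
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.30, -0.10])
prev_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0], [1.0, 2.0]])
curr_pts = prev_pts @ R_true.T + t_true

R_est, t_est = estimate_rigid_2d(prev_pts, curr_pts)
heading_change = np.arctan2(R_est[1, 0], R_est[0, 0])  # recovered rotation (rad)
```

Accumulating these per-frame (R, t) increments over time yields the odometry trajectory that stands in for the wheel-encoder estimate.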


Cited By


張孜禔 (2012). Two-dimensional free-view-angle stereo image surveillance system [Master's thesis, Tamkang University]. Airiti Library. https://doi.org/10.6846/TKU.2012.00294
