Design and Implementation of a Real-Time RGB-D Visual Pose Estimation Algorithm

Advisor: 蔡奇謚

Abstract


Visual pose estimation is a core technology in vision-based robot localization systems; its goal is to estimate the motion of the camera body in space from the motion of image feature points. However, this technique is not only computationally complex but also prone to degraded accuracy caused by incorrect feature matches. The algorithm proposed in this thesis addresses the technical problems encountered when estimating camera motion from RGB-D visual sensing information, and improves the accuracy and robustness of estimating the camera's three-dimensional rotation and translation. Using the 3D feature matches detected in RGB-D images, the camera pose in space is computed through a nonlinear optimization procedure. To improve computational efficiency, this thesis also rearranges the Jacobian matrix to reduce the complexity of each iteration, thereby increasing the overall processing speed of the system. In addition, an M-estimator is incorporated to suppress the influence of outliers on the pose estimation algorithm and obtain more robust results. For experimental validation, this thesis uses data recorded in our laboratory as well as the RGB-D image datasets provided on the Computer Vision Group website [1], compares three existing M-estimator models, and discusses their effects on the results.

Abstract (English)


Visual pose estimation, which estimates the three-dimensional (3D) motion of a camera from changes in image features between adjacent frames, is an important core technology in vision-based robot localization systems. However, this technique is usually computationally expensive and very sensitive to feature-matching outliers. To address these problems, this thesis presents an RGB-D visual pose estimation algorithm that uses RGB-D visual sensing information to improve the accuracy and robustness of six-degree-of-freedom (6-DoF) motion estimation of the camera system. The proposed algorithm estimates the optimal 6-DoF pose of the camera from the 3D feature matches between two RGB-D frames via a nonlinear optimization process. To improve the computational efficiency of the system, this thesis also derives the Jacobian matrix associated with the cost function to reduce the computational complexity of the optimization, thereby enhancing the overall processing speed. Moreover, the proposed algorithm is combined with M-estimators to improve robustness against the influence of matching outliers. In the experiments, the performance of the proposed algorithm adopting three different types of M-estimators was studied using RGB-D images collected in our laboratory and provided on the Computer Vision Group website [1].
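The pipeline described in the abstract — 3D feature matches between two RGB-D frames, Gauss-Newton iteration with an analytic Jacobian, and M-estimator weighting to suppress outliers — can be sketched as follows. This is a minimal illustration and not the thesis implementation: it assumes 3D-3D point correspondences are already given, uses a small-angle (axis-angle) rotation update, and takes the Huber kernel as one representative M-estimator; all function names are hypothetical.

```python
import numpy as np

def huber_weight(r, k=1.345):
    """Huber M-estimator weight: 1 inside the kernel, k/|r| outside."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_pose(P, Q, iters=20):
    """Estimate R, t minimizing sum_i w_i ||R p_i + t - q_i||^2
    by iteratively reweighted Gauss-Newton (P, Q are N x 3 arrays)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        Pt = (R @ P.T).T + t                    # transformed points
        res = Pt - Q                            # 3D residuals, N x 3
        w = huber_weight(np.linalg.norm(res, axis=1))
        # Jacobian of residual_i w.r.t. the update (omega, dt): [-skew(Pt_i) | I]
        H, b = np.zeros((6, 6)), np.zeros(6)
        for i in range(len(P)):
            J = np.hstack([-skew(Pt[i]), np.eye(3)])
            H += w[i] * J.T @ J
            b += w[i] * J.T @ res[i]
        dx = np.linalg.solve(H, -b)             # Gauss-Newton step
        omega, dt = dx[:3], dx[3:]
        theta = np.linalg.norm(omega)
        if theta > 1e-12:                       # Rodrigues' formula
            K = skew(omega / theta)
            dR = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
        else:
            dR = np.eye(3)
        R, t = dR @ R, dR @ t + dt              # left-multiplicative update
        if np.linalg.norm(dx) < 1e-10:
            break
    return R, t
```

Because the Huber weight is bounded for large residuals, a gross mismatch contributes only a bounded pull on the pose, while the inlier matches (weight 1) dominate the normal equations; this is the mechanism by which the M-estimator limits the influence of matching outliers.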

References


[30] 黃志弘, Design and Implementation of a Stereo Visual Odometry Algorithm, Master's thesis, Department of Electrical Engineering, Tamkang University (Advisor: 蔡奇謚), 2013.
[2] D. Nister, “An efficient solution to the five-point relative pose problem,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 26, No. 6, 2004, pp. 756-770.
[3] A. Howard, “Real-time stereo visual odometry for autonomous ground vehicles,” IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 3946-3952.
[7] S. Manoj Prakhya, L. Bingbing, L. Weisi, U. Qayyum, “Sparse depth odometry: 3D keypoint based pose estimation from dense depth data,” IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4216-4223.
[8] J. Helge Klüssendorff, J. Hartmann, D. Forouher, E. Maehle, “Graph-based visual SLAM and visual odometry using an RGB-D camera,” 9th Workshop on Robot Motion and Control (RoMoCo), 2013, pp. 288-293.
