
Visual Odometry Algorithms Using Ground Image Sequence from Calibrated Camera and Cooperated Un-Calibrated Cameras

Advisor: 連豊力

Abstract


For mobile robots, ego-motion estimation and trajectory reconstruction are two key problems in self-localization within the operating environment. Robot localization can draw on many different sensors and techniques, such as rotary encoders, inertial measurement units, GPS, laser range finders, and visual sensors. Compared with the alternatives, visual sensors provide information-rich observations of the environment at low cost, making them an excellent choice for robot localization. This thesis presents two visual odometry algorithms that use ground image sequences. In the first method, the image sequence comes from a single well-calibrated monocular camera. Because the geometric relationship between the camera and the ground can be recovered from the calibration results, scene points in the image can be back-projected onto the ground to obtain their real-world positions. This calibrated-camera visual odometry algorithm consists of three main steps. The first step uses feature extraction and matching to establish positional correspondences between two images. The extracted features are projected onto the ground in the second step. Finally, the robot motion is estimated by Gaussian kernel density voting. The second visual odometry algorithm uses two un-calibrated cameras mounted on the lateral sides of the robot. Since both the intrinsic and extrinsic camera parameters are assumed unknown, the geometric relationship between image coordinates and world coordinates is difficult to obtain. To overcome this problem, only a small central patch of each frame is used to extract the image motion, which reduces the effect of radial distortion and simplifies the scenario to a wheel-odometry problem. This un-calibrated-camera visual odometry algorithm consists of four steps. The first step extracts multiple motion vectors from the images by block matching. Unreliable motion vectors are then detected and removed based on their spatial and temporal characteristics. Next, the vectors are normalized to the form required by the motion model. The final step computes the robot's inter-frame motion and reconstructs the trajectory from it. Both visual odometry algorithms are verified through computer simulations and experiments in real environments.
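The core computations of the first method can be sketched in a few lines of NumPy. This is a minimal illustration rather than the thesis implementation: it assumes the calibration has been reduced to a single 3x3 image-to-ground homography `H`, and the kernel bandwidth `sigma` is an arbitrary illustrative value.

```python
import numpy as np

def back_project(pts_img, H):
    """Map image pixels to ground-plane coordinates via a homography H (3x3)."""
    pts = np.hstack([pts_img, np.ones((len(pts_img), 1))])  # homogeneous coords
    g = (H @ pts.T).T
    return g[:, :2] / g[:, 2:3]  # de-homogenize

def vote_motion(d_hyp, sigma=0.05):
    """Gaussian kernel density voting: each per-feature displacement hypothesis
    scores the kernel-weighted support from all hypotheses; the one with the
    highest density wins, which rejects outlier matches implicitly."""
    diff = d_hyp[:, None, :] - d_hyp[None, :, :]          # pairwise differences
    w = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma**2))  # Gaussian kernel
    return d_hyp[np.argmax(w.sum(axis=1))]
```

In use, matched feature positions from two consecutive frames are back-projected, their per-feature ground displacements form the hypothesis set, and the voted mode is taken as the frame-to-frame motion.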

English Abstract


For mobile robots, ego-motion estimation and trajectory reconstruction are two key problems in self-localization within the operating environment. Numerous sensors and techniques are used for robot localization, such as wheel encoders, IMUs, GPS, laser range finders, and visual sensors. Compared with the others, visual sensors provide information-rich environmental data at low cost, which makes them a good option for robot localization. This thesis proposes two visual odometry methods using ground image sequences. In the first method, the image sequence is captured by a well-calibrated monocular camera. Because the geometric relationship between the ground and the camera can be reconstructed from the calibration results, image scenes can be back-projected onto the ground to obtain their real-world positions. The proposed calibrated-camera visual odometry method consists of three main steps. In the first step, positional correspondences between two consecutive images are established by feature extraction and matching. The extracted features are then projected onto the ground plane. Finally, the robot motion is estimated with a Gaussian kernel density voting outlier-rejection scheme. In the second method, two un-calibrated cameras mounted on the lateral sides of the robot are used. The intrinsic and extrinsic camera parameters are assumed to be unknown, so the geometric relationship between image coordinates and world coordinates is hard to obtain. To overcome this problem, only a small central part of each frame is used to extract the motion quantities, which reduces the effect of radial distortion and simplifies the problem to an ordinary wheel-odometry one. The proposed un-calibrated-camera method consists of four steps. In the first step, multiple motion vectors are extracted by block matching. Unreliable vectors are then detected and deleted based on the spatial and temporal distribution of the motion vectors.
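The first two steps of the un-calibrated method can be sketched as follows. This is a hedged illustration, not the thesis code: the block size, search radius, and rejection tolerance are assumed values, and the consistency test here is a simple median filter standing in for the spatial/temporal analysis described above.

```python
import numpy as np

def block_match(prev, curr, block=8, search=4):
    """Exhaustive SAD block matching: for each block in `prev`, find the
    displacement (dx, dy) within +/-search pixels minimizing the sum of
    absolute differences against `curr`."""
    H, W = prev.shape
    vectors = []
    for y in range(search, H - block - search + 1, block):
        for x in range(search, W - block - search + 1, block):
            ref = prev[y:y + block, x:x + block].astype(int)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = curr[y + dy:y + dy + block, x + dx:x + dx + block]
                    sad = np.abs(ref - cand.astype(int)).sum()
                    if sad < best:
                        best, best_v = sad, (dx, dy)
            vectors.append(best_v)
    return np.array(vectors)

def reject_outliers(vectors, tol=1.5):
    """Spatial consistency check: discard vectors far from the field median."""
    med = np.median(vectors, axis=0)
    return vectors[np.linalg.norm(vectors - med, axis=1) <= tol]
```

A smaller search radius keeps the exhaustive search cheap, which is consistent with restricting matching to a small central patch of the frame.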
In the next step, the vectors are normalized to the form required by the motion model. Finally, the motion in each frame is calculated and the trajectory is reconstructed. Both methods are tested in simulations and real-environment experiments.
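Once normalized, the two lateral image displacements play the role of left/right wheel displacements, so the trajectory can be integrated with the standard differential-drive dead-reckoning equations. A minimal sketch (the midpoint-heading update is one common discretization; `track_width`, the lateral separation of the two cameras, is an assumed known constant):

```python
import numpy as np

def integrate_track(dl, dr, track_width):
    """Differential-drive dead reckoning: per-frame left/right ground
    displacements -> sequence of robot poses (x, y, theta)."""
    x = y = th = 0.0
    poses = [(x, y, th)]
    for l, r in zip(dl, dr):
        ds = 0.5 * (l + r)            # forward displacement of the center
        dth = (r - l) / track_width   # heading change
        x += ds * np.cos(th + 0.5 * dth)  # midpoint-heading integration
        y += ds * np.sin(th + 0.5 * dth)
        th += dth
        poses.append((x, y, th))
    return np.array(poses)
```

Equal displacements on both sides yield a straight path; opposite displacements yield a pure rotation about the center.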
