

Extrinsic Calibration using LiDAR-Camera System based on RANSAC-P3P Planar Based Method and 3D Scene Reconstruction

Advisor: 胡竹生

Abstract


The problem of extrinsic calibration between a LiDAR and a camera is essentially an odometry problem: estimating the 6-DoF spatial transformation between the two devices. Point clouds obtained from a 3D LiDAR or a camera are a common form of visual perception for autonomous mobile robots, and the point clouds produced by the two devices can be registered against each other. In this work, we first study the absolute pose problem for a calibrated camera, a thoroughly studied problem for which many solutions already exist. We calibrate the extrinsic parameters using three real-world planes visible in the same scene, or one plane observed from three camera poses. Calibration is solved as a registration problem over planar regions visible to both the camera and the LiDAR. Because RANSAC-P3P allows rotation and translation to be estimated separately, we build on it to develop a generic semi-automatic calibration method, and test the method on datasets produced from different LiDAR and camera combinations. We evaluate the algorithm on these combinations, reporting translation error in millimeters and rotation error in degrees, and achieve consistent accuracy regardless of the native resolution of the laser sensor. Finally, using ICP, we obtain high-quality 3D scene reconstructions, represented as colored point clouds; these results demonstrate that, after calibration, the LiDAR and camera data align cleanly. The point cloud registration algorithm was tested on a variety of indoor and outdoor datasets, and the 3D reconstruction algorithm was also applied to the KITTI dataset.

Keywords: camera, Velodyne LiDAR, extrinsic calibration, point cloud registration, 3D reconstruction
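The decoupling of rotation and translation mentioned above can be illustrated with plane correspondences: once each calibration plane is written as n · x = d in both sensor frames, the rotation follows from aligning the plane normals (an SVD/Procrustes step) and the translation from a small linear system. The sketch below illustrates that formulation only; it is not the thesis implementation, and all function and variable names are illustrative.

```python
import numpy as np

def calibrate_from_planes(normals_lidar, d_lidar, normals_cam, d_cam):
    """Recover (R, t) with x_cam = R @ x_lidar + t from >= 3 plane
    correspondences, decoupling rotation from translation.

    Each plane is given as n . x = d with a unit normal n; rows of the
    normal arrays are corresponding planes seen by both sensors.
    """
    N_l = np.asarray(normals_lidar)   # shape (k, 3), LiDAR-frame normals
    N_c = np.asarray(normals_cam)     # shape (k, 3), camera-frame normals
    # Rotation: orthogonal Procrustes (Kabsch/SVD) aligning the normals.
    U, _, Vt = np.linalg.svd(N_l.T @ N_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # Translation: each plane contributes one linear constraint
    # n_cam . t = d_cam - d_lidar, solved in the least-squares sense.
    t, *_ = np.linalg.lstsq(N_c, np.asarray(d_cam) - np.asarray(d_lidar),
                            rcond=None)
    return R, t
```

With three or more non-parallel planes the linear system for t is well conditioned; in a full pipeline the plane parameters themselves would come from robust (e.g. RANSAC) plane fits to the LiDAR points and from the camera pose estimates.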

English Abstract


The problem of extrinsic calibration between a LiDAR and a camera resembles the problem of odometry, i.e. the estimation of the 6-DoF transformation that relates the two devices. In particular, we focus on the registration of 3D LiDAR and camera data, which come from perception sensors commonly used in mobile robotics. We first study the absolute pose problem for a calibrated camera, an intensively studied problem for which many solutions have already been developed. We use real-world planes as the primitives for performing the extrinsic calibration. Since we are estimating a 6-DoF transformation, we need at least three planes seen in the same scene, or one plane seen from three camera poses. The calibration is solved as a 2D-3D registration problem using a minimum of three planar regions visible to both the camera and the LiDAR sensor. Extrinsic calibration between different combinations of Velodyne LiDAR and camera sensors was tested with a generic semi-automatic calibration method that uses RANSAC-P3P planar-based estimation to decouple rotation and translation. We evaluate the calibration algorithm on multiple camera-laser combinations, reporting translation error at the millimeter level and rotation error in degrees, irrespective of the resolution of the laser sensor. Finally, we produce highly accurate 3D scene reconstructions (colored point clouds) based on an ICP approach, and verify that the LiDAR and camera data align well under the estimated calibration parameters. The point cloud registration algorithm was exhaustively tested on real-world indoor and outdoor datasets, and the challenging laser-camera KITTI Vision datasets were used to test 3D reconstruction in different environments.
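The ICP step used for the final registration can be sketched as the classic point-to-point variant: alternate nearest-neighbour matching with a closed-form rigid fit (Kabsch/SVD). This is a generic illustration, not the thesis code; a practical pipeline would use a k-d tree for the neighbour search and some form of outlier rejection.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iters=30):
    """Point-to-point ICP: iterate nearest-neighbour matching + rigid fit."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        # Accumulate the incremental transform.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

For colored-point-cloud reconstruction, the recovered (R, t) is applied to each LiDAR scan before projecting the points into the camera image to sample their colors.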

