This thesis proposes an enhanced multi-view point cloud registration method, which can be applied to building 3D object point cloud models and 3D environment point cloud maps. The contributions of the proposed method consist of two parts: the first part filters out erroneous corresponding points around the borders of the point cloud data to improve the accuracy of point correspondences; the second part proposes an improved cost function to enhance the completeness of point cloud registration. In the first part, because point clouds captured by the Kinect camera contain errors near the image boundaries, and to prevent these erroneous points from degrading the registration result, this thesis proposes a method that filters out correspondences lying on these erroneous points to improve the registration result. In the second part, existing point cloud registration algorithms usually estimate the transformation matrix between two point clouds from the error value computed by a cost function, so the accuracy of the cost function directly affects the completeness of the registration. The cost functions of existing registration algorithms typically compute the error in only one direction, so the resulting error value may be less accurate. To address this problem, this thesis also proposes an enhanced cost function that computes the error values in both directions simultaneously, providing more accurate registration results. Experimental results show that the proposed enhanced point cloud registration algorithm indeed outperforms existing methods in terms of both registration completeness and computational efficiency.
This thesis presents an enhanced 3D RANSAC RGB-D mapping algorithm, which can be used to build 3D object models and 3D environment maps via point cloud registration. The contribution of the proposed method consists of two parts. First, a simple filtering method is proposed to remove inaccurate points from the input point clouds. Second, a novel cost function is proposed to enhance the accuracy of point cloud registration. Point clouds obtained from an RGB-D camera usually contain large position errors, especially around the image boundaries. The proposed filtering method removes the points in these regions so that they are excluded from the registration process. In addition, existing point cloud registration algorithms usually estimate the transformation matrix according to a distance-based cost function, whose accuracy directly affects the performance of the registration process. To improve the registration results, a new distance-based cost function is proposed that simultaneously evaluates the forward and backward transformation errors between two point clouds. Experimental results show that the proposed point cloud registration algorithm provides better registration results with higher computational efficiency than two existing RGB-D mapping algorithms.
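For illustration, a minimal sketch of what such a bidirectional distance-based cost might look like, assuming a rigid transformation $(\mathbf{R},\mathbf{t})$ aligning a source cloud $P$ to a target cloud $Q$, with nearest-neighbor correspondence maps $c(\cdot)$ and $c'(\cdot)$ (these symbols are assumptions for exposition, not necessarily the thesis's notation):

\[
E(\mathbf{R},\mathbf{t}) =
\underbrace{\sum_{i=1}^{|P|} \bigl\| \mathbf{R}\,\mathbf{p}_i + \mathbf{t} - \mathbf{q}_{c(i)} \bigr\|^2}_{\text{forward error}}
\;+\;
\underbrace{\sum_{j=1}^{|Q|} \bigl\| \mathbf{R}^{\top}\!\bigl(\mathbf{q}_j - \mathbf{t}\bigr) - \mathbf{p}_{c'(j)} \bigr\|^2}_{\text{backward error}}
\]

Minimizing a cost of this form penalizes misalignment measured in both transformation directions, whereas a conventional one-directional cost corresponds to using only the first term.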