With the spread of virtual reality and 3D printing technology, the demand for three-dimensional models is growing. This research develops a robot system that navigates autonomously and builds a 3D model of its surroundings; the system consists of a spatial information collector and a robot moving platform. The robot moving platform is responsible for autonomous navigation: the map and pose produced by simultaneous localization and mapping (SLAM) are combined with the A* path planning algorithm to achieve autonomous path planning. The spatial information collector performs scene capture, gathering the 3D geometry and color of the surrounding environment simultaneously. The scene registration algorithm in this research is color supported GICP. Compared with the classic point-to-point ICP algorithm, color supported GICP uses a plane-to-plane formulation, which better handles the fact that 3D point clouds are surface samples and relaxes the assumption of fully overlapping clouds; in addition, the extra color information accelerates convergence. The results show that GICP can reach a convergence error close to that of ICP with fewer corresponding points. Color supported GICP converges fastest when the color weight is around 0.2; compared with GICP without color assistance, it converges in 14 fewer iterations on average, i.e. 60.9% (14/23) faster. Experiments were designed around indoor and outdoor settings and the robot's planned path, covering four outdoor scenes and two indoor scenes. Because indoor and outdoor scenes differ in scale, the best maximum search radius is 0.05 m between outdoor scenes and 0.001 m between indoor scenes. ICP, color supported ICP, GICP, and color supported GICP were each tested on every case. Outdoor scenes with many bushes or fragmented point clouds cause mismatches in GICP and slow its convergence; in indoor scenes, GICP converges noticeably faster than ICP. In all cases, color supported GICP exploits the color information to converge quickly. Although ICP and color supported ICP can converge to smaller errors than GICP, GICP and color supported GICP converge noticeably faster indoors.
With the increasing popularity of 3D printing and the rapid development of virtual reality, there is a large demand for good 3D model reconstruction methods. This research develops a simultaneous localization and mapping (SLAM)-based autonomous navigation and scene reconstruction robot system. The system consists of a robot moving platform and a spatial information collector. The robot moving platform performs SLAM-based autonomous navigation: using the pose and map produced by SLAM as input, the A* path planning algorithm achieves autonomous navigation. The spatial information collector captures the 3D spatial information and the color information simultaneously. This research adopts the color supported generalized iterative closest point (color supported GICP) method as the scene registration algorithm for 3D model reconstruction. Compared with the classic point-to-point iterative closest point (ICP) algorithm, color supported GICP is a plane-to-plane approach designed to address the fact that point clouds are surface samples, so exact point-to-point correspondences rarely exist, and to relax the assumption of a fully overlapped region. With the extra color information, color supported GICP converges faster than variants that do not integrate color. The experimental results show that GICP can converge to nearly the same error as ICP with fewer corresponding points. Color supported GICP reaches its highest convergence speed when the weight of the color information is 0.2; on average it converges in 14 fewer iterations than plain GICP, i.e. 60.9% (14/23) faster. Two indoor scenes and four outdoor scenes are tested in this research. Because indoor and outdoor scenes differ in scale, the best maximum search radius is 0.05 m for the outdoor cases and 0.001 m for the indoor cases. This research tests the performance of the four algorithms (ICP, color supported ICP, GICP, and color supported GICP) in these scenes.
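The color-supported correspondence search described above can be illustrated with a minimal sketch: each source point is matched to the target point minimizing a blend of geometric and color distance, with the color weight of 0.2 and the 0.05 m outdoor search radius taken from the abstract. The 6-D point layout, function name, and brute-force search are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def color_weighted_correspondences(src, tgt, color_weight=0.2, max_radius=0.05):
    """Match each source point to the target point minimizing a joint
    geometric + color distance. Points are (N, 6) arrays: x, y, z, r, g, b,
    with colors normalized to [0, 1]. Brute-force illustrative sketch."""
    pairs = []
    for i, p in enumerate(src):
        # Squared geometric and color distances to every target point.
        d_geo = np.sum((tgt[:, :3] - p[:3]) ** 2, axis=1)
        d_col = np.sum((tgt[:, 3:] - p[3:]) ** 2, axis=1)
        # Blend the two distances; 0.2 was the best-performing weight
        # reported in this study.
        d = (1.0 - color_weight) * d_geo + color_weight * d_col
        j = int(np.argmin(d))
        # Reject pairs outside the maximum search radius (0.05 m outdoors,
        # 0.001 m indoors, per the abstract).
        if d_geo[j] <= max_radius ** 2:
            pairs.append((i, j))
    return pairs
```

With the color term active, a source point prefers a slightly farther target point of matching color over a nearer point of a very different color, which is what drives the faster convergence reported above.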
The results show that bushes and fragmented point clouds slow down the convergence of GICP in the outdoor cases, whereas in the indoor cases GICP converges faster than ICP. In every case, color supported GICP converges fastest among the four algorithms. Although ICP and color supported ICP can converge to a lower error than GICP and color supported GICP, the latter two converge faster in the indoor cases.
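For reference, the point-to-point ICP baseline that the comparisons above are made against can be sketched as follows: each iteration matches every source point to its nearest target point, then solves the best rigid transform in closed form (Kabsch/SVD). This is a minimal sketch of the classic algorithm, not the GICP or color supported variants evaluated in the thesis.

```python
import numpy as np

def icp_step(src, tgt):
    """One point-to-point ICP iteration on (N, 3) point clouds."""
    # Brute-force nearest-neighbour correspondences.
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    matched = tgt[np.argmin(d, axis=1)]
    # Closed-form rigid alignment of src onto its matched points (Kabsch).
    mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return src @ R.T + t      # transformed source cloud

def icp(src, tgt, iters=20):
    """Iterate until the transform stabilizes (fixed iteration count here)."""
    for _ in range(iters):
        src = icp_step(src, tgt)
    return src
```

The plane-to-plane GICP variants replace the point-to-point error above with a distribution-to-distribution error built from local surface covariances, which is why they tolerate surface-sampled, partially overlapping clouds better.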