3D reconstruction is widely applied in fields such as medical imaging, robot vision, and archaeology. The growing availability of depth cameras is a major advance for this topic, since depth information improves reconstruction accuracy. This thesis uses a Kinect camera as the capture device, as shown in Figure 1; its advantage is that it captures per-pixel depth and also provides a color camera, so the reconstructed model can be textured. During reconstruction, the object is placed on a turntable that rotates automatically, clockwise or counterclockwise, and a 3D model with the object's real color texture is built; the hardware setup is shown in Figure 2. Depth and color information are captured from the Kinect, and the Otsu algorithm is used to segment the foreground point cloud, i.e., the part belonging to the object. To reduce the amount of data, the point cloud is then quantized, and Delaunay triangulation builds triangular faces that form 2.5D information. These triangles are used for real-time ICP (Iterative Closest Point) registration; while registering, the real object's texture is mapped onto the model surface, so users can watch the reconstruction process and the result in real time. The program interface is shown in Figure 3. The experiments use plush toys, and the proposed method is compared with Kinect RGBDemo v0.6.1 (http://nicolas.burrus.name/index.php/Research/KinectRgbDemoV6). The focus of this thesis is on reconstructing a textured model in real time.
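The segmentation and meshing steps above can be illustrated with a minimal sketch. It assumes a Python environment with OpenCV and SciPy rather than the thesis's actual implementation, and the function name `segment_and_mesh`, the 8-bit depth normalization, and the fixed subsampling step are illustrative choices, not the method described in the thesis.

```python
import numpy as np
import cv2
from scipy.spatial import Delaunay

def segment_and_mesh(depth_mm):
    """Sketch: Otsu thresholding on a depth frame to isolate the foreground
    object, then image-plane Delaunay triangulation to form a 2.5D mesh.
    `depth_mm` is an HxW uint16 depth map in millimeters."""
    valid = depth_mm > 0                          # zero depth = no reading
    # Scale depth to 8 bits so Otsu's method can pick a threshold that
    # separates the near object from the far background.
    depth8 = cv2.normalize(depth_mm, None, 0, 255,
                           cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(depth8, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    fg = (mask > 0) & valid                       # foreground object pixels

    # Quantization step: keep only a subset of foreground pixels to cut
    # the amount of data before triangulation.
    ys, xs = np.nonzero(fg)
    step = 4                                      # keep every 4th pixel
    ys, xs = ys[::step], xs[::step]

    # Delaunay triangulation in the image plane yields a 2.5D mesh:
    # each vertex is (x, y, depth), each face indexes three vertices.
    tri = Delaunay(np.column_stack([xs, ys]))
    vertices = np.column_stack([xs, ys, depth_mm[ys, xs]])
    return vertices, tri.simplices
```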
3D modeling of real objects is a growing research topic, with applications in medical imaging, robot vision, and archaeology. We present a real-time reconstruction system that builds textured 3D models using only a single Kinect. The Kinect is an RGBD sensor that captures RGB images along with per-pixel depth information; we take advantage of both the depth values and the RGB data to increase accuracy and produce textures. First, the Kinect captures the RGB and range data of an object that is rotated automatically on a turntable. After obtaining multi-view point clouds of the object, we segment the foreground. To reduce the number of accumulated points, we discard part of the original data and thus decrease processing time. We then triangulate the point cloud to build a 2.5D mesh for rendering the final model. Next, we register adjacent scans with the ICP algorithm and transform all of them into a common coordinate frame. While drawing the model, we also map the texture onto it, so the final result is a textured model. Users can see a continuously updated model while the Kinect scans the object. We conduct two experiments with plush toys and analyze the time and accuracy of the system. The results show that our system generates good-quality colored models efficiently.
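For the registration step, the sketch below shows the core of a point-to-point ICP iteration. It is an assumption-laden illustration: the thesis registers 2.5D triangle meshes in real time, whereas this Python/NumPy/SciPy version aligns raw point sets with nearest-neighbor matching and a Kabsch/SVD rigid-transform solve, and the function name `icp_point_to_point` is hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, iterations=20):
    """Align source scan `src` (Nx3) to destination scan `dst` (Mx3);
    return a 4x4 rigid transform mapping src into dst's coordinates."""
    T = np.eye(4)
    tree = cKDTree(dst)                           # nearest-neighbor queries
    cur = src.copy()
    for _ in range(iterations):
        # 1. Match every source point to its closest destination point.
        _, idx = tree.query(cur)
        matched = dst[idx]

        # 2. Solve for the best rigid transform (Kabsch / SVD).
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                  # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s

        # 3. Apply the increment and accumulate the total transform.
        cur = cur @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```

In the described system, each newly captured scan would be aligned to the previously accumulated model in this way, so that all views end up in the same coordinate frame before texture mapping.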