為促成影像為基底的三維建模之互動式指導系統

Interactive Guidance and Navigation for Facilitating Image-Based 3D Modeling

Advisor: 劉興民

Abstract


We present an interactive guidance system that assists users in acquiring the photographs required for image-based 3D modeling. To reconstruct a 3D model of an object, the user follows our instructions and photographs the object from different angles; a structure-from-motion algorithm then computes the relative poses of the photographed viewpoints together with a sparse point cloud of the object. Once a sufficient number of photographs has been collected, we run the Patch-based Multi-View Stereo (PMVS) [1] software to generate a dense point cloud. This step also produces background points as well as noise points caused by re-projection errors, so while the dense point cloud is displayed we provide a user interface for filtering out these unwanted points; the filtered dense point cloud is finally used to generate a surface mesh. When the relative poses of the viewpoints cannot be computed from the input photographs, our system identifies where the problem lies and guides the user in correcting it. Moreover, using the computed viewpoint poses and the point cloud, we assess and display which viewing positions have too few photographs and guide the user to capture the missing views.
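The last two steps above (filtering the dense point cloud and generating a surface mesh) can be sketched with off-the-shelf tools. The snippet below is only a rough, non-interactive stand-in using the Open3D Python library, which the thesis does not use; the file name dense.ply, the outlier-removal parameters, and the Poisson depth are illustrative assumptions, whereas in the system described above the unwanted points are removed interactively by the user.

import open3d as o3d

# Load the dense point cloud written by PMVS (the file name is hypothetical).
pcd = o3d.io.read_point_cloud("dense.ply")

# Drop isolated points such as those caused by re-projection errors.
# In the thesis this clean-up is interactive; statistical outlier removal
# is only a non-interactive stand-in for it.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Poisson surface reconstruction needs oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)

# Generate the surface mesh from the filtered dense point cloud and save it.
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("model.ply", mesh)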

Parallel Abstract (English)


We present an interactive guidance and navigation system that assists users in acquiring pictures for image-based 3D modeling. To reconstruct a 3D model of an object, the user follows our instructions and takes a set of images of the object from different angles; we then compute the relative viewing positions and a sparse point cloud using a structure-from-motion technique. After a sufficient number of images has been obtained, we use the Patch-based Multi-View Stereo (PMVS) [1] software to generate a dense point cloud. When displaying the dense point cloud, we provide an interface that lets the user eliminate noise points arising from the reconstructed background or from re-projection errors. We then reconstruct a surface mesh as the output. When camera pose estimation fails, our system provides informative messages about the failure and helps the user resolve the problem. Furthermore, we assess the quality of the reconstructed camera poses and the generated point cloud to reveal viewing angles that lack captured images and guide the user to supply the missing views.
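The guidance step in the last sentence can be illustrated with a simple coverage heuristic: bin the structure-from-motion camera positions by azimuth around the point-cloud centroid and report sectors seen by too few cameras. The binning rule, the function name, and all thresholds below are assumptions made for illustration and are not the quality measure used by the system.

import numpy as np

def undercovered_sectors(camera_centers, points, n_bins=12, min_views=2):
    """Return azimuth sectors (degree ranges) covered by too few cameras.

    camera_centers: (N, 3) camera positions recovered by structure from motion.
    points:         (M, 3) sparse or dense point cloud of the object.
    The azimuth-binning rule is purely illustrative.
    """
    centroid = points.mean(axis=0)
    # Direction from the object centre to each camera, measured in the x-y plane.
    d = camera_centers - centroid
    azimuth = np.degrees(np.arctan2(d[:, 1], d[:, 0])) % 360.0
    counts, edges = np.histogram(azimuth, bins=n_bins, range=(0.0, 360.0))
    # Sectors seen by fewer than min_views cameras are gaps the user should fill.
    return [(edges[i], edges[i + 1]) for i in range(n_bins) if counts[i] < min_views]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cams = rng.normal(size=(15, 3)) * np.array([2.0, 2.0, 0.3]) + np.array([0.0, 0.0, 1.0])
    cloud = rng.normal(size=(1000, 3)) * 0.2
    print(undercovered_sectors(cams, cloud))  # sectors where more photos are needed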

References


[1] Y. Furukawa and J. Ponce, “Accurate, Dense, and Robust Multiview Stereopsis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 8, pp. 1362-1376, Aug. 2010.
[4] A. Baumberg, A. Lyons, and R. Taylor, “3D S.O.M.—A Commercial Software Solution to 3D Scanning,” Graphical Models, vol. 67, no. 6, pp. 476-495, Nov. 2005.
[5] S. Hua and T. Liu, “Realistic 3D Reconstruction from Two Uncalibrated Views,” International Journal of Computer Science and Network Security, vol. 7, no. 6, pp. 178-183, June 2007.
[7] N. Snavely, S. M. Seitz, and R. Szeliski, “Modeling the World from Internet Photo Collections,” International Journal of Computer Vision, vol. 80, no. 2, pp. 189-210, Nov. 2007.
[8] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, Nov. 2004.
