  • Thesis

以三維背景特徵模型為基礎之增添式實境

Three Dimensional Background Feature Model for Augmented Reality

Advisor: 洪一平
Co-advisor: 陳祝嵩 (Chu-Song Chen)

Abstract


In this thesis, we propose a three-dimensional background feature model (3DBFM) that records the 3D positions and the appearance of salient feature points in a scene. By establishing correspondences between the feature points that appear in an image and the 3D background model, we use an ICP-based camera parameter estimation algorithm to compute the corresponding extrinsic camera parameters, and apply these parameters to augmented reality so that virtual objects are rendered correctly in the image. In addition, as video capture continues, the 3D background model keeps updating its stored data, including the 3D positions and the appearance of the feature points. Feature points in the image that have no correspondence in the 3D background model are observed over a period of time; their 3D positions are then computed and added to the model, so the coverage of the 3D background model gradually grows. An advantage of our method is that it keeps running, undisturbed, under sudden illumination changes and partial occlusion. Even when all feature points in the environment are occluded, the system returns to its normal state as soon as any feature point recorded in the 3D background model is observed again. These properties make our method well suited to augmented reality systems.
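The model described above can be thought of as a store of (3D position, appearance descriptor) entries against which incoming image features are matched. The sketch below is a rough illustration only, not the thesis implementation: a plain vector stands in for the CCH descriptor, and the class and parameter names are hypothetical.

```python
import numpy as np

class BackgroundFeatureModel:
    """Illustrative stand-in for the thesis's 3DBFM: each entry stores a
    3D position and an appearance descriptor.  A plain vector replaces
    the CCH descriptor used in the thesis; all names are hypothetical."""

    def __init__(self):
        self.positions = []    # 3D points of recorded scene features
        self.descriptors = []  # one appearance vector per point

    def add(self, position, descriptor):
        self.positions.append(np.asarray(position, dtype=float))
        self.descriptors.append(np.asarray(descriptor, dtype=float))

    def match(self, query_descriptors, ratio=0.8):
        """Nearest-neighbour matching with a ratio test (assumes the
        model holds at least two entries).  Returns (query_index,
        model_index) pairs, i.e. 2D-3D correspondences usable for
        camera pose estimation."""
        stored = np.stack(self.descriptors)
        pairs = []
        for qi, q in enumerate(query_descriptors):
            dist = np.linalg.norm(stored - q, axis=1)
            order = np.argsort(dist)
            # Accept only if the best match is clearly better than the
            # second best -- ambiguous features are left unmatched.
            if dist[order[0]] < ratio * dist[order[1]]:
                pairs.append((qi, int(order[0])))
        return pairs
```

Features that fail the ratio test are exactly the ones the abstract says are observed over time before being added to the model.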

English Abstract


In this thesis, we present a descriptor-based approach to augmented reality using a 3D background feature model (3DBFM). The 3DBFM contains the 3D positions of scene points and their image appearance distributions. To describe image appearance, we use a new descriptor, the contrast context histogram (CCH), which has been shown to achieve high matching accuracy at low computational cost. By matching image features against the features in the 3DBFM, we obtain 3D-2D correspondences. We then adopt an iterative closest point (ICP) based algorithm to estimate the camera pose. Given the camera pose, new scene points that are not yet in the 3DBFM can be learned. The experiments show that our approach can match features under significant changes in illumination and scale. Even when long-term occlusion occurs, the system resumes working as soon as features are matched again, without any additional penalty.
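Once matching against the 3DBFM has produced 3D-2D correspondences, a camera pose can be recovered from them. The thesis's ICP-based estimator is not reproduced here; as a stand-in, the sketch below recovers a 3x4 projection matrix with the standard Direct Linear Transform (DLT), assuming at least six noise-free, non-degenerate correspondences. Function names are our own.

```python
import numpy as np

def estimate_projection_dlt(pts3d, pts2d):
    """Direct Linear Transform: recover a 3x4 projection matrix (up to
    scale) from at least six non-degenerate 3D-2D correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution is the right singular vector belonging to the
    # smallest singular value of the 2n x 12 system.
    _, _, vt = np.linalg.svd(np.asarray(A))
    return vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Apply a 3x4 projection matrix and dehomogenise to pixel coords."""
    homogeneous = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = homogeneous @ P.T
    return x[:, :2] / x[:, 2:3]
```

Decomposing the recovered matrix with known intrinsics yields the extrinsic rotation and translation that the abstract uses to render virtual objects into the image.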

