
Free-viewpoint Synthesis over Panoramic Images

Advisor: Ming Ouhyoung (歐陽明)

Abstract


This thesis proposes a method that generates arbitrary, user-chosen virtual viewpoints from only a small number of panoramic images, and the results can be viewed through a virtual-reality device for an immersive experience. Recording a scene conventionally requires capturing a large number of images or a video, which is costly in shooting time, device storage, and so on. We therefore propose a system that can produce, from only a few images, enough user-oriented information to describe a scene, with acceptable results. The proposed pipeline consists of four steps: structure from motion, image rectification and depth estimation, 3D reconstruction, and view synthesis. Structure from motion first recovers the rotations and translations between the captured images; the second step rectifies image pairs, computes their disparities, and converts these into scene depth. The third step reconstructs 3D points from the computed depth and connects the points into triangle meshes. Finally, the reconstructed 3D information is used to render the image of the virtual viewpoint.
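As a minimal sketch of the second and third steps, the snippet below converts a disparity map from a rectified image pair into depth and back-projects every pixel into a 3D point. All names and the pinhole intrinsics (focal_px, baseline_m, fx, fy, cx, cy) are illustrative assumptions of ours, not the thesis implementation; the thesis itself operates on panoramic (equirectangular) images, whose projection model differs from the pinhole model used here.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    # For a rectified pair, depth follows Z = f * B / d, with the focal
    # length f in pixels, the baseline B in metres, and the disparity d
    # in pixels; eps guards against division by zero.
    return focal_px * baseline_m / np.maximum(disparity, eps)

def backproject(depth, fx, fy, cx, cy):
    # Lift every pixel (u, v) with depth Z to a 3D point:
    # X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (h, w, 3) grid of points
```

Neighbouring entries of the resulting (h, w, 3) grid can then be joined, two triangles per pixel quad, into the triangle meshes the abstract describes.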

Parallel Abstract (English)


This thesis presents a method for free-viewpoint synthesis from a sparse set of panoramic images. Traditionally, constructing a playback data set for navigating through a scene has required a particularly inefficient procedure: capturing the many pictures or videos needed under a pinhole camera model is costly in both acquisition time and memory. We propose a method that exploits a less costly capture setup while improving the visual quality of the final images. The method allows users to choose the desired viewpoint, as well as whether the output should be rendered as a panoramic or a perspective image. The procedure consists of four steps: structure from motion (SfM), image rectification and depth estimation, 3D reconstruction, and view synthesis. First, the extrinsic parameters of the cameras are recovered with a structure-from-motion algorithm. Next, image pairs are rectified so that their disparities can be computed and converted into depth maps for 3D reconstruction. Finally, the reconstructed 3D triangle meshes are transformed into the coordinate frame of the target virtual camera, and the target image is generated by intersecting viewing rays with the meshes.
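As described above, the last step casts one ray per output pixel and intersects it with the reconstructed meshes. Below is a sketch of that core operation, under the assumption that the standard Möller–Trumbore ray/triangle test is used; the abstract does not name the intersection algorithm, and all identifiers here are ours.

```python
import numpy as np

def to_camera(vertices_world, R, t):
    # World -> target-camera coordinates: X_cam = R @ X_world + t.
    return vertices_world @ R.T + t

def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore test: returns the hit distance along ray
    # direction d from origin orig, or None if there is no hit.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                     # ray parallel to the triangle
    inv = 1.0 / det
    s = orig - v0
    u = (s @ p) * inv
    if u < 0.0 or u > 1.0:
        return None                     # outside first barycentric bound
    q = np.cross(s, e1)
    v = (d @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None                     # outside second barycentric bound
    t_hit = (e2 @ q) * inv
    return t_hit if t_hit > eps else None

# A triangle 5 units in front of a camera at the origin looking down +Z:
v0, v1, v2 = (np.array([-1., -1., 5.]), np.array([1., -1., 5.]),
              np.array([0., 1., 5.]))
print(ray_triangle_intersect(np.zeros(3), np.array([0., 0., 1.]),
                             v0, v1, v2))  # -> 5.0
```

Taking, for each pixel, the nearest hit over all mesh triangles yields the target image; a practical implementation would use an acceleration structure such as a BVH rather than this brute-force test.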
