
Generating 360° Image and Light Field with a Dual-Fisheye Camera

Advisor: 陳宏銘

Abstract


Virtual reality (VR) and augmented reality (AR) are among the major trends in future technology. These two technologies bridge the virtual and real worlds, allowing users to explore virtual spaces in the most natural way. To give users a truly immersive experience, we need an imaging technique that records the full field of view in fine detail, known as the 360° light field. This dissertation investigates a novel and efficient way to generate a 360° light field with full panoramic coverage and high angular resolution, completely recording every light ray in the real world. Starting from 360° panoramic photography, we study how to overcome fisheye distortion, stitch dual-fisheye images in the most efficient way, acquire a 360° light field with a single compact dual-fisheye camera, and generate a high-quality 360° light field efficiently through a deep network architecture.

This dissertation consists of three parts. The first part focuses on the calibration of a back-to-back dual-fisheye camera. We develop a geometric calibration model based on concentric trajectories and, to address chromatic discrepancy, propose a photometric correction model for intensity and color compensation that provides efficient and accurate local color transfer. The second part describes methods for 360° image and video stitching. Specifically, we develop a mesh deformation model, along with an adaptive seam for video stitching, to reduce geometric distortion and ensure optimal stitching. The third part describes how to generate a 360° light field with a dual-fisheye camera. We feed a sequence of raw 360° images into a deep network that generates the 360° light field. The deep network has three components: a convolutional network that enforces a spatiotemporal consistency constraint on the subviews of the generated 360° light field, an equirectangular matching cost function that increases the accuracy of disparity estimation, and a 360° light field resampling network. This technique enables a single compact dual-fisheye camera to produce a high-quality 360° light field.
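To make the fisheye-to-panorama step concrete, the following is a minimal sketch of warping one fisheye hemisphere onto an equirectangular grid. It assumes the simple equidistant fisheye model (image radius proportional to the angle from the optical axis); the dissertation's concentric-trajectory calibration model is more elaborate, so this is an illustration of the projection geometry only, and the `fov_deg` parameter and nearest-neighbor sampling are simplifying assumptions.

```python
import numpy as np

def fisheye_to_equirect(fisheye, fov_deg=190.0):
    """Map one fisheye hemisphere (equidistant model, a common
    simplification) onto the front half of an equirectangular panorama."""
    h_out = fisheye.shape[0]
    # Longitude/latitude grid for the front hemisphere only.
    lon = np.linspace(-np.pi / 2, np.pi / 2, h_out)
    lat = np.linspace(-np.pi / 2, np.pi / 2, h_out)
    lon, lat = np.meshgrid(lon, lat)
    # Unit viewing ray for each output pixel (z is the optical axis).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle from optical axis
    phi = np.arctan2(y, x)                    # azimuth in the image plane
    # Equidistant projection: radial distance proportional to theta.
    r_max = fisheye.shape[0] / 2.0
    r = theta / np.deg2rad(fov_deg / 2.0) * r_max
    u = (fisheye.shape[1] / 2.0 + r * np.cos(phi)).astype(int)
    v = (fisheye.shape[0] / 2.0 + r * np.sin(phi)).astype(int)
    u = np.clip(u, 0, fisheye.shape[1] - 1)   # nearest-neighbor lookup
    v = np.clip(v, 0, fisheye.shape[0] - 1)
    return fisheye[v, u]
```

A full pipeline would warp both hemispheres (the back lens rotated by 180°) and then blend them along the seam, which is where the stitching methods of the second part come in.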

English Abstract


The second part of the dissertation describes our solution for image and video stitching for dual-fisheye cameras. Specifically, we develop a photometric correction model for intensity and color compensation to provide efficient and accurate local color transfer, and a mesh deformation model along with an adaptive seam carving method for image stitching to reduce geometric distortion and ensure optimal spatiotemporal alignment. The stitching algorithm and the compensation algorithm can run efficiently for 1920×960 images. The third part of the dissertation describes an efficient pipeline for light field acquisition using a dual-fisheye camera. The proposed pipeline generates a light field from a sequence of 360° images captured by the dual-fisheye camera. It has three main components: a convolutional neural network (CNN) that enforces a spatiotemporal consistency constraint on the subviews of the 360° light field, an equirectangular matching cost that aims at increasing the accuracy of disparity estimation, and a light field resampling subnet that produces the 360° light field based on the disparity information. We demonstrate the effectiveness, robustness, and quality of the proposed pipeline using real data obtained from a commercially available dual-fisheye camera.
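The intensity and color compensation step can be illustrated with a simple global mean/variance color transfer between the two lens views. This is a simplified stand-in for the dissertation's local color transfer model: in practice the statistics would be computed only in the overlap region of the two fisheye views rather than over whole images, and applied locally.

```python
import numpy as np

def match_color_statistics(src, ref):
    """Per-channel affine color transfer: shift src so its channel
    mean and standard deviation match those of ref. A global,
    simplified version of intensity/color compensation."""
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sigma = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sigma * r_sigma + r_mu
    return out
```

After this normalization, the residual seam between the two views is much less visible, which makes the subsequent mesh-deformation and seam-selection stages more reliable.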

