
Predicting the Depth Distribution of Light Field Images Using Deep Learning

Estimate Disparity of Light Field Images by Deep Neural Network

Advisor: 歐陽明

Abstract


In this thesis, we use deep learning to estimate the depth of light field images. A light field camera simultaneously captures the properties of the light in a scene, both spatial and angular, and from this information the scene depth can be estimated. However, the narrow baseline between sub-aperture images, imposed by the structure of the light field camera, makes depth estimation difficult. Many current methods try to overcome this hardware limitation, but they still have to balance running speed against estimation accuracy. This thesis therefore takes the structural regularity and the redundancy of light field images into account and builds these properties into the design of our deep neural network. We then propose attention-based sub-aperture view selection so that the network learns by itself which views contribute more to depth estimation. Finally, we compare our results against other state-of-the-art methods on a benchmark to demonstrate our improvement on this problem.
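
As a concrete illustration of why the captured spatial and angular information determines depth (the classical, non-learning view of the problem): under a Lambertian assumption, the sub-aperture view at angular offset (u, v) sees a point with disparity d shifted by roughly (u·d, v·d) pixels relative to the centre view, so sweeping candidate disparities and comparing warped views yields a per-pixel matching cost. The sketch below shows only this baseline idea; the function name and interface are illustrative and are not the network proposed in this thesis.

```python
# Minimal plane-sweep sketch (illustrative only, not the proposed network):
# warp each sub-aperture view toward the centre view for every candidate
# disparity, accumulate the photometric error, and take the winner per pixel.
import numpy as np
from scipy.ndimage import shift as nd_shift

def plane_sweep_disparity(views, coords, d_candidates):
    """views: list of (H, W) grayscale sub-aperture images.
    coords: list of (u, v) angular offsets of each view from the centre view.
    d_candidates: candidate disparities in pixels per unit of angular baseline."""
    d_candidates = np.asarray(d_candidates, dtype=float)
    centre = views[coords.index((0, 0))]
    H, W = centre.shape
    cost = np.zeros((len(d_candidates), H, W))
    for k, d in enumerate(d_candidates):
        for img, (u, v) in zip(views, coords):
            if (u, v) == (0, 0):
                continue
            # Shift view (u, v) by d times its angular offset to align it with
            # the centre view (the sign depends on the parameterisation).
            warped = nd_shift(img, (v * d, u * d), order=1, mode="nearest")
            cost[k] += np.abs(warped - centre)
    # Winner-takes-all over the candidate disparities.
    return d_candidates[np.argmin(cost, axis=0)]
```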

Keywords

Light Field, Deep Learning, Depth Estimation

Parallel Abstract


In this thesis, we introduce a light field depth estimation method based on a convolutional neural network. A light field camera can capture the spatial and angular properties of the light in a scene, and from these properties depth information can be computed from light field images. However, the narrow baseline of light field cameras makes depth estimation from light fields difficult. Many approaches try to overcome this hardware limitation, but they still trade off speed against accuracy. We consider the repetitive structure of the light field and the redundancy among sub-aperture views in light field images. First, to exploit the repetitive structure of the light field, we integrate this property into our network design. Second, by applying attention-based sub-aperture view selection, we let the network learn by itself which views are more useful. Finally, we compare our experimental results with other state-of-the-art methods to show our improvement in light field depth estimation.
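
To make the attention-based sub-aperture view selection more concrete, the PyTorch sketch below shows one plausible form such a block could take: each view's feature map is scored by a small network, the scores are normalised with a softmax across views, and the fused feature is their weighted sum. The module name, layer sizes, and wiring are assumptions for illustration, not the architecture of this thesis.

```python
# Illustrative attention-over-views block (assumed design, not the thesis model):
# score each sub-aperture view's features, softmax over views, fuse by weighting.
import torch
import torch.nn as nn

class ViewAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Maps one view's globally pooled feature vector to a scalar score.
        self.score = nn.Sequential(
            nn.Linear(channels, channels // 2),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 2, 1),
        )

    def forward(self, feats):
        # feats: (B, V, C, H, W) features of V sub-aperture views.
        pooled = feats.mean(dim=(3, 4))                  # (B, V, C)
        logits = self.score(pooled).squeeze(-1)          # (B, V)
        weights = torch.softmax(logits, dim=1)           # attention over views
        # Down-weighted views contribute little to the fused representation.
        fused = (feats * weights[:, :, None, None, None]).sum(dim=1)  # (B, C, H, W)
        return fused, weights
```

In a design like this, the learned weights can be inspected after training to see which sub-aperture views the network actually relies on for depth estimation.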

Parallel Keywords

Light Field, Deep Neural Network, Disparity, Depth

