
Visual Saliency Prediction and Viewing Bias in 360° Videos

Viewing Bias Matters in 360° Videos Visual Saliency Prediction

Advisor: 吳沛遠

Abstract


360° videos have been widely used in many areas such as immersive content, virtual tours, and surveillance systems. Compared with planar videos, 360° videos contain far more information, which makes predicting salient regions in such information-rich imagery considerably harder. In this work, we propose a visual saliency prediction model that directly predicts salient regions in videos under the equirectangular projection. Unlike previous methods, which adopted recurrent neural network architectures for visual saliency prediction, we use 3D convolutions in the encoder and generalize SphereNet kernels to build the decoder. We further analyze the statistical properties of the viewing biases present in different 360° video datasets and across different types of 360° videos, which provides insights for the design of a fusion mechanism that adaptively fuses the predicted saliency map with the viewing bias. The proposed model achieves the best results on every dataset tested (e.g., Salient360!, PVS, Sport360).

Parallel Abstract


360° video has been applied in many areas such as immersive content, virtual tours, and surveillance systems. Compared to field-of-view prediction on planar videos, the explosive amount of information contained in the omnidirectional view of the entire sphere poses an additional challenge for predicting highly salient regions in 360° videos. In this work, we propose a visual saliency prediction model that directly takes 360° videos in the equirectangular format. Unlike previous works, which often adopted recurrent neural network (RNN) architectures for the saliency detection task, we apply 3D convolutions in a spatial-temporal encoder and generalize SphereNet kernels to construct a spatial-temporal decoder. We further study the statistical properties of the viewing biases present in 360° datasets across various video types, which provides insights into the design of a fusing mechanism that incorporates the predicted saliency map with the viewing bias in an adaptive manner. The proposed model yields state-of-the-art performance, as evidenced by empirical results on renowned 360° visual saliency datasets such as Salient360!, PVS, and Sport360.
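The fusing mechanism above is described only as combining the predicted saliency map with the viewing bias "in an adaptive manner." The following is a minimal PyTorch sketch of one plausible realization, a convex combination whose per-frame mixing weight is predicted from encoder features; the module name BiasFusion, the alpha head, and the equator-Gaussian prior are illustrative assumptions, not the thesis' actual design.

import torch
import torch.nn as nn

class BiasFusion(nn.Module):
    """Fuse a predicted saliency map with a viewing-bias prior using a
    per-frame weight alpha predicted from pooled encoder features.
    (Hypothetical sketch; not the exact mechanism proposed in the thesis.)"""

    def __init__(self, feat_channels: int):
        super().__init__()
        # Small head mapping global features to a scalar mixing weight in (0, 1).
        self.alpha_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, saliency, bias_prior, feats):
        # saliency:   (B, 1, H, W) predicted equirectangular saliency map
        # bias_prior: (1, 1, H, W) dataset-level viewing-bias map
        # feats:      (B, C, h, w) encoder features that drive the mixing weight
        alpha = self.alpha_head(feats).view(-1, 1, 1, 1)        # (B, 1, 1, 1)
        fused = alpha * saliency + (1.0 - alpha) * bias_prior   # convex combination
        # Renormalize so each fused map peaks at 1.
        return fused / fused.amax(dim=(2, 3), keepdim=True).clamp(min=1e-8)

# Usage on dummy tensors; the equator-biased Gaussian prior reflects the common
# observation that viewers of 360° video fixate near the equator.
fusion = BiasFusion(feat_channels=64)
saliency = torch.rand(2, 1, 120, 240)
lat = torch.linspace(-1.0, 1.0, 120).view(1, 1, 120, 1)
bias_prior = torch.exp(-(lat ** 2) / 0.2).expand(1, 1, 120, 240)
feats = torch.rand(2, 64, 15, 30)
print(fusion(saliency, bias_prior, feats).shape)   # torch.Size([2, 1, 120, 240])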
