
Salient Object-aware Panorama Stitching of Dual-fisheye Camera

Advisor: Chia-Wen Lin (林嘉文)

Abstract


Dual-fisheye panoramic cameras have recently seen wide use in applications such as extreme sports, virtual reality, and conference-room cameras. Taking the conference-room camera as an example, preserving the integrity of every attendee's face is critical, because in a meeting room people are the salient and important objects. In other words, when someone moves across the region where the two lenses overlap, producing a seamless, artifact-free stitch becomes challenging. The warping and seam-finding stages play a crucial role in the quality of the final panorama. However, existing methods do not take moving-object information into account in these two stages, especially in seam finding: in video, changes in the seam's position directly affect the viewing experience, so in addition to the commonly used regularization terms, we design an algorithm that targets moving and salient objects. In this thesis, we propose a moving- and salient-object-aware panorama stitching algorithm for dual-fisheye cameras. First, to let the alignment term in warping focus on correspondences that land on moving objects, we compute a moving weight m_t from the matched points' coordinates. Second, a suitable seam position helps reduce stitching artifacts, so we derive an adaptive weight λ_t by comparing the warping results of consecutive frames. Finally, for moving and salient objects, we use human detection and segmentation results to assign corresponding penalties in the seam-finding energy map.
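The abstract does not give a formula for the moving weight m_t, only that it is computed from the matched points' coordinates. As one plausible illustration (the median-based global-motion estimate and the normalization below are assumptions, not the thesis's actual definition), matches whose displacement deviates from the dominant motion can be treated as lying on moving objects and up-weighted:

```python
import numpy as np

def moving_weights(pts_prev, pts_curr):
    """Illustrative per-match moving weight m_t (assumed formulation).

    pts_prev, pts_curr: (N, 2) arrays of matched point coordinates in
    consecutive frames. Matches whose motion deviates from the median
    (global) motion are assumed to lie on moving objects and receive a
    larger weight, so the warping alignment term can focus on them.
    """
    disp = pts_curr - pts_prev                # per-match motion vectors
    global_motion = np.median(disp, axis=0)   # robust background motion
    residual = np.linalg.norm(disp - global_motion, axis=1)
    # Normalize to [0, 1]: larger residual -> more likely a moving object.
    return residual / (residual.max() + 1e-8)
```

A match that moves with the background gets a weight near 0, while a match on a person walking through the overlap region gets a weight near 1.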

Abstract (English)


Dual-fisheye cameras have recently become widely used in many settings, e.g., extreme sports, virtual reality, and meeting-room cameras. In a meeting room, for example, it is important to keep every attendee's face intact, because people are the salient objects in that setting. Consequently, producing a seamless, defect-free stitch becomes challenging when a person moves across the overlapping region between the two fisheye lenses. The warping and seam-finding phases play a significant role in the quality of the final panorama. However, existing methods do not take moving-object information into account in these two phases, especially in seam finding; in video, changes in the seam position strongly affect the viewing experience. In addition to the usual last-frame regularization, we design an algorithm that accounts for moving and salient objects. We propose a moving- and salient-object-aware stitching algorithm for dual-fisheye cameras. First, so that the alignment term of warping can focus on matched points that land on moving objects, we compute a moving weight m_t from the matched points' coordinates. Second, an appropriate seam position reduces defects caused by warping error, so we compare warping results between consecutive frames to obtain an adaptive weight λ_t for the seam regularization term. Third, to achieve better stitching on salient objects, we add a salient-object term to seam finding, in the form of a corresponding energy penalty derived from human detection and segmentation.
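The third step adds a salient-object penalty to the seam-finding energy map so the seam avoids cutting through people. A minimal sketch of that idea, using a dynamic-programming vertical seam as a simplified stand-in for the thesis's seam finder (the penalty value and the DP formulation are assumptions):

```python
import numpy as np

def find_seam(diff, person_mask, penalty=1e3):
    """Find a vertical seam through the overlap region.

    diff:        (H, W) per-pixel color difference between the two
                 warped fisheye images in their overlap.
    person_mask: (H, W) boolean mask from human detection/segmentation.
    penalty:     energy added on salient pixels so the seam avoids
                 people (value is an assumption for illustration).
    Returns the seam's column index for each row.
    """
    H, W = diff.shape
    # Energy map: alignment error plus a salient-object penalty.
    energy = diff + penalty * person_mask.astype(diff.dtype)
    cost = energy.copy()
    for y in range(1, H):
        prev = cost[y - 1]
        # Each pixel may connect to its three upper neighbors.
        left = np.concatenate(([np.inf], prev[:-1]))
        right = np.concatenate((prev[1:], [np.inf]))
        cost[y] += np.minimum(prev, np.minimum(left, right))
    # Backtrack from the cheapest pixel in the bottom row.
    seam = np.empty(H, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for y in range(H - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(W, x + 2)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam
```

Because every pixel inside `person_mask` carries the large penalty, the cheapest path routes around detected people, which is the behavior the salient-object term is meant to enforce.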

