
Visualizing Out-of-Sight Regions of Interest When Watching 360 Video Using Spatially Guided Picture-in-Picture Previews

Outside-In: Visualizing Out-of-Sight Regions-of-Interest in a 360 Video Using Spatial Picture-in-Picture Previews

Advisor: 陳炳宇 (Bing-Yu Chen)

Abstract


A 360 video records the entire surrounding environment, yet to provide a natural viewing experience, whether on a screen or through a head-mounted display (HMD), users typically see only a small portion of it at a time. When a video's regions of interest (ROIs) fall outside the field of view (FOV), i.e., outside the current viewport, users have difficulty finding them, and they may also spend time searching for ROIs that do not exist. We propose Outside-In, a visualization technique that re-introduces off-screen ROIs into the main viewport as spatially guided picture-in-picture (PIP) previews. The geometry of each preview window further conveys the ROI's direction relative to the main view, allowing users to navigate effectively. To mitigate occlusion among multiple previews, we apply rapid serial visual presentation (RSVP) so that each preview is fully visible within its allotted time slice. Through user studies, we identified preview-window geometry variations and an RSVP frequency that maintain readability. For practical use, we present touchscreen interactions for Outside-In and a live telepresence usage scenario.
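As a rough illustration of how an off-screen ROI's direction could drive a spatial preview, the following Python sketch computes the ROI's yaw/pitch offset from the current view center and anchors a PIP on the viewport border in that direction. The function name, parameters, and border-intersection rule are illustrative assumptions for this record, not the geometry developed in the thesis.

```python
import math

def pip_anchor(roi_yaw_deg, roi_pitch_deg, view_yaw_deg, view_pitch_deg,
               fov_h_deg=90.0, fov_v_deg=60.0):
    """Anchor an off-screen ROI's preview on the viewport border.

    All angles are in degrees; yaw differences wrap at +/-180.
    Returns None when the ROI is already inside the field of view,
    otherwise a normalized (x, y) anchor in [0, 1] x [0, 1] plus the
    ROI's angular distance from the view center.
    """
    # Signed angular offsets between the ROI and the view center.
    d_yaw = (roi_yaw_deg - view_yaw_deg + 180.0) % 360.0 - 180.0
    d_pitch = roi_pitch_deg - view_pitch_deg

    # ROI is on screen: no preview needed.
    if abs(d_yaw) <= fov_h_deg / 2 and abs(d_pitch) <= fov_v_deg / 2:
        return None

    # Direction toward the ROI in screen space, from the viewport center.
    angle = math.atan2(d_pitch, d_yaw)
    dx, dy = math.cos(angle), math.sin(angle)

    # Intersect that ray with the border of a unit square centered at (0.5, 0.5).
    scale = 0.5 / max(abs(dx), abs(dy))
    x = 0.5 + dx * scale
    y = 0.5 - dy * scale  # screen y grows downward
    return (x, y), math.hypot(d_yaw, d_pitch)
```

For example, under this sketch's convention `pip_anchor(170, 0, 0, 0)` places the preview at the middle of the right edge, hinting at an ROI far to the viewer's right and behind.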

Keywords

360 Video; Telepresence; Picture-in-Picture; Off-Screen; Multiple Foci

Parallel Abstract


360 videos contain a full field of environmental content; however, browsing these videos, either on screens or through head-mounted displays (HMDs), lets users consume only a subset of the full field for a natural viewing experience. This causes a search problem when the regions-of-interest (ROIs) in a video lie outside the current field of view (FOV) on the screen, and users may also search for ROIs that do not exist. We propose Outside-In, a visualization technique that re-introduces off-screen ROIs into the main screen as spatial picture-in-picture (PIP) previews. The geometry of the preview windows further encodes each ROI's direction relative to the main screen view, allowing for effective navigation. To mitigate occlusion caused by multiple PIPs, we further introduce rapid serial visual presentation (RSVP) to ensure that each preview is fully visible within its allotted time slice. Our user studies identified effective designs for the preview geometry and the RSVP interval that maintain readability. Two applications demonstrate using Outside-In for effective 360 video navigation on touchscreens and in live telepresence.
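To make the RSVP time-slicing concrete, here is a minimal, hypothetical Python sketch: given a cluster of mutually occluding previews, it returns the one to draw at a given playback time under round-robin scheduling. The function name `visible_pip` and the 0.7-second default interval are illustrative placeholders; the thesis selects the actual interval from its user studies.

```python
def visible_pip(overlapping_pips, t_seconds, interval_s=0.7):
    """Round-robin RSVP scheduling for mutually occluding previews.

    Exactly one preview from the cluster is shown at any time, so each
    one is fully visible during its allotted time slice.
    """
    if not overlapping_pips:
        return None
    slot = int(t_seconds // interval_s) % len(overlapping_pips)
    return overlapping_pips[slot]


# Example: three overlapping previews take turns every 0.7 seconds.
pips = ["ROI-A", "ROI-B", "ROI-C"]
for t in (0.0, 0.8, 1.5, 2.2):
    print(t, visible_pip(pips, t))  # A, B, C, then back to A
```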

Parallel Keywords

360 Videos; Telepresence; Picture-in-Picture; Off-Screen Targets
