Integrating Real World Distraction into Immersive Virtual Event through Generative AI-based Adaptive Storyteller

Advisor: 陳炳宇

Abstract


This study addresses the problem of "break in presence" (BiP) in virtual reality (VR) experiences. BiP refers to real-world distractions such as background noise, wind, or smells that can disrupt a user's immersion, shifting attention away from the virtual world toward real-world sensory input. To address this problem, we introduce generative artificial intelligence (GenAI) as an adaptive storyteller that weaves real-world distractions into the VR experience.

We feed real-world distraction data measured by sensors into the GenAI, which produces the scenario explanations needed to maintain immersion in the VR experience. Having learned from large amounts of data using advanced neural networks, GenAI can capture complex relationships, contextual nuances, and subtleties of style. This allows it to generate relevant content effectively, improving both the coherence and the efficiency of the narrative.

We propose a general system architecture and built a prototype for experimental validation. The results show that GenAI holds significant potential for resolving the BiP problem. We also recognize several technical limitations and discuss possible avenues for system improvement.

Ultimately, we believe that by seamlessly integrating the physical and virtual worlds, we can create a united reality that offers users a more immersive experience. The use of GenAI opens new possibilities for the future development of virtual reality technology.

Abstract (English)


Break in presence (BiP) inevitably occurs during virtual reality (VR) experiences, as real-world distractions intrude and can disrupt user immersion. Converting diverse and unpredictable real-world distractions into VR events, however, is challenging. In this paper, we tackle the BiP problem by introducing generative artificial intelligence (GenAI) as an adaptive storyteller. We feed real-world distractions, in the form of measured sensor data, into GenAI, which uses its adaptive creativity to produce credible scenario explanations that maintain immersion in the VR experience. We demonstrate how GenAI can interpret distractions and integrate them seamlessly into the virtual experience. This paper highlights GenAI's potential in addressing BiP in VR experiences. We present a general system architecture, a prototype, and user-study results. We also recognize technical limitations and discuss avenues for system improvement. We believe that seamlessly integrating the physical and virtual realms can create united realities for a more immersive experience.
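The pipeline described in the abstract — sensor-measured distractions handed to a GenAI storyteller that narrates them into the scene — can be sketched minimally as follows. All names here (`Distraction`, `build_prompt`, the scene description) are illustrative assumptions, not the thesis's actual implementation, and the language-model call is replaced by an offline stub:

```python
from dataclasses import dataclass


@dataclass
class Distraction:
    """A real-world distraction as reported by a sensor (hypothetical schema)."""
    kind: str          # e.g. "noise", "wind", "smell"
    intensity: float   # normalized to the range 0..1


# Hypothetical VR scene the storyteller must stay consistent with.
SCENE = "a medieval castle courtyard at dusk"


def build_prompt(scene: str, d: Distraction) -> str:
    """Compose a prompt asking the GenAI for an in-story explanation
    of the sensed distraction, so immersion is preserved."""
    return (
        f"You are an adaptive storyteller for a VR scene: {scene}. "
        f"A real-world distraction was sensed: {d.kind} "
        f"at intensity {d.intensity:.2f}. "
        "Narrate one sentence that explains this sensation inside the story."
    )


def narrate(prompt: str, generate=None) -> str:
    """Produce the in-story narration. `generate` would wrap a real
    GenAI API call; a stub stands in so this sketch runs offline."""
    if generate is None:
        generate = lambda p: (
            "A sudden gust sweeps across the courtyard as banners snap overhead."
        )
    return generate(prompt)


if __name__ == "__main__":
    d = Distraction(kind="wind", intensity=0.7)
    prompt = build_prompt(SCENE, d)
    print(prompt)
    print(narrate(prompt))
```

In a deployed system, `generate` would be bound to an actual GenAI endpoint and the resulting sentence would be voiced or displayed inside the VR event as it happens.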

Keywords

Virtual reality; Generative AI
