Time Flies: Animating a Still Image with Time-Lapse Video as Reference

Advisor: 邱維辰

Abstract


Time-lapse videos are a common and visually appealing type of video in daily life, typically using a short clip to depict the long-term dynamics of a single location. Shooting them, however, poses many practical difficulties: besides being time-consuming, it places stringent requirements on mounting the camera rigidly. In this thesis, we propose a method that generates a time-lapse video from a single still image, using an arbitrary time-lapse video as reference. In other words, our key idea is to extract the style and the features of temporal variation of each object class from the reference video and transfer them onto the input still image, finally producing a time-lapse video whose scene is consistent with the still image while exhibiting temporal dynamics similar to the reference. Our method is trained in a self-supervised manner, and to ensure the temporal consistency/smoothness and realism of the generated videos, we introduce several novel designs into our architecture, including class-wise Noise Adaptive Instance Normalization (NoiseAdaIN), a flow loss, and a video-based adversarial learning scheme. Compared with several existing style transfer methods, our approach is not only computationally efficient but also creates more realistic and temporally smoother time-lapse videos, while faithfully preserving the reference video's characteristics of temporal variation.
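The abstract names class-wise NoiseAdaIN without defining it. The sketch below is an illustration only, under one plausible reading: an AdaIN-style re-normalization computed separately inside each semantic class mask, with Gaussian noise perturbing the style statistics to create stochastic frame-to-frame variation. All names (`masked_stats`, `classwise_noise_adain`, `noise_scale`) are hypothetical, not the thesis's actual interface:

```python
import torch

def masked_stats(feat, mask, eps=1e-5):
    # Per-channel mean/std of feat (N, C, H, W) over the spatial
    # positions selected by a binary class mask (N, 1, H, W).
    area = mask.sum(dim=(2, 3), keepdim=True).clamp(min=1.0)
    mean = (feat * mask).sum(dim=(2, 3), keepdim=True) / area
    var = ((feat - mean) ** 2 * mask).sum(dim=(2, 3), keepdim=True) / area
    return mean, (var + eps).sqrt()

def classwise_noise_adain(content, style, content_masks, style_masks,
                          noise_scale=0.1):
    # For each semantic class: normalize the content features inside the
    # class mask, then re-style them with the matching class statistics
    # of the reference features, perturbed by Gaussian noise so that
    # repeated calls yield stochastic frame-to-frame variation.
    out = torch.zeros_like(content)
    for m_c, m_s in zip(content_masks, style_masks):
        c_mean, c_std = masked_stats(content, m_c)
        s_mean, s_std = masked_stats(style, m_s)
        s_std = s_std * (1 + noise_scale * torch.randn_like(s_std))
        s_mean = s_mean + noise_scale * torch.randn_like(s_mean)
        out = out + m_c * ((content - c_mean) / c_std * s_std + s_mean)
    return out
```

Perturbing the statistics rather than the features keeps each class's overall style tied to the reference while still allowing per-frame randomness; the thesis's actual formulation may differ.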

Parallel Abstract


Time-lapse videos usually exhibit eye-catching appearances but are often hard to create. In this thesis, we propose a self-supervised end-to-end model to generate a time-lapse video from a single image and a reference video. Our key idea is to extract both the style and the features of temporal variation from the reference video and transfer them onto the input image. To ensure both the temporal consistency and the realism of the resultant videos, we introduce several novel designs in our architecture, including class-wise NoiseAdaIN, a flow loss, and a video discriminator. In comparison to baselines built on state-of-the-art style transfer approaches, our proposed method is not only computationally efficient but also able to create more realistic and temporally smooth time-lapse videos from a still image, with their temporal variation consistent with the reference.
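Similarly, the flow loss is only named here. A common way to build such a temporal-consistency term, and the assumption behind this sketch, is to warp the previous generated frame with an estimated optical flow and penalize its difference from the current frame; `warp` and `flow_loss` are illustrative names, and PyTorch conventions are assumed:

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    # Backward-warp frame (N, C, H, W) with a dense optical flow field
    # (N, 2, H, W) given in pixels: each output pixel samples the input
    # at its own location displaced by the flow.
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype, device=frame.device),
        torch.arange(w, dtype=frame.dtype, device=frame.device),
        indexing="ij",
    )
    x = xs.unsqueeze(0) + flow[:, 0]
    y = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects sampling coordinates normalized to [-1, 1].
    grid = torch.stack((2 * x / (w - 1) - 1, 2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

def flow_loss(prev_frame, cur_frame, flow):
    # Penalize pixels of the current frame that disagree with the
    # previous frame warped along the estimated flow.
    return F.l1_loss(cur_frame, warp(prev_frame, flow))
```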
