
Stroke-guided Artistic Colorization using Two-stage Attentional cGANs

Advisor: 林奕成

Abstract


Image colorization is a long-standing problem, and it has become especially practical for artwork, given the flourishing growth of the entertainment industry. Recently, the rise of generative adversarial networks (GANs) has brought great progress to this field. However, existing approaches mainly focus on realistic photo colorization or artistic line-art colorization, and rarely address artistic grayscale colorization. We propose a two-stage, user-guided deep learning architecture for artistic image colorization, built on U-Net-style adversarial networks with a self-attention mechanism, conditioned on uncolored drawings and scribble-based hints. The user first gives the system an uncolored sketch along with several color strokes as conditional hints. In the first stage, the system learns the lighting and outputs a shaded grayscale image from the input sketch. In the second stage, the model colorizes this grayscale image into a complete colored picture. With the proposed models, we can handle both line-art and grayscale inputs and generate colorful artistic paintings adapted to different styles.
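The two ideas the abstract combines — a self-attention block inside the generator and a two-stage (shading, then coloring) pipeline conditioned on hint strokes — can be sketched in a few lines. The following is a minimal NumPy illustration, not the thesis's implementation: the SAGAN-style attention formulation and all names (`self_attention`, `two_stage_colorize`, `gamma`, `shade_net`, `color_net`) are assumptions made for this sketch.

```python
import numpy as np

def self_attention(x, wf, wg, wh, gamma=0.0):
    """SAGAN-style self-attention over one feature map (an assumption here;
    the thesis only states that a self-attention mechanism is used).

    x           : (C, H, W) feature map.
    wf, wg, wh  : query (C', C), key (C', C), and value (C, C) projections.
    gamma       : learned residual weight; 0 means pure skip connection.
    Returns a tensor of the same shape as x.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                 # (C, N) with N spatial positions
    f = wf @ flat                              # queries (C', N)
    g = wg @ flat                              # keys    (C', N)
    h = wh @ flat                              # values  (C, N)
    logits = f.T @ g                           # (N, N): logits[j, i] = q_j . k_i
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)    # softmax over key positions i
    o = h @ attn.T                             # o[:, j] = sum_i attn[j, i] * v_i
    return (gamma * o + flat).reshape(C, H, W)

def two_stage_colorize(sketch, hints, shade_net, color_net):
    """Two-stage pipeline from the abstract: stage 1 turns the uncolored
    sketch into a shaded grayscale image; stage 2 colorizes that image.
    Both stages are conditioned on the user's color hint strokes by
    channel-wise concatenation (the conditioning scheme is an assumption).

    sketch : (1, H, W) uncolored drawing.
    hints  : (3, H, W) sparse color strokes.
    """
    gray = shade_net(np.concatenate([sketch, hints], axis=0))  # stage 1
    return color_net(np.concatenate([gray, hints], axis=0))    # stage 2
```

With `gamma=0.0` the attention block reduces to an identity skip connection, which mirrors the common practice of initializing the residual weight at zero so attention is blended in gradually during training.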

