
基於生成對抗網路的不同風格線稿著色

Stylized Colorization for Line-Art with Generative Adversarial Networks

Advisor: 賴尚宏

Abstract


Coloring style plays an important role in the painting process. Whether viewing picture books, comics, or other forms of illustration, readers can perceive the different atmospheres an artist intends to convey through the coloring style. With the development of deep network techniques, methods for automatic colorization and image style transfer have been proposed in succession; however, none of the existing methods can perform end-to-end colorization in different styles. The network must learn two distinct objectives, colorization and style, at the same time, and generating image translations across multiple domains is itself a challenging task.

In this thesis, we propose a stylized colorization model based on conditional generative adversarial networks. Our generator consists of an encoder and a decoder: the encoder learns high-dimensional feature representations of the uncolored image, while the decoder uses these features, together with the input style condition, to produce a colorized result in the specified style. For the discriminator, we propose a two-discriminator architecture in which one discriminator judges the colorization of the generated image and the other judges its style, allowing the generator to learn the colorization and style objectives simultaneously and more efficiently. Experimental results show that our single model achieves excellent colorization results across multiple domains; compared with colorizing first and then applying style transfer, our model produces results directly through a simpler procedure.

Parallel Abstract


The development of automatic line-art colorization has improved greatly since researchers proposed applying Generative Adversarial Networks (GANs) to this problem. Colorizing the same line art in different ways produces illustrations of different styles, and these styles suit different situations and scenarios: the styles used for picture books, comics, and animations, for example, are all distinct, and each evokes a different feeling in readers. In this paper, we focus on a new problem, stylized colorization, which is to colorize an input line-art using a specific coloring style. This problem can be regarded as a multi-domain image translation problem. We propose an end-to-end adversarial network for stylized colorization in which the model consists of one generator and two discriminators. Our generator receives a pair of a line-art and a coloring style as input and produces a stylized-colorization image of the line-art. The two discriminators, on the other hand, judge the stylized-colorization images in two different aspects: one for colorization, and one for coloring style. The generator and the two discriminators are jointly trained in an adversarial and end-to-end manner. Extensive experiments demonstrate superior colorization results from the proposed model in comparison with previous methods.
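The one-generator/two-discriminator objective described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: the hinge adversarial loss and the weighting factor `lam_style` are assumptions made for the sketch, and the actual loss functions used in the thesis may differ. The key point it shows is that the generator is penalized by two separate adversarial signals, one from the colorization discriminator and one from the style discriminator.

```python
# Hypothetical sketch of the joint adversarial objective with two
# discriminators: one scoring colorization realism, one scoring style.
# Scores follow the usual GAN convention: higher means "judged real".

def hinge_d_loss(real_score: float, fake_score: float) -> float:
    """Hinge loss for one discriminator (an assumed choice of GAN loss).

    Pushes scores on real samples above +1 and scores on
    generated samples below -1.
    """
    return max(0.0, 1.0 - real_score) + max(0.0, 1.0 + fake_score)


def generator_loss(fake_color_score: float,
                   fake_style_score: float,
                   lam_style: float = 1.0) -> float:
    """Generator loss: fool both discriminators at once.

    The generator minimizes the negated scores its outputs receive
    from the colorization and style discriminators; lam_style is a
    hypothetical weight balancing the two objectives.
    """
    return -fake_color_score - lam_style * fake_style_score
```

In a training loop, each step would update the colorization discriminator and the style discriminator with their own `hinge_d_loss`, then update the generator with `generator_loss`, so that both objectives are learned jointly and end to end.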

