In recent years, with the rise of deep generative adversarial networks, image restoration has achieved better and more realistic results than traditional methods. However, typical deep learning methods require a huge number of training parameters and cannot be applied to completing multiple forms of image corruption and missing regions. In addition, when a deep autoencoder is combined with an adversarial network, the training process is often unstable, or the model learns only to map every input image to one particular output. In this thesis, we propose constructing a lightweight conditional generative adversarial network, combined with a more stable adversarial training method, to handle a wide variety of image corruption cases and restore them into more realistic, complete images. We also propose a new training strategy that encourages the deep model to learn representative image features so that it can repair many different kinds of corruption. In our experiments, we verify that the proposed model requires the fewest training parameters among the compared deep learning methods. Moreover, both quantitatively and visually, our method outperforms traditional and deep learning methods on datasets of various types. On the application side, we further show that our model can still produce complete restorations for images at different resolutions and with user-defined corruption masks.
Recent image completion research based on deep neural networks has shown remarkable progress by using generative adversarial networks (GANs). However, these approaches still suffer from large model sizes and a lack of generality across various types of corruption. In addition, conditional GANs often suffer from mode collapse and unstable training. In this thesis, we overcome these shortcomings of previous models by proposing a lightweight conditional GAN and integrating a stable adversarial training strategy. Moreover, we present a new training strategy that teaches the model to complete different types of corruptions or missing regions in images. Experimental results demonstrate qualitatively and quantitatively that the proposed model provides significant improvement over state-of-the-art image completion methods on public datasets. In addition, we show that our model requires far fewer parameters while achieving superior results on different types of unseen corruption masks.