
Image Outpainting Based on Attention Model

Advisor: 顏淑惠

Abstract


With the remarkable progress of deep learning in image restoration, many researchers have shifted their focus from image inpainting to the more challenging task of image outpainting. Although attention modules have proven very helpful for image inpainting, the commonly used attention models are not necessarily suitable for image outpainting. In this thesis, we first propose a three-stage adversarial model: the first stage generates a predicted edge map that serves as a conditional input to the following two stages; the second stage generates a coarse prediction that conditions the final stage; and the third stage produces the predicted image with plausible structure and detail. Second, we replace the conventional contextual attention in image outpainting with a Squeeze-and-Excitation Network (SENet). SENet aggregates global features and adaptively recalibrates channel responses, which better renders detailed textures in the extended regions. Finally, we design a non-fixed local discriminator for the adversarial model: it randomly crops a region containing both known and generated content and judges whether it is a real image. With this randomized discriminator, our model extends images naturally beyond their boundaries and produces results consistent with the interior.
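The squeeze-and-excitation recalibration described above can be sketched as follows. This is a minimal NumPy sketch, not the thesis's implementation; the channel count, reduction ratio, and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    # Squeeze: global average pooling over spatial dims -> one value per channel
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck FC (ReLU) then expansion FC (sigmoid)
    # -> per-channel weights in (0, 1)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))
    # Recalibration: scale each channel of the feature map by its weight
    return feat * s[:, None, None]

# Toy usage: C = 8 channels, reduction ratio r = 2 (both illustrative)
rng = np.random.default_rng(0)
c, r = 8, 2
feat = rng.standard_normal((c, 16, 16))
w1 = rng.standard_normal((c // r, c))   # squeeze vector -> bottleneck
w2 = rng.standard_normal((c, c // r))   # bottleneck -> channel weights
out = se_block(feat, w1, w2)
print(out.shape)  # (8, 16, 16)
```

Because the squeeze step pools over the whole spatial extent, every output channel is modulated by global context, which is the property the thesis relies on for the extended regions.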

English Abstract


With the rapid progress of deep neural networks, there have been many impressive results on image inpainting. Consequently, several studies have tried to transfer these successful experiences to image outpainting. The contextual attention net is one of the popular architectural units applied to outpainting. We argue that it may not be as suitable when embedded in an outpainting network. Instead, we adopt SENet, since it has a global receptive field and performs channel-wise feature recalibration, which is very helpful for image outpainting. We also propose a local discriminator mechanism that decides whether a randomly selected partial image is real. Through this randomness, the generator can produce realistic results.
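The randomized input to the local discriminator can be sketched as follows. The horizontal-only extension, the image and patch sizes, and the helper name `sample_boundary_patch` are assumptions for illustration; the discriminator network itself is omitted.

```python
import numpy as np

def sample_boundary_patch(img, known_w, patch, rng):
    """Crop a random square patch that straddles the border between the
    known (centered) region and a generated margin of a horizontally
    outpainted image. img is (H, W, C); the known content occupies the
    centered known_w columns."""
    h, w, _ = img.shape
    left = (w - known_w) // 2        # first known column
    right = left + known_w           # first generated column on the right
    # Pick one of the two known/generated borders at random, then a
    # horizontal offset so the crop contains pixels from both sides.
    boundary = left if rng.random() < 0.5 else right
    x0 = int(rng.integers(max(0, boundary - patch + 1),
                          min(w - patch, boundary - 1) + 1))
    y0 = int(rng.integers(0, h - patch + 1))
    return img[y0:y0 + patch, x0:x0 + patch]

# Toy usage: mark known pixels 1 and generated margins 0 so we can see
# that a sampled patch mixes both kinds of content.
rng = np.random.default_rng(0)
img = np.zeros((128, 128, 3))
img[:, 32:96] = 1.0                  # known_w = 64, centered
crop = sample_boundary_patch(img, 64, 32, rng)
print(crop.shape)  # (32, 32, 3)
```

Because the crop position varies from step to step, the discriminator sees many different seams between known and generated content, which is the randomness the abstract credits for coherent extensions.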

