

Target-free Text-guided Image Manipulation

Advisor: 王鈺強
Co-advisors: 陳祝嵩, 邱維辰 (Wei-Chen Walon Chiu)


Abstract


In this thesis, we study the problem of text-guided image manipulation without ground-truth image supervision. With only the input image, the desired instruction, and object labels observed, we propose a Cyclic-Manipulation GAN (cManiGAN) for tackling this challenging task. By introducing an image-text cross-modal interpreter that verifies output images against the corresponding instruction, we are able to provide word-level feedback for training the image generator. Moreover, an operational cycle-consistency is further utilized for image manipulation: it synthesizes the "undo" instruction for recovering the input image from the manipulated output, offering additional supervision at the pixel level. We conduct extensive experiments on the CLEVR and COCO datasets. While the latter is particularly challenging due to its diverse visual and semantic information, our experimental results on both datasets confirm the effectiveness and generalizability of our proposed method.
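The two supervision signals described above can be illustrated with a minimal sketch. This is not the thesis implementation: `generator`, `interpreter`, and `undo_instruction` are hypothetical toy stand-ins (images are plain lists of floats, and the "instruction" is a trivial `"add <delta>"` string), shown only to make the structure of the word-level and pixel-level feedback concrete.

```python
# Toy sketch of cManiGAN's two supervision signals, under the assumption
# of a trivial "add <delta>" instruction acting on a list-of-floats "image".

def generator(image, instruction):
    # Stand-in manipulator: shift every pixel by the signed amount
    # parsed from the instruction, e.g. "add 0.5" or "add -0.5".
    delta = float(instruction.split()[1])
    return [p + delta for p in image]

def interpreter(image_before, image_after, instruction):
    # Cross-modal check: does the output actually reflect the instruction?
    # Returns a consistency score in [0, 1] (1 = fully consistent).
    delta = float(instruction.split()[1])
    expected = [p + delta for p in image_before]
    err = sum(abs(a - b) for a, b in zip(expected, image_after)) / len(image_after)
    return max(0.0, 1.0 - err)

def undo_instruction(instruction):
    # Synthesize the "undo" instruction by negating the argument.
    op, arg = instruction.split()
    return f"{op} {-float(arg)}"

def cycle_losses(image, instruction):
    output = generator(image, instruction)
    # Word-level feedback: penalize outputs the interpreter rejects.
    interp_loss = 1.0 - interpreter(image, output, instruction)
    # Pixel-level feedback: applying the undo instruction to the output
    # should recover the original input image.
    recovered = generator(output, undo_instruction(instruction))
    cycle_loss = sum(abs(a - b) for a, b in zip(image, recovered)) / len(image)
    return interp_loss, cycle_loss
```

With this toy generator both losses vanish, e.g. `cycle_losses([0.1, 0.2], "add 0.5")` returns `(0.0, 0.0)`; during GAN training the two terms would instead act as gradients pushing an imperfect generator toward instruction-faithful, invertible edits.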

