In this thesis, we study the problem of text-guided image manipulation without ground-truth target image supervision. Observing only the input image, the user-given instruction, and the object class labels of the corresponding image, we propose a Cyclic-Manipulation GAN (cManiGAN) to tackle this challenging task. First, by introducing an image-text cross-modal interpreter that verifies the output image against the corresponding instruction, we are able to provide word-level feedback for training the image generator. Moreover, operational cycle-consistency is further exploited for image manipulation: an "undo" instruction is synthesized so that applying it to the manipulated output recovers the input image, offering additional supervision at the pixel level. We conduct extensive experiments on the CLEVR and COCO datasets. While the latter is particularly challenging due to its diverse visual and semantic information, our experimental results on both datasets confirm the effectiveness and generalizability of the proposed method.
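As a rough sketch only, and under the assumption (not stated in this abstract) that the word-level feedback takes the form of a per-word interpreter score and the pixel-level supervision takes the form of an $\ell_1$ reconstruction loss, the two sources of supervision described above could be written as
% Illustrative sketch only; the notation below is assumed, not taken from the abstract.
% G: image generator, D_int: image-text cross-modal interpreter,
% x: input image, t: instruction, t_undo: synthesized "undo" instruction.
\begin{align}
  \mathcal{L}_{\text{word}} &= -\sum_{w \in t} \log D_{\text{int}}\big(w \mid G(x, t)\big), \\
  \mathcal{L}_{\text{cyc}}  &= \big\lVert\, G\big(G(x, t),\, t_{\text{undo}}\big) - x \,\big\rVert_{1}.
\end{align}
Here $G$ denotes the image generator, $D_{\text{int}}$ the cross-modal interpreter, $x$ the input image, $t$ the instruction, and $t_{\text{undo}}$ the synthesized "undo" instruction; the exact objectives used by cManiGAN may differ from this sketch.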