
風格創造器-應用多條件生成對抗模型於上衣時尚風格設計

Style Creator-Generate Multi-Fashion-Style on clothes using the Multi-Domain Conditional Generative Adversarial Nets

Advisor: 周建興

Abstract


Designers often need a spark of inspiration when designing clothes, and customers likewise need multiple references when choosing garments, yet in practice collecting material and searching for clothes costs both groups considerable time. In recent years, GANs (generative adversarial networks) have produced remarkable results in fashion style design, and ever more related models have appeared. Most of them, however, either transfer a selected target garment onto an input image or swap the face so that the target outfit appears on the user; these approaches address "clothing matching" and are rarely designed around "clothing style" itself. Moreover, existing style-translation models can only convert between two styles at a time, so every additional style requires time-consuming retraining. To solve these problems, this thesis designs a multi-condition generative adversarial network that efficiently generates tops in multiple styles.

The proposed method lets a user input a single image and obtain multiple style-translated results based on the original garment, e.g., plain, plaid, stripes, and dots, as well as background-color changes. Beyond single-style translation, the user can choose mixed styles, e.g., color change plus dots, plaid plus stripes, or dots plus stripes. Designers and customers can thus directly obtain images in many styles and style mixes, making the search for inspiration faster and more convenient.

Parallel Abstract (English)


Designers often need inspiration when designing clothes, and customers also need multiple references when choosing them, but in reality both spend a great deal of time collecting material and searching for the clothes they want. Recent studies have shown remarkable success in fashion style translation, and more related models have been presented, such as transferring a selected target garment onto an input image or swapping the input image's face onto the target image directly. Most of them, however, are designed for "clothing matching" rather than "clothing style", and the models that do translate clothing style can only handle two styles at a time, so each additional style must be retrained separately. To solve these problems, this thesis proposes a multi-condition generative adversarial network that generates multi-style clothes effectively. It allows users to generate a variety of style-translated results based on the original image, e.g., plain, plaid, stripes, dots, and background-color changes. Beyond single-style translation, users can choose mixed styles, e.g., color change with dots, plaid with stripes, or dots with stripes. This lets designers and customers directly obtain pictures in many styles and style mixes, making inspiration faster and more convenient.
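The multi-domain conditioning described above (one generator covering many selectable, mixable styles) is typically realized, as in StarGAN, by appending the style label to the input image as extra constant channels. Below is a minimal NumPy sketch of that conditioning step only; the style names and the `style_condition` helper are illustrative assumptions, not code from the thesis.

```python
import numpy as np

# Hypothetical style vocabulary mirroring the abstract: plain, plaid,
# stripes, dots, plus a background-color change.
STYLES = ["plain", "plaid", "stripes", "dots", "color_change"]

def style_condition(image, selected_styles):
    """Concatenate a multi-hot style label onto the image as extra channels.

    This StarGAN-style conditioning is what lets a single generator serve
    every style (and style mixes) instead of retraining a pairwise model
    for each new style.
    """
    h, w, _ = image.shape
    label = np.zeros(len(STYLES), dtype=np.float32)
    for s in selected_styles:
        label[STYLES.index(s)] = 1.0  # multi-hot: a mix sets several bits
    # Broadcast each label entry to a constant H x W plane, stacked after RGB.
    planes = np.ones((h, w, len(STYLES)), dtype=np.float32) * label
    return np.concatenate([image, planes], axis=-1)

img = np.random.rand(64, 64, 3).astype(np.float32)
x = style_condition(img, ["plaid", "stripes"])  # a mixed-style request
print(x.shape)  # (64, 64, 8): 3 RGB channels + 5 style channels
```

Because the label is just extra input channels, requesting a mix such as "plaid plus stripes" needs no retraining: both bits are set and the same trained generator is reused.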

Parallel Keywords

Style Transform, Human-Parsing, CGAN, StarGAN
