Designers often need inspiration when designing clothes, and customers likewise need multiple references when choosing styles, yet in practice both spend considerable time collecting material and searching for garments. In recent years, generative adversarial networks (GANs) have shown remarkable results in fashion style design, and a growing number of related models have appeared. Most of them, however, either transfer a selected target garment onto the input image or replace the face so that the target outfit appears on the user; these approaches address "clothing matching" and are rarely designed for "clothing style". Moreover, existing style-translation models handle only two styles at a time, so each new style requires time-consuming retraining. To solve these problems, this paper proposes a multi-condition generative adversarial network that efficiently generates tops in multiple styles. Given a single input image, the model outputs several style-translated versions of the original garment, e.g., plain, plaid, striped, and dotted, as well as base-color changes. Beyond single-style translation, users can also choose mixed styles, e.g., a color change combined with dots, plaid combined with stripes, or dots combined with stripes. Designers and customers can thus directly obtain images in many styles and style mixes, making the search for inspiration faster and more convenient.
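The "multi-condition" input described above can be illustrated with a small sketch. The abstract does not specify the conditioning scheme, so the following assumes a StarGAN-style multi-hot label vector that a generator would concatenate with the input image; the style names, their ordering, and the function `make_condition` are hypothetical, chosen only to mirror the single-style and mashup examples in the abstract.

```python
# Hypothetical sketch: building a multi-hot style condition vector for a
# multi-condition generator. STYLES and make_condition are illustrative
# assumptions, not definitions from the paper.

STYLES = ["plain", "plaid", "stripe", "dot", "recolor"]

def make_condition(selected):
    """Return a multi-hot vector over STYLES.

    `selected` may name a single style (e.g. ["dot"]) or a mix
    (e.g. ["plaid", "stripe"]), matching the mashup options
    described in the abstract.
    """
    unknown = set(selected) - set(STYLES)
    if unknown:
        raise ValueError(f"unknown styles: {sorted(unknown)}")
    return [1.0 if s in selected else 0.0 for s in STYLES]

# Single-style translation: dots only.
print(make_condition(["dot"]))              # [0.0, 0.0, 0.0, 1.0, 0.0]
# Mashup: base-color change combined with dots.
print(make_condition(["recolor", "dot"]))   # [0.0, 0.0, 0.0, 1.0, 1.0]
```

Because styles are expressed as one shared label vector rather than one model per style pair, a single trained generator can serve every style and mix, which is the retraining cost the abstract aims to eliminate.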