
在自然場景下之虛擬試穿

Virtual Try-On in the Wild

Advisor: 莊永裕

Abstract


Virtual try-on synthesizes, from an image of an in-shop garment and an image of a user, a result image of that user wearing the garment. The field has advanced considerably in recent years, yet it remains difficult to deploy in real-world settings. A practical virtual try-on system must not only fit one or more garment images onto the user plausibly, but must also account for the lighting difference between the product photo and the user photo. However, prior work, whether generation-based or warping-based, addresses neither of these real-world issues: it cannot handle the large discrepancy between the product image and the garment as worn that arises from layered-wearing styles, nor does it consider the lighting difference between the two images. We therefore propose a new method targeting both problems. For the discrepancy caused by layered wearing, we exploit human prior knowledge to modify the region belonging to the garment and warp it into a plausible result. For the artifacts that arise when a mean-absolute-error (L1) loss is trained while the lighting difference between the garment and user images is ignored, we use the lighting information obtainable from the user image and add an intrinsic image decomposition module, thereby avoiding the problems caused by applying the L1 loss directly between the result image and the ground truth.

Parallel Abstract (English)


Virtual try-on, which fits new in-shop clothes onto a person image, has made great progress as a research topic, yet remains challenging to apply in real-world scenarios. A pipeline usable in the wild should not only transform each article of the outfit into a seamlessly fitting shape; it should also take into account the difference in lighting between the photo of the in-shop clothes and the photo of the clothes worn in the wild. However, previous generation-based and warping-based works fail to meet these critical requirements for plausible full-body virtual try-on. They cannot handle layered-wearing styles, which cause large spatial misalignment between the input image and the target clothes, and they do not address the intensity difference between the same clothes captured under different lighting. Thus, we propose a new Full-Body Virtual Try-On Network (FB-VTON) for these real-world challenges. First, FB-VTON learns to warp each article into an intuited, modified body segment of the target person rather than the given body segment. Second, to avoid the artifacts produced when an L1 loss is trained on the same article under unpaired lighting, we employ an image decomposition module and use the lighting information available from the input to generate a proper result image.
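The role of the decomposition module can be sketched with a toy example (a minimal illustration under the standard intrinsic-image assumption I = R × S, not the thesis's actual implementation): if the same reflectance R is photographed under two shading levels S, an L1 loss on raw pixels penalizes the lighting gap itself, while an L1 loss on the recovered reflectance does not.

```python
def l1(a, b):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Toy 1-D "images": the same garment reflectance under two lighting conditions.
reflectance = [0.2, 0.8, 0.5, 0.4]
shading_shop = 1.0   # bright in-shop photo
shading_wild = 0.6   # dimmer in-the-wild photo

# Intrinsic image model: observed intensity I = R * S (elementwise).
img_shop = [r * shading_shop for r in reflectance]
img_wild = [r * shading_wild for r in reflectance]

# Naive L1 on raw pixels registers the lighting gap as error ...
assert l1(img_shop, img_wild) > 0.1

# ... while L1 on the reflectance recovered by dividing out shading does not.
r_shop = [i / shading_shop for i in img_shop]
r_wild = [i / shading_wild for i in img_wild]
assert l1(r_shop, r_wild) < 1e-9
```

In a real pipeline the shading is of course not known and must be estimated by the decomposition network; this sketch only shows why supervising in the decomposed space removes the lighting term from the loss.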

