This thesis consists of two parts: reconstructing a 3D human body model from 1D data, and a garment generation method, with animation, suited to neural network learning. In the first part, we present a learned model of human body shape built on top of the SMPL model [8], a parametric body model capable of producing a wide variety of 3D human bodies. Our network feeds a few 1D measurements (height, chest girth, waist girth, arm length, leg length, and so on) into SMPL to generate a 3D body. In recent years, a growing number of papers have attempted to recover 3D body shape and pose from 2D images, but the results are not accurate enough for some applications. To our knowledge, this system is the first to train a network that generates a 3D human model from 1D data rather than from 2D images or 3D data. In the second part, we show a method that deforms a single template into a variety of garments, so every generated garment shares the template's vertex count and 3D topology. This consistency is essential for training neural networks, which typically require inputs of a fixed size.
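The first part, mapping a handful of 1D measurements to a 3D body, can be sketched as a small regression network whose output is fed to SMPL as shape coefficients. The following is an illustrative numpy sketch only, not the thesis's actual architecture: the layer sizes, the five chosen measurements, and the random (untrained) weights are all assumptions, and the 10-dimensional output stands in for the SMPL shape parameters (betas).

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random, untrained weights for a fully connected network (illustrative)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU MLP forward pass; the last layer is linear."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU
    return x

# 5 hypothetical measurements in -> one hidden layer -> 10 SMPL-style betas out
params = init_mlp([5, 32, 10])
measurements = np.array([170.0, 90.0, 75.0, 60.0, 80.0])  # cm, illustrative values
betas = forward(params, measurements)
print(betas.shape)  # (10,)
```

In the actual system, such a network would be trained on measurement/body pairs so that the predicted betas, passed through SMPL, reproduce the measured body.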
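The second part relies on the fact that deforming a template only moves its vertices: the face list, and hence the topology and the vertex count, never change, so every generated garment has the same fixed dimensionality. A minimal sketch of this invariant, with an entirely hypothetical template (the vertex and face counts are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical template garment: V vertices plus a fixed triangle list.
template_verts = rng.standard_normal((500, 3))    # 500 vertices, xyz
template_faces = rng.integers(0, 500, (900, 3))   # 900 triangles (illustrative)

def deform(verts, displacement):
    """Generate a new garment by displacing the template's vertices.
    The face list is untouched, so the topology is preserved."""
    return verts + displacement

# Two different garments generated from the same template.
g1 = deform(template_verts, rng.standard_normal((500, 3)) * 0.02)
g2 = deform(template_verts, rng.standard_normal((500, 3)) * 0.05)

# Every generated garment shares the template's vertex count and faces,
# which keeps the neural network's input dimension fixed.
print(g1.shape == g2.shape == template_verts.shape)  # True
```

Because all garments are indexed identically, per-vertex quantities (positions, displacements, simulation targets) can be stacked directly into fixed-size training tensors.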