
以深度學習與多材料4D列印設計製造仿真人臉面具

Design and Fabrication of Realistic Human Masks via Deep Learning and Multi-material 4D Printing

Advisor: 莊嘉揚
This thesis will be available for download on 2029/07/30.

Abstract


This study uses a deep learning model together with 4D printing and shape memory materials to fabricate three-dimensional gridshells of realistic human faces. A shape memory polymer serves as the planar grid material: a fused deposition modeling (FDM) 3D printer prints SMP55 (shape memory polymer 55) and PLA (polylactide), and the pre-stress stored in SMP55 during printing acts as the 4D printing mechanism. When heated, the flat grid can bend upward, bend downward, shorten, or deform freely, saving the time and cost of printing support material, while deep learning automates the inverse design of the mask's planar grid.

The core of this work is using deep learning to inverse design the planar grid of a 4D realistic face mask, where inverse design means inferring, from a target shape, the material layout that produces it. Although our laboratory's previous work successfully inverse designed planar grid material layouts for target face shapes with a fully convolutional network, analysis of facial-feature proportions with the Dlib face detection model and visual inspection showed that the faces in its training set differed greatly from real facial proportions, and it had no interface for reading camera photographs as a test set, so the model could not be tested on everyday photographs; experiments confirmed that the previous model was inaccurate at inverse designing face grids.

Therefore, this study uses a multi-stage computer vision pipeline to turn camera photographs of faces into the model's test set. Referring to facial measurement literature and public face databases, 30,000 database masks with proportions close to real faces were generated by varying material layouts and facial-feature proportions; their deformed shapes were simulated with the finite element method to form the deep learning dataset, which was then used to train a fully convolutional network. Dlib filtered out masks that could not be recognized as faces, and statistics from the Dlib face detection model show that the facial-feature proportions of this dataset are close to those of public face databases and real faces.

In the design case studies, for random masks generated from polynomial parameters, the grids inverse designed by the model achieve a mean IoU of about 0.98 and pixel accuracy of about 0.99, and their deformed shapes match the targets with 0.95 structural similarity (SSIM) and an L2 norm error of 4.2. For camera photographs and AI-generated face photographs, the deformed shapes of the inverse designed grids reach 0.8 SSIM and an L2 norm error of 10.6 against the targets. Compared with the previous work's 0.59 SSIM and 18.9 L2 norm error for inverse designed face masks, and as measured by OpenCV's matchShapes value computed on contours, this model greatly improves the accuracy of realistic face mask design. In the experiments, CloudCompare was used to compare the actual deformation of the 4D mask with the simulated deformation; the two surfaces differ by only a few millimeters. After trying several coating methods, Teflon tape was applied to the mask surface and colored with acrylic paint to complete the realistic face mask.

Parallel Abstract (English)


This study utilizes a deep learning model and 4D printing technology, combined with shape memory materials, to create three-dimensional gridshells of realistic human faces. The research employs a shape memory polymer as the 2D grid material, using a fused deposition modeling (FDM) 3D printer to print SMP55 (shape memory polymer 55) and PLA (polylactide). The pre-stress stored in SMP55 during printing serves as the 4D printing mechanism, so the 2D grid can bend upward, bend downward, shorten, and deform freely upon heating. This approach manufactures 3D gridshells of real human faces without any support material, saving both time and cost. The research also uses deep learning to automate the inverse design of the material layout in the 2D grid. The main idea of this study is to utilize deep learning for the inverse design of 4D realistic human face masks, where inverse design means inferring the material structure that produces a given target shape. Our previous study successfully inverse designed 2D grid material layouts for target human face shapes using a fully convolutional network (FCN). However, there was a significant disparity between its training set and real human faces, and it had no interface for reading camera photos as a testing set. As a result, the previous model could not be tested on real-life photos, and experiments showed that it was highly inaccurate at inverse designing human faces. Therefore, this study uses a multi-stage computer vision pipeline to turn camera photos into the model's testing set. Meanwhile, referring to facial measurement literature and public face databases, the material layouts and facial-feature proportions are varied to make the masks in the training set more similar to real human faces.
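The facial-proportion statistics described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the landmark indices follow Dlib's standard 68-point scheme, but the `toy` coordinates are hypothetical values standing in for the output of a Dlib shape predictor, and the ratio definitions are illustrative choices.

```python
# Sketch: simple facial-feature proportion ratios from 68-point landmarks.
# Indices follow Dlib's standard 68-point scheme (jaw 0-16, eyes 36-47,
# nose 27-35, mouth 48-67); the coordinates below are toy values.

def euclidean(p, q):
    """Euclidean distance between two (x, y) points."""
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def feature_ratios(landmarks):
    """Return proportion ratios that could be compared between a mask
    dataset and real-face statistics."""
    face_width = euclidean(landmarks[0], landmarks[16])   # jaw corner to jaw corner
    eye_span   = euclidean(landmarks[36], landmarks[45])  # outer eye corners
    nose_len   = euclidean(landmarks[27], landmarks[33])  # nose bridge to tip
    mouth_w    = euclidean(landmarks[48], landmarks[54])  # mouth corners
    return {
        "eye_span/face_width": eye_span / face_width,
        "nose_len/face_width": nose_len / face_width,
        "mouth_w/face_width":  mouth_w / face_width,
    }

# Toy landmarks: only the indices used above are populated.
toy = {0: (10, 60), 16: (110, 60), 36: (30, 55), 45: (90, 55),
       27: (60, 50), 33: (60, 80), 48: (45, 95), 54: (75, 95)}
ratios = feature_ratios(toy)
```

Aggregating such ratios over a generated mask dataset and over a public face database gives the kind of distribution comparison the study uses to verify that its masks resemble real faces.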
Lastly, the study uses the finite element method (FEM) to generate 30,000 corresponding deformed shapes as the dataset for the deep learning model, which is then used to train the fully convolutional network. The dataset uses the Dlib model to filter out masks that cannot be recognized as human faces, and uses the Dlib face detection model to measure facial-feature proportions. According to these statistics, the facial-feature proportions in this dataset are close to those of public face databases and real human faces. In the test case analysis, the trained FCN inverse designs 2D grids with a mean IoU above 0.98 and pixel accuracy above 0.99. Measuring image similarity, the SSIM between 3D gridshells deformed from FCN-output designs and the target 3D gridshells is 0.95 on average, with an average L2 norm error of 4.2. For real face photos taken by camera and AI-generated face photos, the SSIM is 0.8 and the average L2 norm error is 10.6. Compared with the previous study's inverse design of human face masks, which had an average SSIM of 0.59 and an average L2 norm error of 18.9, as well as the OpenCV matchShapes value computed on contours, this study's model significantly improves the accuracy of realistic human face mask design. The study also uses CloudCompare software to compare the actual experimental deformation of the mask with the simulated deformation, finding only a few millimeters of difference between the deformed and simulated surfaces. In addition, after trying various coating methods, the study ultimately applied Teflon tape to the mask surface and colored it with acrylic paint, completing the production of the realistic human face mask.
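The evaluation metrics quoted above (mean IoU, pixel accuracy, SSIM, L2 norm error) can be sketched with NumPy as below. This is a simplified illustration under stated assumptions, not the thesis's exact evaluation code: grids are represented as integer class maps, deformed surfaces as height maps, and the SSIM shown is the single-window global form rather than the sliding-window variant common in practice. All arrays are toy data.

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of grid cells whose material class matches the target."""
    return float(np.mean(pred == target))

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across material classes."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def l2_norm_error(a, b):
    """L2 norm of the difference between two deformed surfaces (height maps)."""
    return float(np.linalg.norm(a - b))

def global_ssim(a, b, data_range=1.0):
    """Single-window (global) SSIM between two images."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2)
    return float(num / den)

# Toy data: 2x2 material-class grids and a toy height map.
pred    = np.array([[0, 1], [1, 2]])   # hypothetical FCN-output classes
target  = np.array([[0, 1], [2, 2]])   # hypothetical ground-truth classes
heights = np.array([[0.1, 0.5], [0.7, 0.9]])
```

In the study these metrics separate design accuracy (IoU and pixel accuracy on the predicted 2D grid) from deformation accuracy (SSIM and L2 norm on the resulting 3D surface).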

