The interior design industry is indispensable today. Clients often want to preview the actual appearance of their home before renovation begins, so interior designers provide photorealistic renderings, produced with modeling software, as references. The modeling workflow is to first design a floor plan, extrude it into an untextured white model, develop that white model into a 3D model with materials, colors, and lighting, and finally use rendering software to turn the 3D model into a photorealistic image. Going from the white model to the photorealistic image, however, consumes a great deal of time and visual-design effort. Artificial intelligence technology has now matured, and if photorealistic renderings could be produced with AI, much of that time and skill requirement could be saved. In this thesis we use generative adversarial networks to learn interior design modeling and convert white models directly into 3D photorealistic images. We first collect a large set of indoor images drawn with SketchUp, train on them with Pix2pix and CycleGAN while adjusting the number of training iterations as needed, select the best results, and compare them with images rendered by V-Ray. The experimental results show that Pix2pix performs well on the white-model-to-rendering translation, whereas CycleGAN is not suitable for this task. Comparing the best generated results with the V-Ray renderings shows that the images produced by the generative adversarial network can indeed lay out a room automatically, are generated very quickly, and have a clear overall structure, but their details are less pronounced than those of the V-Ray renderings. Through the experiments in this thesis, we hope to convert white models into photorealistic images quickly, easing the demands on time, skill, and cost so that people without design training can also obtain the images they need.
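To illustrate the paired image-to-image translation setup described above, the following is a minimal sketch of a Pix2pix-style training step in PyTorch. It is an assumed, simplified setup rather than the thesis implementation: the placeholder networks stand in for the usual U-Net generator and PatchGAN discriminator, and the data pipeline, architectures, and hyperparameters actually used in the experiments are not specified here.

# Minimal Pix2pix-style training step (illustrative sketch, not the thesis code).
# Assumes paired tensors: x = white-model image, y = V-Ray rendering, both (N, 3, 256, 256) in [-1, 1].
import torch
import torch.nn as nn

# Placeholder generator: a couple of conv layers standing in for the usual U-Net.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
)

# Placeholder PatchGAN-style discriminator on the concatenated (input, output) pair.
discriminator = nn.Sequential(
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss = nn.BCEWithLogitsLoss()
l1_loss = nn.L1Loss()
lambda_l1 = 100.0  # L1 weight from the original Pix2pix paper

def train_step(x, y):
    """One training step on a paired batch (white model x, rendering y)."""
    fake_y = generator(x)

    # Discriminator: push real (x, y) pairs toward 1 and fake (x, G(x)) pairs toward 0.
    d_opt.zero_grad()
    real_score = discriminator(torch.cat([x, y], dim=1))
    fake_score = discriminator(torch.cat([x, fake_y.detach()], dim=1))
    d_loss = (adv_loss(real_score, torch.ones_like(real_score)) +
              adv_loss(fake_score, torch.zeros_like(fake_score)))
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the rendering (L1 term).
    g_opt.zero_grad()
    fake_score = discriminator(torch.cat([x, fake_y], dim=1))
    g_loss = (adv_loss(fake_score, torch.ones_like(fake_score)) +
              lambda_l1 * l1_loss(fake_y, y))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for a real (white model, rendering) pair.
x = torch.rand(1, 3, 256, 256) * 2 - 1
y = torch.rand(1, 3, 256, 256) * 2 - 1
print(train_step(x, y))

The key design point illustrated here is the combined objective: the adversarial term encourages realistic texture and lighting, while the weighted L1 term keeps the generated image aligned with the paired V-Ray target, which is why Pix2pix relies on paired white-model/rendering data, unlike CycleGAN.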