
融合二維與三維卷積類神經網路技術之汽車零件的自動偵測與分類

Automatic Detection and Classification of Car Parts Combining 2D and 3D Convolutional Neural Network Technology

Advisors: 林春宏, 黃馨逸

Abstract


Many existing car simulation systems are composed of numerous car parts. These parts have been manually designed by experts as triangular meshes forming the part frameworks, and texture images have been applied to the mesh facets to complete the three-dimensional car models. However, if such a system is to make further, physics-based use of the material properties of the car parts, it is constrained or must be rebuilt. This study combines the facet and texture data of existing car models in an automated way, and then builds a multi-module deep-learning network architecture, creating an intelligent automatic identification system for the materials of triangular-mesh car parts and thereby solving the simulation systems' existing problems.

Starting from a database of existing 3D car models, this study fuses 2D and 3D convolutional neural network (CNN) techniques to automatically detect and classify car parts. First, parts are detected and segmented in the 2D texture images; the segmented parts are then classified to identify the exact part name and assign it an identification number. Using traditional image-processing techniques combined with multiple methods, and taking into account the influence of the texture image's background color and of the parts themselves, this study proposes an automated segmentation technique for car parts. The technique comprises a simple segmentation pipeline and a fine segmentation pipeline. Part recognition on the 2D car texture images uses the third version of the YOLO (you only look once, YOLOv3) neural network model to identify the part categories in the texture images. Finally, based on the part identification numbers that the 3D car model's facets map to in the texture images, the materials of the 3D car model's parts are identified. The 3D part classification adopts the PointNet architecture, taking point-cloud data as its input to recognize the category label of each point that makes up the 3D model.

In terms of performance, the two automated segmentation pipelines achieve good results on texture images under different conditions. In the 2D texture-image part-recognition experiments, multiple sets of automatically generated car-part texture images were used for training; the single-category texture-image dataset yielded the best part-recognition results. In the 3D part-classification experiments, transfer learning from a large dataset achieved over 60% accuracy in recognizing the point-cloud categories of 3D models. In the future, the 3D-model part-labeling process proposed in this study could be applied to simulation systems in fields such as medical technology, military training, aerospace, and disaster response, to recognize the part categories of the many objects in those systems.
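The abstract does not spell out the internals of the simple segmentation pipeline. As an illustration only, one common approach consistent with the description — thresholding against the texture image's background color and then labeling connected components — could look like the sketch below; the function name, the tolerance parameter `tol`, and the per-channel difference test are assumptions, not the thesis's actual method.

```python
from collections import deque

def segment_parts(image, background, tol=10):
    """Label connected non-background regions in an RGB texture image.

    image: 2D list of (r, g, b) tuples; background: the (r, g, b)
    background color. Returns a 2D label map (0 = background,
    1..N = candidate parts).
    """
    h, w = len(image), len(image[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0

    def is_foreground(px):
        # A pixel belongs to a part if any channel differs enough
        # from the background color.
        return any(abs(c - b) > tol for c, b in zip(px, background))

    for y in range(h):
        for x in range(w):
            if labels[y][x] == 0 and is_foreground(image[y][x]):
                # Flood-fill (BFS, 4-connectivity) a new part region.
                next_label += 1
                labels[y][x] = next_label
                queue = deque([(y, x)])
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and labels[ny][nx] == 0
                                and is_foreground(image[ny][nx])):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels
```

Each labeled region could then be cropped and passed to the YOLOv3 classifier described above; the fine pipeline would presumably add further post-processing that this sketch omits.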

Parallel Abstract (English)


Many car simulation systems are composed of numerous car parts. These parts have been manually designed by experts as triangular meshes forming the part frameworks, and texture images have been applied to the mesh facets to complete the three-dimensional car models. However, if such a system is to make further, physics-based use of the material properties of the car parts, it is restricted or must be rebuilt. In this study, the existing model facets and texture data are combined in an automated way, and a multi-module deep-learning network architecture is then built to create an automatic identification system for the materials of triangular-mesh car parts, solving the simulation systems' existing problems.

Starting from a database of existing 3D car models, this study combines 2D and 3D convolutional neural network (CNN) techniques to automatically detect and classify car parts. First, parts are detected and segmented in the two-dimensional texture images, and the segmented parts are then classified to identify the exact part names and assign identification numbers. This work uses traditional image-processing technology, combined with multiple processing methods and considering the influence of the texture image's background color and of the parts themselves, to propose an automated segmentation technique for car parts. The technique is divided into a simple segmentation pipeline and a fine segmentation pipeline. Part recognition on the two-dimensional car texture images uses the third version of the YOLO (you only look once, YOLOv3) neural network model to identify the part categories in the texture images. Finally, according to the part identification numbers corresponding to the three-dimensional car model's texture images, the materials of the three-dimensional model's parts are identified. The classification of 3D model parts is based on PointNet, which takes point-cloud data as input to identify the category label of each point constituting the 3D model.

In terms of the performance of the automated segmentation of car parts, the two pipelines achieve good results on texture images under different conditions. In the two-dimensional texture-image part-recognition experiments, multiple sets of automatically generated car-part texture images were used for training, and the single-category texture-image dataset was the training set with the best part-category recognition results. In the 3D part-classification experiments, with transfer learning from large datasets, the accuracy of identifying 3D-model point-cloud categories exceeds 60%. In the future, the 3D-model part-labeling process proposed in this research could be applied to simulation systems in the fields of medical technology, military training, aerospace technology, and disaster response, to identify the part categories of the various objects in those systems.
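The step that transfers 2D part identification numbers onto the 3D model is described only at a high level above. Under the common assumption that each mesh face carries UV texture coordinates, a minimal sketch of that mapping could sample the segmented label map at each face's three UV vertices and take a majority vote; the helper name `label_faces`, the voting rule, and the v-axis orientation are illustrative assumptions, not the thesis's actual procedure.

```python
def label_faces(faces_uv, label_map):
    """Assign each triangular face a part ID by sampling the segmented
    texture (a 2D label map) at the face's three UV vertices.

    faces_uv: per face, a list of three (u, v) pairs with u, v in [0, 1].
    label_map: 2D list of part IDs produced by the texture segmentation.
    Returns one part ID per face (majority vote over the vertices).
    """
    h, w = len(label_map), len(label_map[0])
    face_labels = []
    for uvs in faces_uv:
        votes = {}
        for u, v in uvs:
            # Clamp UVs into the map; v = 0 is taken as the top row
            # (an assumption -- UV origin conventions vary by format).
            x = min(int(u * w), w - 1)
            y = min(int(v * h), h - 1)
            part = label_map[y][x]
            votes[part] = votes.get(part, 0) + 1
        # The part ID seen by most of the face's vertices wins.
        face_labels.append(max(votes, key=votes.get))
    return face_labels
```

The resulting per-face part IDs would then supply the per-point category labels that the PointNet classifier is trained against.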

