In recent years, deep learning has been widely applied across many fields and has achieved outstanding results. In deep learning, how the model is architected is critically important; however, designing an excellent neural network architecture requires not only rich professional knowledge and repeated experimentation, but also substantial experience in the domain of the target task. To automate the design of network architectures, neural architecture search (NAS) has therefore become a popular research topic, yet most existing studies place extremely high demands on hardware. This thesis approaches the problem from the perspective of practical application and proposes a differentiable architecture search whose search space is composed of convolution cells and pooling cells, while reducing the number of parameters and the amount of computation during the search so that it fits hardware environments with limited budgets. Experimental results show that, on a specific ten-class subset of the ImageNet dataset, the neural network architecture found by NAS achieves 6% higher image-classification accuracy than the AlexNet baseline, while also saving the effort and time of manually designing a network architecture by hand.
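The core idea of a differentiable search over candidate cells can be illustrated with a minimal sketch: each edge of a cell computes a softmax-weighted sum of candidate operations, and the weights (architecture parameters) are learned jointly with the network, as in DARTS-style methods. The operation names and placeholder functions below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def softmax(alpha):
    """Numerically stable softmax over architecture parameters."""
    e = np.exp(alpha - alpha.max())
    return e / e.sum()

# Candidate operations for one edge of a cell. Real NAS would use
# trained convolution kernels; these are stand-in placeholders.
def conv3x3(x):   return 0.5 * x           # placeholder for a conv cell op
def max_pool(x):  return np.maximum(x, 0)  # placeholder for a pooling cell op
def identity(x):  return x                 # skip connection

OPS = [conv3x3, max_pool, identity]

def mixed_op(x, alpha):
    """Softmax-weighted mixture of candidate ops: because the mixture is
    continuous in alpha, the search becomes differentiable and alpha can
    be optimized by gradient descent alongside the network weights."""
    w = softmax(alpha)
    return sum(wi * op(x) for wi, op in zip(w, OPS))

x = np.array([-1.0, 2.0])
alpha = np.array([0.1, 0.2, 0.7])  # architecture parameters for this edge
y = mixed_op(x, alpha)

# After the search, the mixture is discretized: only the operation with
# the largest architecture parameter is kept in the final architecture.
best = OPS[int(np.argmax(alpha))]
```

Restricting the candidate set (here three operations) and sharing weights across the mixture is one way the parameter count and computation of the search can be kept low, in the spirit of the hardware-budget constraint described above.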