
Convolutional Neural Network Accelerator with Vector Quantization

Advisor: Shao-Yi Chien

Abstract


Deep neural networks (DNNs) have demonstrated impressive performance on many edge computer vision tasks, driving increasing demand for DNN accelerators in mobile and Internet of Things (IoT) devices. However, their massive power consumption and storage requirements make hardware design challenging. In this thesis, we introduce a DNN accelerator based on a model compression technique, vector quantization (VQ), which reduces the network model size and the computation cost simultaneously. Moreover, we design a specialized processing element (PE) with various SRAM bank configurations and dataflows, so that the accelerator supports different codebook and kernel sizes and maintains high utilization even when the number of input or output channels is small. Compared to the state-of-the-art, the proposed architecture achieves a 3.94x reduction in DRAM access and a 1.2x reduction in latency for batch-one inference.
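
To make the compression idea concrete, the sketch below shows vector quantization applied to a layer's weight matrix, together with the table-lookup trick that lets a VQ-encoded layer replace most multiplications with lookups and additions. This is a minimal illustration of the general technique, not the thesis's actual design: the function names, the flat k-means training loop, and the parameter choices (16 codewords, sub-vectors of length 4, 8-bit indices) are all assumptions made for the example.

```python
import numpy as np

def vector_quantize(weights, codebook_size=16, subvec_len=4, iters=20):
    """Compress a weight matrix with vector quantization (VQ).

    The weights are split into sub-vectors of length `subvec_len`;
    plain k-means learns a shared codebook, and each sub-vector is
    replaced by the 8-bit index of its nearest codeword. With
    float32 weights and subvec_len=4, storage per sub-vector drops
    from 128 bits to 8 bits, plus the small shared codebook.
    """
    vecs = weights.reshape(-1, subvec_len)
    rng = np.random.default_rng(0)
    codebook = vecs[rng.choice(len(vecs), codebook_size, replace=False)]
    for _ in range(iters):
        # assign every sub-vector to its nearest codeword
        dist = ((vecs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = dist.argmin(1)
        # move each codeword to the mean of its assigned sub-vectors
        for k in range(codebook_size):
            members = vecs[idx == k]
            if len(members):
                codebook[k] = members.mean(0)
    return codebook, idx.astype(np.uint8)

def vq_layer(x, codebook, idx_matrix, subvec_len=4):
    """Compute y = W @ x for a VQ-encoded weight matrix W.

    The table of partial dot products between input sub-vectors and
    codewords is built once per input and shared by every output
    channel, so each output needs only lookups and additions.
    """
    xs = x.reshape(-1, subvec_len)            # [n_sub, subvec_len]
    table = xs @ codebook.T                   # [n_sub, codebook_size]
    rows = np.arange(len(xs))[:, None]        # broadcast over channels
    return table[rows, idx_matrix.T].sum(0)   # [out_channels]

# Toy fully connected layer: 64 output channels, 128 inputs.
W = np.random.randn(64, 128).astype(np.float32)
x = np.random.randn(128).astype(np.float32)
codebook, idx = vector_quantize(W)
y_vq = vq_layer(x, codebook, idx.reshape(64, -1))
y_ref = W @ x                                  # uncompressed reference
print(np.abs(y_vq - y_ref).mean())             # approximation error
```

Because the lookup table depends only on the input and the codebook, its cost is amortized over all output channels, which mirrors how VQ can reduce computation cost as well as model size in an accelerator setting.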

