
A Reliable Simulation Framework for ReRAM-based Deep Learning Accelerators

DL-RSIM: A Simulation Framework to Enable Reliable ReRAM-based Accelerators for Deep Learning

Advisor: Chia-Lin Yang (楊佳玲)

Abstract


ReRAM-based deep learning accelerators offer a promising way to greatly improve the energy efficiency of neural network systems, but the electrical characteristics of ReRAM and its crossbar structure also make these accelerators highly sensitive to errors. To build stable and trustworthy ReRAM-based accelerators, a simulation system is needed to precisely study how non-ideal circuit and device properties affect accuracy. In this thesis, we propose a flexible simulation framework, DL-RSIM, to address this problem. DL-RSIM simulates, according to a user-defined hardware configuration, the errors that a deep learning workload may incur on a ReRAM-based accelerator. DL-RSIM can explore a wide range of impact factors and can be integrated with any TensorFlow-based deep learning model. Using three of the most representative convolutional neural networks as case studies, we demonstrate that DL-RSIM can help chip designers choose the most suitable design options and can help computer architects design reliability optimization techniques.
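
The user-defined hardware configuration mentioned above would typically describe the crossbar arrays and peripheral circuits being simulated. The following is a minimal sketch of what such a configuration might look like in Python; every field name and default value is an illustrative assumption rather than DL-RSIM's actual interface.

    # Hypothetical description of a ReRAM crossbar configuration.
    # All names and values are illustrative; they are not DL-RSIM's real API.
    from dataclasses import dataclass

    @dataclass
    class CrossbarConfig:
        crossbar_rows: int = 128     # wordlines per crossbar array
        crossbar_cols: int = 128     # bitlines per crossbar array
        bits_per_cell: int = 2       # precision of each ReRAM cell
        adc_bits: int = 6            # ADC resolution at each bitline
        device_sigma: float = 0.05   # relative variation of programmed conductance

    config = CrossbarConfig(adc_bits=5)   # e.g., study a lower-resolution ADC
    print(config)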

Abstract (English)


Memristor-based deep learning accelerators provide a promising solution to improve the energy efficiency of neuromorphic computing systems. However, the electrical properties and crossbar structure of memristors make them sensitive to errors. To enable reliable memristor-based accelerators, a simulation platform is needed to precisely study the impact of non-ideal circuit and device properties on the inference accuracy. In this paper, we propose a flexible simulation framework, DL-RSIM, to tackle this challenge. DL-RSIM simulates the error rates of every sum-of-products computation in the memristor-based accelerator according to user-defined hardware configurations and injects the errors into the targeted TensorFlow-based neural network model. A rich set of reliability impact factors is explored by DL-RSIM, and it can be incorporated with any deep learning neural network implemented in TensorFlow. Using three representative convolutional neural networks as case studies, we show that DL-RSIM can guide chip designers to choose a reliability-friendly design option and can help computer architects to design reliability optimization techniques.
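
As a concrete illustration of the error-injection idea described above, the sketch below perturbs the sum-of-products outputs of a convolution layer with random noise whose magnitude stands in for the simulated error rate. The Gaussian noise model and the error_sigma parameter are assumptions made for this example; DL-RSIM derives its error statistics from the hardware configuration instead.

    # Sketch of injecting computation errors into a TensorFlow convolution.
    # The Gaussian noise model and error_sigma are illustrative assumptions,
    # not the error statistics that DL-RSIM actually computes.
    import tensorflow as tf

    class NoisyConv2D(tf.keras.layers.Conv2D):
        def __init__(self, *args, error_sigma=0.02, **kwargs):
            super().__init__(*args, **kwargs)
            self.error_sigma = error_sigma

        def call(self, inputs):
            outputs = super().call(inputs)   # ideal sum-of-products result
            noise = tf.random.normal(
                tf.shape(outputs),
                stddev=self.error_sigma * (tf.math.reduce_std(outputs) + 1e-8))
            return outputs + noise           # perturbed result seen at inference

    # Usage: replace a standard convolution with the noisy variant and observe
    # how inference accuracy degrades as error_sigma grows.
    layer = NoisyConv2D(filters=64, kernel_size=3, padding="same", error_sigma=0.05)
    y = layer(tf.random.normal([1, 32, 32, 3]))
    print(y.shape)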

Keywords

Deep learning, ReRAM accelerator

