
Bit-stream Compression of Stochastic Computing and its Implementation on Convolution Deep Neural Network

Advisor: 江介宏

Abstract


Neural networks have been widely discussed along with the development of machine learning. Compared with computation on general-purpose computer platforms, hardware implementation is a relatively low-power and high-efficiency option. In hardware implementation, besides conventional fixed-point arithmetic, stochastic computing is another commonly adopted approach: owing to its characteristics, longer computation time can be traded for lower area and power than fixed-point arithmetic, so stochastic computing is also regarded as a solution for resource-limited devices. In this thesis, we adopt stochastic computing as the main implementation method for neural networks and propose a bit-stream compression technique to reduce the high latency incurred by stochastic computing. In addition, based on the classic convolutional neural network (CNN) architecture LeNet-5, we investigate a design methodology for implementing an SC-based CNN on a field-programmable gate array (FPGA), and examine the benefits and trade-offs of applying the bit-stream compression technique to stochastic computing.
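As background for the trade-off the abstract describes (longer streams buy accuracy at the cost of latency), here is a minimal Python sketch of standard unipolar stochastic computing, not the thesis's compressed encoding: a value in [0, 1] is encoded as a random bit-stream whose fraction of 1s equals the value, and multiplication reduces to a per-bit AND. The names `to_stream`, `sc_multiply`, and `decode` are illustrative, not taken from the thesis.

```python
import random

def to_stream(p, length, seed=None):
    """Encode a probability p in [0, 1] as a random bit-stream (unipolar SC)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(a, b):
    """Unipolar SC multiplication: one AND gate per bit pair."""
    return [x & y for x, y in zip(a, b)]

def decode(stream):
    """Estimate the encoded value as the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

# 0.5 * 0.5 should decode to roughly 0.25; the estimate tightens
# as the stream length (and hence the latency) grows.
a = to_stream(0.5, 4096, seed=1)
b = to_stream(0.5, 4096, seed=2)
prod = decode(sc_multiply(a, b))
```

This is exactly why SC hardware is small (a multiplier is a single AND gate) but slow (thousands of clock cycles per value), which is the latency that bit-stream compression targets.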

English Abstract


Stochastic computing (SC) is an unconventional arithmetic method that represents values by random bit-streams and computes with bit-wise operations. Thanks to these features, SC offers many advantages for hardware implementation, such as simple logic, high area efficiency, low power, and high error tolerance; SC is therefore also considered a solution for resource-limited or portable devices. In this work, we propose a new encoding method for SC called "bit-stream compression" to improve computing latency, and we introduce the corresponding operations for the new encoding. With bit-stream compression, we achieve a 3x speedup over the original SC method. We also propose a design methodology for applying our SC encoding to convolutional neural networks, implement it as a hardware design, and finally verify the design on an FPGA.
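The abstract mentions applying the SC encoding to convolutional layers. As general background only (this illustrates conventional SC, not the thesis's compression encoding), accumulation in SC neural networks is commonly done with a multiplexer-based scaled addition, which keeps the result inside the representable [0, 1] range. The function names below are illustrative assumptions.

```python
import random

def to_stream(p, length, seed=None):
    """Encode a probability p in [0, 1] as a random bit-stream (unipolar SC)."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(length)]

def scaled_add(a, b, select_seed=None):
    """Conventional SC scaled addition: a 2-to-1 MUX driven by a
    0.5-probability select stream computes (a + b) / 2 bit by bit."""
    rng = random.Random(select_seed)
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

# (0.8 + 0.2) / 2 should decode to roughly 0.5.
a = to_stream(0.8, 8192, seed=1)
b = to_stream(0.2, 8192, seed=2)
s = sum(scaled_add(a, b, select_seed=3)) / 8192
```

The MUX costs almost no area, but because every addition halves the scale, deep accumulations (as in a convolution) lose precision unless the streams are lengthened, which again raises latency, the cost that the proposed encoding aims to reduce.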

