Neural networks have been widely discussed along with the development of machine learning. Compared with computation on general-purpose computing platforms, hardware implementation offers relatively low power consumption and high efficiency. Besides conventional fixed-point arithmetic, stochastic computing is another commonly adopted implementation approach; by its nature, stochastic computing trades longer computation time for lower area and power than fixed-point arithmetic, so it is also regarded as a solution for resource-limited devices. In this thesis, we adopt stochastic computing as the primary implementation method for neural networks and propose a bit-stream compression technique to reduce the long latency that stochastic computing incurs. In addition, based on LeNet-5, a classic convolutional neural network (CNN) architecture, we explore a design methodology for implementing a stochastic-computing CNN on a field-programmable gate array (FPGA), and examine the benefits and trade-offs of applying bit-stream compression to stochastic computing.
Stochastic computing (SC) is an unconventional arithmetic method that represents values as random bit-streams and computes with bit-wise operations. These features give SC many advantages in hardware implementation, such as simple functional units, high area efficiency, low power consumption, and high error tolerance; SC is therefore also considered a solution for resource-limited or portable devices. In this work, we propose a new encoding method for SC called "bit-stream compression" to reduce computing latency, and introduce the corresponding operations for the new encoding. With bit-stream compression, we achieve a 3x speedup over the original SC method. We also propose a design methodology for applying our SC encoding to convolutional neural networks, implement it in hardware, and verify our design on an FPGA.
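To make the "random bit-streams with bit-wise operations" idea concrete, the sketch below shows standard unipolar SC multiplication in Python: two values in [0, 1] are encoded as Bernoulli bit-streams, a bit-wise AND approximates their product, and the fraction of 1s decodes the result. This illustrates only the classic SC baseline, not the bit-stream compression encoding proposed in this work; all function names here are illustrative.

```python
import random

def to_bitstream(p, length, rng):
    # Unipolar SC encoding: each bit is 1 with probability p (p assumed in [0, 1]).
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(stream_a, stream_b):
    # Bit-wise AND of two independent unipolar streams approximates the product
    # of the values they encode, since P(a AND b) = P(a) * P(b).
    return [a & b for a, b in zip(stream_a, stream_b)]

def decode(stream):
    # The encoded value is the fraction of 1s in the stream.
    return sum(stream) / len(stream)

rng = random.Random(0)
n = 4096  # longer streams give lower estimation error
x = to_bitstream(0.5, n, rng)
y = to_bitstream(0.25, n, rng)
est = decode(sc_multiply(x, y))  # close to 0.5 * 0.25 = 0.125
```

The long stream length needed for an accurate estimate is exactly the latency cost that the bit-stream compression technique in this thesis aims to reduce.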