
Fault Modeling and Testing of Spiking Neural Network Chips

Advisor: 李建模

Abstract


There are now many examples of artificial intelligence being combined with the Internet of Things (IoT). However, machine learning requires intensive computation, while IoT devices are usually power constrained, so enabling IoT devices to perform machine learning locally is a challenging problem. Spiking neural networks (SNNs) transmit information as spike signals, and hardware implementations of SNNs consume less power than hardware implementations of today's mainstream neural networks. However, because SNNs are inherently stochastic and fault tolerant, testing SNN chips is very difficult. In this thesis, we propose seven fault models for SNNs, derived from the way neurons and synapses in an SNN operate. We also propose a test flow dedicated to SNN chips. This test flow treats the chip's output as a distribution, rather than as a set of specific values as in conventional testing. Experimental results show that, although neural networks are fault tolerant, two of the fault models still have a large impact on SNN chips. For a chip targeting handwritten digit recognition, fault simulation shows that chips passing our test flow achieve an accuracy of 88.90%, which is the same as that of fault-free chips once random effects are taken into account.
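The abstract does not enumerate the seven fault models, so the sketch below is purely illustrative: it injects two hypothetical behavior-level faults (a "dead neuron" that never fires and a synapse stuck at its maximum weight) into a leaky integrate-and-fire style neuron. Both the neuron model and the fault names are assumptions made for this sketch and are not taken from the thesis.

import numpy as np

def lif_neuron(spikes_in, weights, threshold=1.0, leak=0.9, fault=None):
    """Behavioral leaky integrate-and-fire neuron with optional fault injection.

    spikes_in : (T, N) binary array of input spike trains
    weights   : (N,) synaptic weights
    fault     : None, or a (kind, index) tuple; the kinds used here are hypothetical
    """
    if fault is not None and fault[0] == "saturated_synapse":
        weights = weights.copy()
        weights[fault[1]] = weights.max()   # synapse stuck at its maximum weight
    membrane = 0.0
    spikes_out = []
    for t in range(spikes_in.shape[0]):
        membrane = leak * membrane + spikes_in[t] @ weights
        fired = membrane >= threshold
        if fault is not None and fault[0] == "dead_neuron":
            fired = False                   # neuron never emits a spike
        if fired:
            membrane = 0.0                  # reset the membrane after firing
        spikes_out.append(int(fired))
    return np.array(spikes_out)

# Compare output spike counts of a fault-free and a faulty instance of the same neuron
rng = np.random.default_rng(0)
inputs = (rng.random((100, 4)) < 0.3).astype(int)
w = np.array([0.4, 0.3, 0.2, 0.1])
print(lif_neuron(inputs, w).sum(), lif_neuron(inputs, w, fault=("dead_neuron", 0)).sum())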

Parallel Abstract


Nowadays, many IoT devices are integrated with AI. However, machine learning needs intensive computation, which leads to high power consumption, so it is a challenge to perform machine learning on IoT devices locally. The spiking neural network (SNN) is a promising low-power neural network that can be implemented in asynchronous circuits. However, SNN chips are hard to test because they are inherently probabilistic and fault tolerant, and so far there has been no fault model or test method well suited to them. In this work, we propose seven behavioral fault models for SNNs based on the functions of neurons and synapses. We also propose a test method that treats the output response as a distribution rather than as specific values. Experimental results on the MNIST dataset show that, although SNNs are fault tolerant, two fault models are still critical for SNN chips. For this application, the accuracy of chips that pass our test is 88.90%, which is indistinguishable from that of good chips once the effects of random seeds are considered.
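One way to make the "output as a distribution" idea concrete is to compare the device under test (DUT) against a fault-free reference using confidence intervals on classification accuracy, so that two good chips run with different random seeds are not rejected for small accuracy differences. The sketch below is an assumption of how such a check could look; the statistic, threshold, and function names are illustrative and are not the thesis's actual test flow.

import math

def accuracy_interval(correct, total, z=1.96):
    """95% normal-approximation confidence interval for a classification accuracy."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p - half, p + half

def passes_test(dut_correct, golden_correct, total):
    """Pass the DUT if its accuracy interval overlaps the golden chip's interval.

    Both chips classify the same `total` test images; because SNN outputs are
    stochastic, identical accuracies are not expected even from good chips.
    """
    dut_lo, dut_hi = accuracy_interval(dut_correct, total)
    gold_lo, gold_hi = accuracy_interval(golden_correct, total)
    return dut_hi >= gold_lo and gold_hi >= dut_lo

# Illustrative numbers: a DUT at 88.90% versus a golden run at 89.20% on 10,000 images
print(passes_test(8890, 8920, 10000))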

