
具彈性架構的高速硬體倒傳遞及回饋型類神經網路設計

Design of High Speed Hardware Based Back Propagation and Recurrent Neural Networks with Flexible Structure

Advisor: 蔡孟伸

Abstract


An artificial neural network imitates the characteristics of biological neural networks with interconnected neurons and is inherently a parallel computing architecture. In dynamic real-time prediction and control problems, the computation speed of the network is a key issue. This thesis therefore develops a hardware neural network on an FPGA, fully exploiting the parallelism and high speed of hardware circuits to carry out both the learning and recall phases of the network. Based on a serial toroidal (ring) hardware architecture, the design performs the computations of three- and four-layer back-propagation and recurrent neural networks, and implements the activation function in hardware as a piecewise-linear approximation; both the sigmoid and the hyperbolic tangent functions are provided for the user to choose from. In the backward pass, the output-layer error is computed with a pipelined design to increase speed, while the hidden-layer error is computed with the serial toroidal architecture. With a segmented-computation scheme, the controller can synthesize a large network from a fixed array of processing elements. A Nios II processor serves as the controller that issues commands and network-structure parameters; by letting the controller distribute the weights and manage the control flow, the whole system does not need to be re-synthesized for a new network. Because the development is completed in hardware, it is more portable and faster than a software implementation and is suitable for low-end embedded systems. Experiments on curve fitting, battery discharge curves, and prediction problems verify that the proposed architecture meets the requirements of high efficiency and accuracy.
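The abstract does not give the segment boundaries used by the hardware piecewise-linear activation unit. The minimal C sketch below is only an illustrative software model: it assumes uniform breakpoints on [-8, 8] and derives tanh from the same table through the identity tanh(x) = 2*sigmoid(2x) - 1, which is one plausible way a single PWL block can serve both activation choices.

```c
/* Minimal C model of a piecewise-linear (PWL) activation unit.
 * The breakpoint table below (uniform segments on [-8, 8]) is an
 * illustrative assumption, not the thesis's actual hardware table. */
#include <math.h>
#include <stdio.h>

#define PWL_SEGMENTS 16          /* number of linear segments        */
#define PWL_XMIN    (-8.0)       /* input range covered by the PWL   */
#define PWL_XMAX     (8.0)

static double pwl_y[PWL_SEGMENTS + 1];   /* sigmoid sampled at breakpoints */

/* Precompute breakpoint values once (in hardware this would be a ROM). */
static void pwl_init(void)
{
    double step = (PWL_XMAX - PWL_XMIN) / PWL_SEGMENTS;
    for (int i = 0; i <= PWL_SEGMENTS; i++)
        pwl_y[i] = 1.0 / (1.0 + exp(-(PWL_XMIN + i * step)));
}

/* PWL sigmoid: clamp, locate the segment, interpolate linearly inside it. */
static double pwl_sigmoid(double x)
{
    if (x <= PWL_XMIN) return 0.0;
    if (x >= PWL_XMAX) return 1.0;
    double step = (PWL_XMAX - PWL_XMIN) / PWL_SEGMENTS;
    int    seg  = (int)((x - PWL_XMIN) / step);      /* segment index        */
    double frac = (x - PWL_XMIN) / step - seg;       /* position in segment  */
    return pwl_y[seg] + frac * (pwl_y[seg + 1] - pwl_y[seg]);
}

/* tanh reuses the same table via tanh(x) = 2*sigmoid(2x) - 1. */
static double pwl_tanh(double x)
{
    return 2.0 * pwl_sigmoid(2.0 * x) - 1.0;
}

int main(void)
{
    pwl_init();
    for (double x = -6.0; x <= 6.0; x += 2.0)
        printf("x=% .1f  pwl_sigmoid=%.4f (exact %.4f)  pwl_tanh=% .4f\n",
               x, pwl_sigmoid(x), 1.0 / (1.0 + exp(-x)), pwl_tanh(x));
    return 0;
}
```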

Parallel Abstract


An artificial neural network (ANN) is a highly parallel computing architecture that uses many interconnected neurons to simulate the behavior of biological neural networks. Processing speed is the key issue when a neural network is used for dynamic real-time forecasting and control. This thesis implements an ANN in hardware on an FPGA. The proposed architecture performs the forward and backward calculations of the learning process in parallel for three- or four-layer back-propagation neural networks and recurrent neural networks. The activation function is implemented in hardware as a piecewise-linear (PWL) approximation; both the sigmoid function and the hyperbolic tangent function are provided, and the user can choose between them. The backward calculation of the output-layer error uses a pipelined structure to improve efficiency, while the backward calculation of the hidden-layer error reuses the toroidal structure employed in the forward calculation in order to save hardware resources. Segmented calculation is also implemented: with it, a larger ANN structure can be formed by hardware with a limited number of processing elements (PEs). Since the number of neurons in the synthesized ANN can be changed dynamically, the proposed structure is easily applied to different applications, especially in low-end embedded systems. Finally, three cases, namely curve fitting, battery discharge curves, and prediction problems, were used to evaluate the performance and validate the results.
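The segmented calculation can be pictured in software as time-multiplexing a fixed PE array over a layer that has more neurons than PEs. The C sketch below is only a rough analogue under assumed names and sizes (NUM_PE, layer_forward, the example weights); the actual design does this in parallel hardware, with the inputs circulating on the toroidal ring rather than being reread in an inner loop.

```c
/* Rough software analogue of segmented (time-multiplexed) computation:
 * a fixed array of NUM_PE processing elements computes a layer with more
 * neurons than PEs in several passes.  Sizes, names, and the activation
 * below are illustrative assumptions, not the thesis RTL. */
#include <math.h>
#include <stdio.h>

#define NUM_PE 4                            /* fixed PE count (assumed) */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* One layer: out[j] = sigmoid( sum_i w[j*n_in + i] * in[i] ).
 * The outer loop walks over segments of NUM_PE neurons; within a segment
 * the hardware PEs would work in parallel. */
static void layer_forward(int n_in, int n_out,
                          const double *w, const double *in, double *out)
{
    for (int base = 0; base < n_out; base += NUM_PE) {           /* segment loop   */
        int block = (n_out - base < NUM_PE) ? (n_out - base) : NUM_PE;
        for (int pe = 0; pe < block; pe++) {                     /* parallel in HW */
            double acc = 0.0;
            for (int i = 0; i < n_in; i++)                       /* ring pass      */
                acc += w[(base + pe) * n_in + i] * in[i];
            out[base + pe] = sigmoid(acc);
        }
    }
}

int main(void)
{
    /* 3 inputs -> 6 hidden neurons: two passes of the 4-PE array. */
    double w[6 * 3] = {  0.1, -0.2,  0.3,    0.4,  0.1, -0.1,
                        -0.3,  0.2,  0.2,    0.0,  0.5, -0.4,
                         0.2,  0.2,  0.1,   -0.1, -0.1,  0.3 };
    double in[3]  = { 1.0, 0.5, -1.0 };
    double out[6];
    layer_forward(3, 6, w, in, out);
    for (int j = 0; j < 6; j++)
        printf("neuron %d -> %.4f\n", j, out[j]);
    return 0;
}
```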

Cited By


許毅冠 (2014). 太陽光電系統發電量預測模型之實作 [Master's thesis, National Taipei University of Technology]. Airiti Library. https://doi.org/10.6841/NTUT.2014.00104
劉冠廷 (2014). 應用超學習增進傳統掛袋法準確度之研究 [Master's thesis, National Formosa University]. Airiti Library. https://doi.org/10.6827/NFU.2014.00042
詹雅宇 (2011). 具自由回饋節點的高速硬體倒傳遞及回饋型類神經網路設計 [Master's thesis, National Taipei University of Technology]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0006-1808201112271000
