
高效率倒傳遞類神經網路的平行學習架構設計

Highly Efficient Back-Propagation Neural Network with Parallel Learning Architecture Design

Advisor: 蔡孟伸

Abstract


An artificial neural network is a distributed architecture in which parallel, interconnected neurons perform nonlinear operations. Because software implementations on von Neumann processors are time-consuming, this thesis develops a hardware neural network on an FPGA, exploiting the parallelism and speed of logic circuits to carry out both the learning and recall phases of the network. The forward pass of the back-propagation network is built on a serial toroidal (ring) hardware architecture, and the activation function is realized in hardware as a piecewise-linear function, replacing earlier lookup-table designs. In the backward pass, the output-layer error is computed in a pipelined design for higher speed, while the hidden-layer error reuses the serial ring architecture. The central question in the hardware design is how to complete as many computations as possible per clock cycle at the lowest cost and highest precision. To raise speed, the proposed system therefore places an activation function inside every processing element and merges the forward and backward passes through parallel operation; to lower cost, it simplifies the activation-function circuit and computes the back-propagation network in segments. Furthermore, the system needs no re-planning or redesign for different network topologies, which improves the portability and expandability of the hardware. Experiments on iris classification, curve fitting, and prediction problems verify that the proposed architecture meets the requirements of high efficiency and precision.
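As a concrete illustration of the piecewise-linear activation described above: the thesis does not publish its segment breakpoints or slopes, so the minimal Python sketch below assumes the well-known PLAN approximation of the logistic sigmoid, whose slopes are powers of two and therefore map to shift-and-add logic with no lookup-table ROM or multiplier.

```python
# Minimal sketch of a piecewise-linear (PWL) sigmoid. Segment parameters
# follow the PLAN approximation (an assumption; the thesis does not list
# its own): slopes of 2^-2, 2^-3 and 2^-5 reduce to shift-and-add logic,
# so no lookup-table ROM and no multiplier are needed.
import math

def pwl_sigmoid(x: float) -> float:
    """PWL approximation of the logistic sigmoid 1 / (1 + exp(-x))."""
    a = abs(x)
    if a >= 5.0:
        y = 1.0
    elif a >= 2.375:
        y = 0.03125 * a + 0.84375   # slope 2^-5
    elif a >= 1.0:
        y = 0.125 * a + 0.625       # slope 2^-3
    else:
        y = 0.25 * a + 0.5          # slope 2^-2
    return y if x >= 0.0 else 1.0 - y  # symmetry: s(-x) = 1 - s(x)

if __name__ == "__main__":
    for x in (-4.0, -1.0, 0.0, 0.5, 2.0, 6.0):
        print(f"x={x:+.1f}  pwl={pwl_sigmoid(x):.4f}  "
              f"exact={1.0 / (1.0 + math.exp(-x)):.4f}")
```

The maximum error of this particular approximation is roughly 0.02, which is usually tolerable for fixed-point hardware networks; placing one such unit inside every processing element is what allows all neurons of a layer to activate in the same clock cycle.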

English Abstract


The main characteristic of artificial neural networks (ANNs) is highly parallel arithmetic: interconnected neurons perform nonlinear operations. This thesis implements an ANN in hardware on an FPGA. The proposed architecture performs the forward and backward calculations of the back-propagation learning process in parallel. The activation function is implemented in hardware as a piecewise-linear (PWL) function. The backward calculation of the output-layer error is pipelined, which improves efficiency; the backward calculation of the hidden-layer error uses the same toroidal structure as the forward calculation. By embedding an activation function in every processing element (PE) and combining the forward and backward calculations through parallel operation, the proposed structure runs twice as fast as the previous design. To reduce cost, the proposed ANN structure can be altered dynamically during calculation. The proposed architecture is highly expandable and highly portable. Finally, three cases, namely iris classification, curve fitting, and a prediction problem, were used to evaluate the performance and validate the results. The results show that the learning performance of the proposed ANN structure is as expected.
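To make the toroidal structure easier to picture, the sketch below simulates in Python how a ring of processing elements computes one layer's forward pass: each PE stores one neuron's weight row, the circulating inputs advance one ring position per clock, and after as many cycles as there are inputs every PE holds its complete weighted sum and applies its locally embedded activation. This is a generic systolic-ring sketch under those assumptions, not the thesis's register-transfer-level design, and all names are illustrative.

```python
# Generic software model of a systolic ring (toroidal) forward pass.
# PE j owns weight row j; (index, value) pairs of the input circulate,
# so each multiply-accumulate uses whatever input is at PE j's port.

def ring_forward(weights, bias, x, activation):
    n_neurons, n_inputs = len(weights), len(x)
    ring = [(k, x[k]) for k in range(n_inputs)]  # slot j feeds PE j
    acc = [0.0] * n_neurons                      # one accumulator per PE
    for _ in range(n_inputs):                    # one iteration = one clock
        for j in range(n_neurons):               # all PEs MAC in parallel
            k, v = ring[j % n_inputs]
            acc[j] += weights[j][k] * v
        ring = ring[1:] + ring[:1]               # rotate by one position
    # each PE applies its locally embedded activation function
    return [activation(acc[j] + bias[j]) for j in range(n_neurons)]

if __name__ == "__main__":
    W = [[0.2, -0.5, 0.1],
         [0.7,  0.3, -0.4]]
    b = [0.0, 0.1]
    x = [1.0, 2.0, 3.0]
    act = lambda s: min(1.0, max(0.0, 0.25 * s + 0.5))  # crude 1-segment PWL
    print(ring_forward(W, b, x, act))  # -> [0.375, 0.55] up to rounding
```

Because every PE fires in the same clock, a layer with n inputs finishes in n cycles regardless of the neuron count; the abstract notes that the same ring is reused for the hidden-layer error in the backward pass, which is what lets the forward and backward calculations share hardware.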


Cited By


葉彥智 (2010). Design of a high-speed hardware back-propagation and recurrent neural network with a flexible architecture [Master's thesis, National Taipei University of Technology]. Airiti Library. https://doi.org/10.6841/NTUT.2010.00465
詹雅宇 (2011). Design of a high-speed hardware back-propagation and recurrent neural network with free feedback nodes [Master's thesis, National Taipei University of Technology]. Airiti Library. https://www.airitilibrary.com/Article/Detail?DocID=U0006-1808201112271000
