With the rise of Artificial Intelligence (AI) in recent years, and of deep learning in particular, these techniques have been applied in many fields such as computer vision, speech recognition, and natural language processing. To improve efficiency, the demand for AI accelerators has also been growing. In a neural network (NN), the activation function (AF) provides a nonlinear mapping between the input and output of each node, which improves what the network can learn. In this thesis, we exploit a mathematical property of the Hyperbolic Tangent activation function to reduce hardware cost, and we use the mathematical relationship between the Hyperbolic Tangent and Sigmoid functions to integrate multiple activation functions into a single hardware architecture that shares common modules, further reducing hardware cost. In addition, we propose a new approximate multiplier design; combined with piecewise linear approximation (PWL), it can implement any activation function with less area and lower power.
In recent years, Artificial Intelligence (AI) technology has grown dramatically. Deep learning in particular can be applied to many fields, such as computer vision, speech recognition, and natural language processing. In a neural network, the activation function defines the output of each node. In this thesis, we reduce the hardware cost of the Hyperbolic Tangent function by exploiting one of its mathematical properties. Moreover, we propose a hardware design that integrates the Hyperbolic Tangent and Sigmoid functions according to their mathematical relationship, saving hardware cost by sharing common modules. In addition, we propose a new approximate multiplier; by combining it with piecewise linear approximation (PWL), we can implement any activation function with smaller area and lower power.
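The two ideas named in the abstract can be sketched in software. The first is the exact identity tanh(x) = 2·sigmoid(2x) − 1, which is what allows the two functions to share one hardware datapath. The second is piecewise linear approximation. The sketch below is only illustrative: the segment count, input range, and uniform breakpoints are assumptions for demonstration, not the thesis's actual hardware parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Identity that lets tanh and sigmoid share hardware:
#   tanh(x) = 2 * sigmoid(2x) - 1
for x in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    assert abs(math.tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12

def pwl_tanh(x, segments=16, lo=-4.0, hi=4.0):
    """Illustrative uniform-segment PWL approximation of tanh on [lo, hi]."""
    x = max(lo, min(hi, x))              # clamp: tanh saturates outside the range
    step = (hi - lo) / segments
    i = min(int((x - lo) / step), segments - 1)
    x0 = lo + i * step
    y0 = math.tanh(x0)
    slope = (math.tanh(x0 + step) - y0) / step
    return y0 + slope * (x - x0)         # one multiply and one add per evaluation

# Worst-case error of this toy 16-segment approximation over [-4, 4]
err = max(abs(pwl_tanh(k / 100.0) - math.tanh(k / 100.0))
          for k in range(-400, 401))
```

In hardware, the breakpoints and slopes would sit in a small lookup table, so each evaluation costs one table read, one multiplication, and one addition; the thesis's approximate multiplier targets exactly that multiplication.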