
Using Chebyshev Polynomial Numerical Method for Function Approximation

Abstract

An orthogonal neural network is a neural network developed from the properties of orthogonal functions; it avoids the drawbacks of the traditional feedforward neural network, but obtaining a small training error requires more processing elements and a large amount of data to train the network. In this study, a least-squares method is applied to a limited data set to determine the weights. Lagrange interpolation is used to find the data sets needed to obtain the exact weights, Chebyshev polynomials are chosen as the processing elements of the orthogonal neural network, and simulations are carried out on typical continuous and discrete function-approximation problems. The experimental results show that the numerical method for determining the weights achieves an approximation error comparable to that of the known training method while requiring less convergence time.
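For exposition, the weight-determination step described above can be summarized as follows; the notation (T_k, w_k, the design matrix Φ) is introduced here for illustration and is not taken from the paper:

\[
\hat{f}(x) = \sum_{k=0}^{n} w_k T_k(x), \qquad T_0(x)=1,\quad T_1(x)=x,\quad T_{k+1}(x)=2xT_k(x)-T_{k-1}(x),
\]
\[
w = \arg\min_{w}\ \lVert \Phi w - y \rVert_2^2 = (\Phi^{\top}\Phi)^{-1}\Phi^{\top} y, \qquad \Phi_{ik}=T_k(x_i),
\]

where (x_i, y_i), i = 1, ..., m, are the available training samples and Φ is assumed to have full column rank.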

Parallel Abstract

The orthogonal neural network is a neural network based on the properties of orthogonal functions. It needs many more processing elements if a small training error is desired; therefore, numerous data sets are required to train the orthogonal neural network. In this paper, a least-squares method is proposed to determine the exact weights from limited data sets. By using the Lagrange interpolation method, the data sets required to solve for the exact weights can be calculated. An experiment in approximating typical continuous and discrete functions is given, with the Chebyshev polynomials chosen to generate the processing elements of the orthogonal neural network. The experimental results show that the numerical method for determining the weights matches the known training method in approximation error while requiring less convergence time.
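A minimal runnable sketch of this fitting procedure, assuming standard NumPy, is given below; the function names (chebyshev_design_matrix, fit_chebyshev) and the test function are illustrative choices, not the paper's implementation.

```python
import numpy as np

def chebyshev_design_matrix(x, degree):
    """Evaluate Chebyshev polynomials T_0..T_degree at the points x.

    Uses the recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x) and returns
    an (len(x), degree+1) matrix Phi with Phi[i, k] = T_k(x[i]).
    """
    x = np.asarray(x, dtype=float)
    Phi = np.empty((x.size, degree + 1))
    Phi[:, 0] = 1.0
    if degree >= 1:
        Phi[:, 1] = x
    for k in range(1, degree):
        Phi[:, k + 1] = 2.0 * x * Phi[:, k] - Phi[:, k - 1]
    return Phi

def fit_chebyshev(x, y, degree):
    """Determine the weights w by least squares so that Phi @ w approximates y."""
    Phi = chebyshev_design_matrix(x, degree)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

if __name__ == "__main__":
    # Fit an illustrative continuous test function on [-1, 1] from a limited sample.
    x_train = np.cos(np.pi * (np.arange(9) + 0.5) / 9)   # Chebyshev nodes
    y_train = np.exp(x_train) * np.sin(2 * x_train)
    w = fit_chebyshev(x_train, y_train, degree=8)

    x_test = np.linspace(-1, 1, 201)
    y_hat = chebyshev_design_matrix(x_test, 8) @ w
    err = np.max(np.abs(y_hat - np.exp(x_test) * np.sin(2 * x_test)))
    print(f"max approximation error: {err:.2e}")
```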

