
基於遞迴式奇異值分解之線上模糊極限學習機

Online Fuzzy Extreme Learning Machine Based on Recursive Singular Value Decomposition

Advisor: 歐陽振森

Abstract

In this study, we propose an online fuzzy extreme learning machine based on recursive singular value decomposition (SVD), improving the original fuzzy extreme learning machine so that it can solve online learning problems in classification and regression modeling. As in the original fuzzy extreme learning machine, the weights associated with the fuzzy membership functions in the hidden layer are assigned randomly. However, our approach replaces the Moore-Penrose pseudoinverse with a recursive SVD that computes the optimal output-layer weights as each sample arrives, which makes the method suitable for online learning. Experimental results show that our approach supports online learning while achieving the same modeling accuracy as the original fuzzy extreme learning machine. Moreover, it achieves better modeling accuracy and stability than an existing online sequential learning algorithm.
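The output-weight update described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's actual implementation: the Gaussian form of the fuzzy membership functions, the rank-truncation threshold, and all names (`gaussian_hidden`, `RecursiveSVDSolver`) are assumptions. The idea is to maintain a thin SVD H = U S Vᵀ of the hidden-layer matrix together with the projection M = UᵀT, appending one sample at a time, so that the minimum-norm output weights β = V S⁻¹ UᵀT can be formed at any point without storing past data.

```python
import numpy as np

def gaussian_hidden(X, centers, widths):
    """Hidden layer of Gaussian fuzzy membership functions whose
    centers and widths are assigned at random (illustrative choice)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

class RecursiveSVDSolver:
    """Maintains a thin SVD H = U S V^T of the hidden matrix and the
    projection M = U^T T, updated one (h, t) pair at a time, so the
    output weights beta = V S^{-1} U^T T never require the full data."""

    def __init__(self, n_hidden, n_out):
        self.S = np.zeros(0)              # singular values
        self.V = np.zeros((n_hidden, 0))  # right singular vectors
        self.M = np.zeros((0, n_out))     # U^T T

    def update(self, h, t):
        # Project the new hidden row onto the current right subspace.
        p = self.V.T @ h
        r = h - self.V @ p                # component outside the subspace
        rho = np.linalg.norm(r)
        k = self.S.size
        # Small (k+1)x(k+1) core matrix; its SVD rotates the old factors
        # into the factors of the row-augmented hidden matrix.
        K = np.zeros((k + 1, k + 1))
        K[:k, :k] = np.diag(self.S)
        K[k, :k] = p
        K[k, k] = rho
        Uk, Sk, Vkt = np.linalg.svd(K)
        q = r / rho if rho > 1e-12 else np.zeros_like(r)
        keep = Sk > 1e-10 * Sk[0]         # drop numerically zero directions
        self.S = Sk[keep]
        self.V = np.hstack([self.V, q[:, None]]) @ Vkt.T[:, keep]
        self.M = (Uk.T @ np.vstack([self.M, t[None, :]]))[keep]

    def beta(self):
        # Minimum-norm least-squares output weights V S^{-1} U^T T.
        return self.V @ (self.M / self.S[:, None])

# Toy regression check against the batch pseudoinverse solution.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
T = np.hstack([np.sin(X[:, :1]), X[:, 1:2] ** 2])
centers = rng.normal(size=(10, 3))
widths = rng.uniform(0.5, 2.0, size=10)
H = gaussian_hidden(X, centers, widths)

solver = RecursiveSVDSolver(n_hidden=10, n_out=2)
for h, t in zip(H, T):                    # one sample at a time
    solver.update(h, t)

beta_online = solver.beta()
beta_batch = np.linalg.pinv(H) @ T        # offline ELM-style solution
```

Because every update applies only orthogonal transforms to the stored factors, the online weights should coincide (up to floating-point error) with the batch Moore-Penrose solution, which is the consistency property the abstract claims.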

