
Support Vector Machine-Based Least Trimmed Squares Chebyshev Neural Network and Its Applications

SVR-Based LTS CPBUM Neural Network and Applications

Advisor: 鄭錦聰

Abstract


In everyday life, many things involve artificial intelligence to some degree, and both the development of and the demand for it keep growing; machine learning, neural networks, and support vector machines are frequently applied to everyday problems. Machine learning methods generally provide two capabilities: classification and regression analysis. Classification learns to group data that share common characteristics, while regression analysis finds the relationships among characteristics within a data set. The ultimate goal is to express those relationships as a function curve, build a mathematical model from that curve, and use the model to predict other data sets with similar characteristics.

The Chebyshev Polynomials Based Unified Model (CPBUM) is a neural network model that can cope with noise. When the noise level is low, its learned results are close to the ideal values and it converges quickly; when the data set contains heavy noise, however, the noise degrades learning and the results deviate noticeably from the ideal. This thesis therefore combines the CPBUM with least trimmed squares (LTS) and support vector regression (SVR) to develop a neural network model that handles noise and outliers effectively. LTS combined with SVR first trims the most obvious noisy values, so that the training set is not distorted by outliers and the characteristics of the data can be learned.

The thesis first follows the conventional approach, using ordinary least squares as the initial estimate for LTS trimming, and finds that the learning result depends strongly on the initial fitted curve. It then uses a CPBUM trained for a small number of iterations as the LTS initial estimate. On low-noise data sets with little trimming, the regression result differs little from regression on the untrimmed data; with heavy trimming, however, the untrimmed regression is better, and the quality of learning is inversely related to the number of iterations. To solve this initial-value problem, the thesis proposes combining LTS with SVR for the initial learning of the data set. On low-noise data sets, SVR-based LTS regression is close to regression on the untrimmed data; on heavily noisy data sets, it trims the largest noisy values first, so the regression is better than without trimming. The proposed method is applied to the Sinc function, a dynamic function, the Hermite function, and a chaotic system.

In summary, the proposed SVR-based LTS CPBUM achieves learning results on low-noise data sets comparable to those of the Chebyshev-polynomial-based neural network, while on high-noise data sets it outperforms learning on the untrimmed data.
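The trimming step described above, least trimmed squares, amounts to ranking samples by their squared residuals under some initial predictor and keeping only the h best-fitting ones. A minimal NumPy sketch of that step (the function name and toy data are illustrative, not taken from the thesis):

```python
import numpy as np

def lts_trim(x, y, predict, h):
    """Least trimmed squares step: keep the h samples whose squared
    residuals under an initial predictor are smallest, dropping the
    rest as suspected outliers."""
    r2 = (y - predict(x)) ** 2
    keep = np.sort(np.argsort(r2)[:h])   # indices of the h best-fitting samples
    return x[keep], y[keep]

# Toy usage: a noisy line with one gross outlier injected.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + rng.normal(0.0, 0.05, 20)
y[5] = 10.0                              # inject an outlier
x_t, y_t = lts_trim(x, y, lambda v: 2.0 * v, h=18)
print(y_t.max() < 10.0)                  # the outlier has been trimmed
```

Because the outlier's squared residual dwarfs the ordinary noise, it is always among the samples dropped, so the subsequent fit sees a nearly clean training set.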

Abstract (English)


In everyday life, many things are related to artificial intelligence, and demand for it keeps increasing; machine learning, neural networks, and support vector regression (SVR) are often applied to daily problems. In general, machine learning methods provide two major capabilities: classification and regression analysis. Classification learns to identify the characteristics of data sets, while regression analysis identifies relationships within a data set that can be used to predict other data sets. The Chebyshev Polynomials Based Unified Model (CPBUM) is a neural network that handles noisy data. With low noise, its learning results are close to the true values and it converges very quickly; when the data set contains outliers, however, learning is strongly affected and the predictions are far from ideal. In this thesis, an SVR-based LTS CPBUM is proposed to develop a neural network model that handles noise and outliers. LTS-SVR first trims the obvious noisy samples, so that outliers have less influence on the training set. After using LTS to reduce the noise, it is found that the initial value greatly influences the learning results. Moreover, when LTS-CPBUM-5 regresses a training set with little noise, a higher trimming parameter h yields results that differ little from regression on the untrimmed data; the value of h is proportional to the quality of the learning results. To solve the initial-value problem, SVR-based LTS is proposed to initialize the learning process; this method obtains better results in the experiments. Four cases (the Sinc function, a dynamic system, the Hermite function, and the Mackey-Glass chaotic system) are used for the simulations. To sum up, the SVR-based LTS CPBUM achieves better prediction results than the untrimmed data in the presence of noise and outliers.
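The SVR-based initialization can be pictured as a two-stage fit: an epsilon-insensitive SVR fitted on all samples supplies the residuals for the LTS trim, and a second fit is run on the surviving samples. The sketch below assumes scikit-learn's `SVR` as the regressor; the hyperparameter values and function name are illustrative, not the thesis's settings:

```python
import numpy as np
from sklearn.svm import SVR  # assumed available; any eps-insensitive SVR works

def svr_lts_fit(x, y, h):
    """SVR-based LTS sketch: fit an initial SVR on all data, keep the h
    samples with the smallest squared residuals, then refit on them."""
    init = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(x, y)
    r2 = (y - init.predict(x)) ** 2
    keep = np.argsort(r2)[:h]            # h best-fitting sample indices
    final = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(x[keep], y[keep])
    return final, keep

# Sinc target with sparse gross outliers, echoing the thesis's test cases.
rng = np.random.default_rng(1)
x = np.linspace(-3.0, 3.0, 200)[:, None]
y = np.sinc(x).ravel() + rng.normal(0.0, 0.02, 200)
y[::25] += 3.0                           # inject gross outliers
model, keep = svr_lts_fit(x, y, h=180)
```

Because the epsilon-insensitive, linearly penalized SVR loss is already fairly robust, the initial fit is not dragged toward the outliers, so their residuals rank last and the refit sees an almost clean training set.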

Keywords (English)

SVR, LTS, CPBUM
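The CPBUM named in these keywords is built on a Chebyshev polynomial basis: inputs are expanded through the recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) and a linear readout is learned on the expanded features. A minimal sketch of that expansion (function name and data illustrative):

```python
import numpy as np

def chebyshev_features(x, order):
    """Expand inputs in [-1, 1] into Chebyshev features T_0..T_order
    via the recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    T = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return np.stack(T[:order + 1], axis=-1)

x = np.linspace(-1.0, 1.0, 50)
F = chebyshev_features(x, order=4)
# A least-squares readout on the basis: a tiny Chebyshev-basis regressor.
w, *_ = np.linalg.lstsq(F, np.sin(np.pi * x), rcond=None)
print(F.shape)                           # (50, 5)
```

Orthogonality of the Chebyshev basis on [-1, 1] keeps the readout problem well conditioned, which is one reason such models converge quickly on low-noise data.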
