
Robust Vicinity-Emphasis Support Vector Regression

Advisor: 楊棧雲

Abstract


Standard support vector regression adopts the well-known ε-insensitive loss function. The idea is that setting an ε tube creates an exemption region: a sample that does not fall outside the ε tube never becomes a support vector. Conversely, outliers lying far from the training samples do become support vectors and contribute to the decision function. By the sparsity principle of support vectors, only the support vectors ultimately influence the decision function. This study first confirms that the influence exemption granted by the ε-insensitive tube runs counter to the intent of ideal robust regression, and likewise confirms that removing the influence of outliers effectively strengthens the robustness of the regression function. Accordingly, this study modifies the existing standard support vector machine on these two points, aiming to realize a robust support vector regression. A new loss function is proposed, and on its basis support vector regression is reconstructed by drawing on the ramp loss function, the concave-convex procedure, and related techniques, with modeling and simulation for verification. Experimental results show that, compared with standard support vector regression, the proposed method performs better in the robustness of its error accuracy.
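The record does not reproduce the loss functions themselves, so the following is a reference sketch rather than notation taken from the thesis: the standard ε-insensitive loss, and a common ramp-style truncation of it (the general technique the abstract names), written with residual r = y − f(x) and an assumed truncation level s > ε:

    L_\varepsilon(r) = \max(0,\, |r| - \varepsilon)
    R_{\varepsilon,s}(r) = L_\varepsilon(r) - L_s(r) = \min\bigl(L_\varepsilon(r),\, s - \varepsilon\bigr)

Because the ramp loss is expressed here as a difference of two convex hinges, it is a natural candidate for the concave-convex procedure the abstract mentions: errors beyond s contribute a constant s − ε, so a gross outlier cannot grow its influence on the fit without bound.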

Parallel Abstract


Standard support vector machine regression uses the well-known ε-insensitive loss function. The basic idea is not to care about errors as long as they stay within the ε tube; in other words, samples that do not exceed the ε tube do not become support vectors. By the sparseness principle of support vectors, the final decision function is influenced only by the support vectors. This thesis considers two problems: the influence exemption of the insensitive region makes the regression result non-robust, while outliers lying far away still contribute to the decision function. The purpose of this thesis is to modify the existing standard support vector machine regression in accordance with these two observations, in the hope of completing a robust support vector machine regression. A new loss function is given; support vector machine regression is reconstructed with the ramp loss function and the concave-convex procedure; and the proposed loss function is verified by simulation. Compared with standard support vector machine regression, the proposed method shows better accuracy and stability.
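As a concrete illustration of the general recipe the abstract names (a truncated ramp loss optimized by the concave-convex procedure), here is a minimal sketch of CCCP for a linear ramp-loss SVR in the primal. It is not the thesis's own algorithm: the function name ramp_svr_cccp, the parameters eps, s, and C, and the use of plain subgradient descent for the inner convex problem are all illustrative assumptions.

    import numpy as np

    def ramp_svr_cccp(X, y, C=1.0, eps=0.1, s=1.0,
                      outer_iters=10, inner_iters=500, lr=0.01):
        """Sketch: CCCP for linear ramp-loss SVR, f(x) = w.x + b.

        Objective: 0.5*||w||^2 + C*sum_i [H_eps(r_i) - H_s(r_i)],
        with r_i = y_i - f(x_i), H_a(r) = max(0, |r| - a), and s > eps.
        """
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(outer_iters):
            # CCCP outer step: linearize the concave term -C*H_s at the
            # current solution. beta_i is nonzero only for points whose
            # residual exceeds the cap s, i.e. for gross outliers.
            r = y - X @ w - b
            beta = C * np.sign(r) * (np.abs(r) > s)
            # Inner convex problem: decaying-step subgradient descent.
            for t in range(inner_iters):
                r = y - X @ w - b
                g = C * np.sign(r) * (np.abs(r) > eps)  # subgradient of C*H_eps
                # For |r_i| > s, g_i == beta_i and the two pulls cancel,
                # so points beyond the cap stop dragging the fit -- the
                # robustness mechanism the abstract aims at.
                grad_w = w - X.T @ (g - beta)
                grad_b = -(g - beta).sum()
                step = lr / np.sqrt(t + 1.0)
                w -= step * grad_w
                b -= step * grad_b
        return w, b

    # Toy check: a clean line with five grossly contaminated targets.
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(80, 1))
    y = 1.5 * X[:, 0] + 0.2 * rng.standard_normal(80)
    y[:5] += 20.0                 # inject gross outliers
    print(ramp_svr_cccp(X, y))    # the fit should stay close to slope 1.5

A production version would solve the inner convex problem exactly (for example as a quadratic program in the dual, or by a primal Newton-type method) rather than by subgradient descent; the simple loop above only keeps the sketch self-contained.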

