
A Comparison of Optimization Methods for Large-scale L1-regularized Linear Classification

Advisor: Chih-Jen Lin (林智仁)

Abstract


Large-scale linear classifiers are widely used in document classification and computational linguistics. L1-regularized linear classifiers can additionally be applied to feature selection, but the non-differentiability of the L1 term causes many difficulties in training. In recent years, a variety of optimization methods for L1-regularized linear classification have been proposed, yet so far they have not been rigorously discussed and compared. This thesis carefully examines several representative methods and their implementation issues, and compares them thoroughly through experiments. The results show that coordinate descent methods are, in general, the most suitable optimization methods for L1-regularized linear classification, while Newton-based methods have the fastest convergence in the final stage of training.
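
For concreteness, the optimization problem usually meant by L1-regularized linear classification can be written as below; this is a minimal sketch assuming the logistic loss, with notation chosen for illustration rather than taken verbatim from the thesis:

\min_{w} \; \|w\|_1 + C \sum_{i=1}^{l} \log\bigl(1 + e^{-y_i w^{\top} x_i}\bigr)

Here (x_i, y_i) with y_i \in \{-1, +1\} are the training instances and C > 0 trades the loss against the sparsity-inducing L1 term. The term \|w\|_1 is non-differentiable wherever some component w_j = 0, which is the source of the training difficulty mentioned above.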

Parallel Abstract


Large-scale linear classifiers are useful for document classification and computational linguistics. The L1-regularized form can be used for feature selection, but its non-differentiability causes additional difficulties in training. Various optimization methods have been proposed in recent years, but no serious comparison among them has been made. In this thesis, we carefully address implementation issues of some representative methods and conduct a comprehensive comparison. Results show that coordinate-descent-type methods may be the most suitable in general situations, though Newton-type methods have the fastest final convergence.
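
As a small usage illustration, the sketch below trains an L1-regularized logistic regression model with scikit-learn's liblinear-backed solver (LIBLINEAR comes from the same research group). The synthetic data, the value of C, and all variable names are assumptions made for this example; it is not the experimental setup or code of the thesis.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical synthetic problem standing in for sparse text features.
X, y = make_classification(n_samples=2000, n_features=500,
                           n_informative=20, random_state=0)

# penalty='l1' with the liblinear solver gives an L1-regularized linear
# classifier; a larger C means weaker regularization (fewer zero weights).
clf = LogisticRegression(penalty='l1', solver='liblinear', C=1.0,
                         max_iter=1000)
clf.fit(X, y)

# The L1 penalty drives many weights exactly to zero, which is what makes
# this formulation useful for feature selection.
print("non-zero weights:", np.count_nonzero(clf.coef_))
print("training accuracy:", clf.score(X, y))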

