
Asymptotic Analysis of Stochastic Gradient-Based Estimators for Linear Regression Models

Advisor: 翁久幸
The full text of this thesis will be available for download on 2027/07/20.

Abstract


Stochastic Gradient Descent (SGD) is a widely used method for solving optimization problems. Because each update requires only the first derivative of the loss function, the method is computationally efficient. However, when the proportionality constant in the learning rate is set improperly, SGD becomes unstable, so choosing an appropriate learning rate is an important issue in gradient-descent updates. Another SGD-related algorithm is Averaged Stochastic Gradient Descent (ASGD), which adds an averaging step after each SGD iteration. When the learning rate used by ASGD decreases no more slowly than t^{-1}, the method is proven to be asymptotically efficient. In practice, however, ASGD encounters some problems: (i) it requires a large number of samples because of slow averaging or an improper learning rate, and (ii) it is less stable in high-dimensional problems. In this thesis, we study the asymptotic properties of SGD and ASGD for linear regression models through both theory and simulation. Although we find that ASGD is more sensitive than SGD in some situations, a careful choice of the learning rate can improve the results. Moreover, increasing the number of training samples also improves the performance of ASGD.
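The following is a minimal sketch, assuming a simulated Gaussian linear regression model, of the kind of SGD update described above with a learning rate of the form gamma_t = c/t. The dimension, sample size, and the candidate values of c are illustrative assumptions, not settings taken from the thesis; the point is only to show how the proportionality constant c drives the behaviour of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 5, 2_000                      # dimension and sample size (illustrative)
beta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def sgd(X, y, c):
    """One pass of SGD over the data with learning rate gamma_t = c / t."""
    beta = np.zeros(X.shape[1])
    for t in range(1, len(y) + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        grad = (x_t @ beta - y_t) * x_t   # gradient of the per-sample squared-error loss
        beta -= (c / t) * grad
    return beta

# The quality of the estimate depends strongly on the proportionality constant c.
for c in (0.05, 0.5, 5.0):
    err = np.linalg.norm(sgd(X, y, c) - beta_true)
    print(f"c = {c:>4}: ||beta_hat - beta_true|| = {err:.4f}")
```

When c is too small the iterate barely moves away from its starting point, and when c is too large the early updates are amplified; either way the distance to beta_true tends to grow, which illustrates why the choice of learning rate matters.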

Parallel Abstract


Stochastic Gradient Descent (SGD) is a common method for solving optimization problems. It is computationally efficient because the algorithm relies only on the first derivative of the loss function. However, SGD is known to lack robustness when the proportionality constant in the learning rate is set improperly. Therefore, choosing an appropriate learning rate is an important issue for the SGD update. Another SGD-related algorithm is Averaged Stochastic Gradient Descent (ASGD), which adds an averaging step after the standard SGD update in every iteration. With a learning rate that decreases no more slowly than t^{-1}, ASGD is proven to be asymptotically efficient. In practice, however, ASGD encounters some problems: (i) it requires large samples because of slow averaging or an improper learning rate, and (ii) it is unstable in high-dimensional problems. In this thesis, we study the asymptotic properties of SGD and ASGD for linear regression models in both theory and simulation. We find that although ASGD is more sensitive than SGD in some situations, a careful choice of learning rate can improve the results. Moreover, increasing the number of training samples also improves the performance of ASGD.
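To complement the description of the averaging step, here is a minimal sketch of ASGD under assumed settings: a Polyak-Ruppert average is maintained alongside the SGD iterate, with an illustrative schedule gamma_t = c * t^(-alpha). The constants c and alpha and the simulated data are assumptions for demonstration only, not the thesis's experimental design.

```python
import numpy as np

rng = np.random.default_rng(1)

d, n = 5, 20_000                     # dimension and sample size (illustrative)
beta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ beta_true + rng.normal(scale=1.0, size=n)

def asgd(X, y, c=0.5, alpha=0.75):
    """SGD with learning rate gamma_t = c * t**(-alpha), plus iterate averaging."""
    beta = np.zeros(X.shape[1])      # current SGD iterate
    beta_bar = np.zeros(X.shape[1])  # running average of the iterates
    for t in range(1, len(y) + 1):
        x_t, y_t = X[t - 1], y[t - 1]
        grad = (x_t @ beta - y_t) * x_t
        beta = beta - c * t ** (-alpha) * grad   # standard SGD step
        beta_bar += (beta - beta_bar) / t        # averaging step after the update
    return beta, beta_bar

beta_last, beta_avg = asgd(X, y)
print("last SGD iterate error :", np.linalg.norm(beta_last - beta_true))
print("averaged iterate error :", np.linalg.norm(beta_avg - beta_true))
```

Comparing the two printed errors gives a quick empirical check of the averaging effect: once enough samples have been processed, the averaged iterate typically tracks the true coefficients more closely than the last raw SGD iterate.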

