
The Impact of Signal Decomposition on LSTM for Stock Price Prediction

Advisor: Yuh-Dauh Lyuu

Abstract


Financial time series are typically noisy, non-stationary, and non-linear, which makes accurate prediction a challenging task. Because such data share these characteristics with signals in electrical engineering and communications, the time series themselves can be treated as signals. In communications, tools such as wavelet analysis and empirical mode decomposition are routinely used for time-frequency analysis, breaking a non-stationary series into components of different frequencies for further study. This thesis applies these mathematical tools to time series from the Taiwan financial market and compares how different decomposition methods affect the stock price predictions of a long short-term memory (LSTM) neural network. The results show that decomposing the input signal to a higher level does not necessarily improve model performance. We also find that the hyper-parameters of a deep model have a decisive effect on prediction accuracy, so the Cuckoo Search algorithm is adopted to tune the hyper-parameters of the recurrent neural network before further experiments. Finally, among the decomposition methods considered, preprocessing with a wavelet transform based on the Coiflet-3 basis performs best for a deep learning model that predicts the next-day direction of the TAIEX, reaching an accuracy of 57.6%.
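The thesis itself does not include code here; as a minimal sketch of the kind of wavelet preprocessing the abstract describes, the following Python snippet decomposes a price series with the Coiflet-3 basis using the PyWavelets library. The function name, decomposition level, and toy data are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_components(prices, wavelet="coif3", level=2):
    """Split a 1-D price series into per-band wavelet components.

    Returns one reconstructed series per coefficient band, so the
    original series is (approximately) the sum of the components.
    """
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    components = []
    for i in range(len(coeffs)):
        # Keep only the i-th band, zero the rest, then reconstruct.
        kept = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(kept, wavelet)[: len(prices)])
    return components

# Toy usage: a noisy synthetic "price" series.
prices = np.cumsum(np.random.randn(256)) + 100.0
bands = wavelet_components(prices, level=2)  # 1 approximation + 2 detail bands
```

Each band could then serve as an input channel to an LSTM, which is one common way such decompositions are paired with recurrent models.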

Parallel Abstract


Most financial time series are inherently noisy, non-stationary, and non-linear. Since these time series share the characteristics of signals in electrical engineering, we can also regard them as signals. In this study, we combine a long short-term memory (LSTM) neural network with frequency-domain analysis techniques. Furthermore, because the performance of a model depends heavily on the selection of hyper-parameters, a meta-heuristic algorithm named Cuckoo Search is used to identify suitable hyper-parameters for the model. Our goal is to compare the effect of different decomposition methods on the prediction accuracy for the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The results show that: (1) a higher decomposition level does not improve model performance; (2) for data preprocessing, Coiflet-3 outperforms all other wavelet basis functions compared, predicting the direction of the next-day movement of the TAIEX with 57.6% accuracy.
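For readers unfamiliar with Cuckoo Search, the sketch below shows the core loop of a generic textbook formulation with Lévy flights. It is not the thesis's exact tuning procedure; the objective, bounds, and parameter ranges are hypothetical stand-ins for hyper-parameters such as learning rate or hidden size.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Draw a Lévy-flight step using Mantegna's algorithm."""
    rng = rng or np.random.default_rng()
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(objective, bounds, n_nests=15, pa=0.25, iters=100, seed=0):
    """Minimize `objective` over the box `bounds` (shape (dim, 2))."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fitness = np.array([objective(x) for x in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(iters):
        # New candidate solutions via Lévy flights around the current best.
        for i in range(n_nests):
            step = 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
            candidate = np.clip(nests[i] + step, lo, hi)
            f = objective(candidate)
            j = rng.integers(n_nests)  # replace a random nest if better
            if f < fitness[j]:
                nests[j], fitness[j] = candidate, f
        # Abandon a fraction `pa` of the worst nests and re-seed them.
        n_abandon = max(1, int(pa * n_nests))
        worst = fitness.argsort()[-n_abandon:]
        nests[worst] = rng.uniform(lo, hi, (n_abandon, dim))
        fitness[worst] = [objective(x) for x in nests[worst]]
        best = nests[fitness.argmin()].copy()
    return best, fitness.min()

# Hypothetical usage: two "hyper-parameters" on a toy quadratic objective.
bounds = np.array([[1e-4, 0.1],    # e.g. learning rate
                   [8.0, 256.0]])  # e.g. hidden units
best, score = cuckoo_search(
    lambda x: (x[0] - 0.01) ** 2 + ((x[1] - 64) / 100) ** 2, bounds)
```

In the actual study the objective would be a validation metric of the trained LSTM, making each evaluation expensive; the small nest count and iteration budget reflect that trade-off.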

