
Detailed Record

Author (Chinese): 周家民
Author (English): Zhou, Jia-Min
Title (Chinese): 應用深度學習於股票走勢分析-以台灣市場為例
Title (English): Applying Deep Learning to Predict the Trend of Stock in Taiwan
Advisor (Chinese): 蔡炎龍
Advisor (English): Tsai, Yen-Lung
Committee members (Chinese): 張宜武, 陳天進
Committee members (English): Chang, Yi-Wu; Chen, Ten-Ging
Degree: Master's
Institution: National Chengchi University (國立政治大學)
Department: Department of Applied Mathematics
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: English
Pages: 30
Keywords (Chinese): 深度學習, 神經網路, 卷積神經網路, 長短期記憶, 股票趨勢預測, 市場模擬
Keywords (English): Deep Learning, NN, CNN, LSTM, Stock Trend Forecast, Market Simulation
DOI: http://doi.org/10.6814/NCCU202200774
Abstract:
In this thesis, we combine existing NN, CNN, and LSTM models into a more complex merged model, and we introduce a new preprocessing method for technical indicators that converts them into new indicators through preset thresholds or other conditions. We also adopt some newer techniques, such as LeakyReLU and Nadam, to make the model easier to train. With the same inputs, the merged model substantially outperforms the individual models and far exceeds the simplest baseline prediction method. Adding the preprocessed indicators further improves the accuracy of the original merged model and of the LSTM model by 4.13% and 8.54%, respectively.
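To make the two ideas above concrete, here is a minimal Python/Keras sketch of (a) converting a raw indicator into a discrete signal through preset thresholds and (b) a merged model with NN, CNN, and LSTM branches using LeakyReLU and the Nadam optimizer. The window length, feature count, layer sizes, and RSI thresholds are illustrative assumptions, not the thesis's actual configuration.

# Minimal sketch only: window length, feature count, layer sizes, and the
# RSI thresholds below are assumptions, not the thesis's actual settings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def binarize_rsi(rsi, low=30.0, high=70.0):
    """Turn a raw RSI series into a discrete indicator via preset
    thresholds: +1 oversold, -1 overbought, 0 otherwise (assumed values)."""
    return np.where(rsi < low, 1, np.where(rsi > high, -1, 0))

WINDOW, N_FEATURES = 20, 10      # assumed rolling window and feature count

inputs = keras.Input(shape=(WINDOW, N_FEATURES))

# CNN branch: local patterns within the window
cnn = layers.Conv1D(32, kernel_size=3, padding="same")(inputs)
cnn = layers.LeakyReLU()(cnn)                 # LeakyReLU, per the abstract
cnn = layers.GlobalMaxPooling1D()(cnn)

# LSTM branch: sequential dependencies across the window
lstm = layers.LSTM(32)(inputs)

# NN branch: a fully connected view of the flattened window
nn = layers.Flatten()(inputs)
nn = layers.Dense(32)(nn)
nn = layers.LeakyReLU()(nn)

# Merge all three branches and predict the trend (up vs. down)
merged = layers.Concatenate()([cnn, lstm, nn])
merged = layers.Dense(16)(merged)
merged = layers.LeakyReLU()(merged)
outputs = layers.Dense(1, activation="sigmoid")(merged)

model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.Nadam(),   # Nadam, per the abstract
              loss="binary_crossentropy", metrics=["accuracy"])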
Beyond pure model prediction, we also propose a simple strategy that applies the model's predictions, using a preset threshold to achieve better results. After deducting brokerage fees and the securities transaction tax, the maximum return is about 7%.
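A threshold-gated simulation of this kind could be sketched as below; the 0.6 probability threshold and the cost rates (0.1425% brokerage fee per trade and 0.3% transaction tax on sells, the usual Taiwan-market figures) are assumptions for illustration rather than the thesis's exact settings.

# Illustrative back-test sketch: threshold and cost rates are assumptions.
FEE = 0.001425   # brokerage fee, charged on buys and sells (typical Taiwan rate)
TAX = 0.003      # securities transaction tax, charged on sells

def simulate(prices, up_probs, threshold=0.6):
    """Go long when the predicted up-probability exceeds `threshold`,
    liquidate when it drops below 1 - threshold, otherwise hold."""
    cash, shares = 1.0, 0.0
    for price, p in zip(prices, up_probs):
        if p > threshold and shares == 0.0:        # enter a long position
            shares = cash * (1 - FEE) / price
            cash = 0.0
        elif p < 1 - threshold and shares > 0.0:   # exit the position
            cash = shares * price * (1 - FEE - TAX)
            shares = 0.0
    if shares > 0.0:                               # liquidate at the last price
        cash = shares * prices[-1] * (1 - FEE - TAX)
    return cash - 1.0                              # net return on initial capital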
Table of Contents:
Acknowledgements i
Chinese Abstract ii
Abstract iii
1 Introduction 1
2 Deep Learning 3
2.1 Neural Networks 4
2.2 Fully Connected Neural Networks 4
2.3 Activation Function 5
2.4 Loss Function 7
2.5 Gradient Descent Method 8
2.6 Dropout 11
2.7 L2 Regularization 11
3 Convolutional Neural Network 12
4 Long Short-Term Memory 14
4.1 Recurrent Neural Network 14
4.2 LSTM 15
5 Prediction System 17
5.1 Data Set 17
5.2 Data Preprocessing 17
5.2.1 Technical Indicators 18
5.2.2 Rolling Window 21
5.3 Model Settings 21
5.4 Model Structure 22
5.5 Metrics 24
5.6 Result 25
6 Market Simulation 27
6.1 Strategy 27
6.2 Result 28
7 Conclusion 29
Bibliography 30
Electronic full text: available for viewing after 2025-07-05.