
An Implementation of Distributed Framework of Artificial Neural Network for Big Data Analysis

Design of a Distributed Artificial Neural Network Framework for Big Data Analysis: A Case Study of Financial Time Series Data

Abstract


In this research, we introduce a distributed framework of artificial neural networks (ANNs) to handle real-time analysis of big data and return appropriate results with very short delay. Our experimental results show that training the distributed ANN model converges within 17 seconds on a 24-core cluster platform, and that the multi-model-with-stratification strategy obtains the most true-positive predictions, with nearly 70% precision, at a voting threshold of 0.7. In our system, ANNs are used in the data mining process to identify patterns in financial time series. We implement a framework for training ANNs on a distributed computing platform, adopting Apache Spark to build the underlying computing cluster because of its high-performance in-memory computing. We investigate a number of distributed back-propagation algorithms and techniques, especially those suited to time series prediction, and incorporate them, with some modifications, into our framework. By exposing various options for the modeling details, we give the user flexibility in neural network modeling.
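To make the distributed-training setup concrete, the sketch below shows how a feed-forward ANN could be trained on a Spark cluster using MLlib's built-in MultilayerPerceptronClassifier. It is only an illustration of ANN training on Spark, not the authors' framework; the input path, column names, and layer sizes are assumptions.

    # Minimal sketch (PySpark): distributed training of a feed-forward ANN on Spark.
    # Not the paper's framework; file path, column names, and layer sizes are assumed.
    from pyspark.sql import SparkSession
    from pyspark.ml.classification import MultilayerPerceptronClassifier
    from pyspark.ml.evaluation import MulticlassClassificationEvaluator

    spark = SparkSession.builder.appName("ann-financial-ts").getOrCreate()

    # Assumed schema: "features" is a vector of lagged returns, "label" is 0/1 (down/up).
    data = spark.read.parquet("financial_windows.parquet")   # hypothetical input
    train, test = data.randomSplit([0.8, 0.2], seed=42)

    # layers[0] must equal the feature-vector length; hidden sizes are illustrative.
    mlp = MultilayerPerceptronClassifier(layers=[10, 16, 8, 2], maxIter=200, seed=42)
    model = mlp.fit(train)   # gradient aggregation is distributed across the executors

    predictions = model.transform(test)
    accuracy = MulticlassClassificationEvaluator(metricName="accuracy").evaluate(predictions)
    print(f"test accuracy: {accuracy:.3f}")

    spark.stop()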

Parallel Abstract


This study designs a distributed artificial neural network framework to handle real-time analysis of big data and obtain good results within a very short time. Our experimental results show that training the distributed ANN model on a 24-core cluster platform converges within 17 seconds, and that when predicting with a voting threshold of 0.7, the multi-model-with-stratification approach obtains the most true-positive results with a precision of around 70%. In the system we build, ANNs are used in the data mining stage to discover patterns in financial time series data. We build the framework for training ANNs on a distributed computing platform, using Apache Spark, which offers high-performance in-memory computing, to construct the underlying computing cluster environment. We evaluate several distributed back-propagation algorithms that are particularly suited to predicting financial time series data, adapt them, and integrate them into the framework we design. We also provide many detailed options so that users have a high degree of customization flexibility when building neural network models.
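As a rough illustration of the multi-model-with-stratification idea described above, the sketch below trains one MLP per stratum of the training data and combines their 0/1 predictions by voting, flagging a sample as positive only when the fraction of agreeing models reaches the threshold (0.7 in the reported experiments). The "regime" and "id" columns and the layer sizes are assumptions for illustration, not details taken from the paper.

    # Hedged sketch (PySpark): multi-model with stratification plus threshold voting.
    from pyspark.sql import functions as F
    from pyspark.ml.classification import MultilayerPerceptronClassifier

    VOTING_THRESHOLD = 0.7   # threshold value used in the paper's reported experiments

    def train_stratified_models(train_df, strata_col="regime", layers=(10, 16, 8, 2)):
        """Train one MLP per stratum (e.g., per volatility regime) of the training set."""
        models = []
        for row in train_df.select(strata_col).distinct().collect():
            stratum = train_df.filter(F.col(strata_col) == row[strata_col])
            mlp = MultilayerPerceptronClassifier(layers=list(layers), maxIter=200, seed=42)
            models.append(mlp.fit(stratum))
        return models

    def vote(models, test_df, threshold=VOTING_THRESHOLD):
        """Mark a sample positive only when enough models predict class 1."""
        votes = None
        for i, m in enumerate(models):
            p = m.transform(test_df).select("id", F.col("prediction").alias(f"p{i}"))
            votes = p if votes is None else votes.join(p, on="id")
        mean_vote = sum(F.col(f"p{i}") for i in range(len(models))) / len(models)
        return votes.withColumn("vote_positive", (mean_vote >= threshold).cast("int"))

Raising the threshold trades recall for precision, which is the trade-off the voting-threshold experiments in the abstract explore.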

