

An Improved Algorithm Applied in Training Neural Network-Combined with Genetic Algorithm

Advisor: 蔡榮發

Abstract


The algorithm most commonly used to train back-propagation networks is gradient steepest descent, because it reduces training error effectively. The method nevertheless has drawbacks, such as slow convergence and a tendency to become trapped in local optima. Many improvements have been proposed to address these shortcomings: adding a momentum (inertia) term can accelerate convergence, and global search methods (such as probabilistic hill-climbing algorithms or tabu search) can escape local optima. These improved algorithms have weaknesses of their own, however. The momentum term sometimes contributes little to convergence speed, and probabilistic hill-climbing algorithms usually assume that the error function follows a particular distribution, which is not always the case. Tabu search may find the global optimum, but because it relies on so many random values its solution quality is unstable, and it frequently consumes a great deal of computation time. This thesis proposes an improved method that accelerates convergence and effectively reduces training error without significantly increasing training time. Even so, every algorithm above faces a bottleneck: once the training error reaches a certain level, progress becomes difficult or stalls entirely. At that point, combining an evolutionary algorithm (such as a genetic algorithm) with an appropriate evolving strategy can, in theory, keep refining solution precision toward its limit. The genetic algorithm proposed here emphasizes the evolving strategy rather than the improved operators on which some researchers focus; preliminary experiments confirm that over a long evolution the strategy has greater influence than the operators. Because the combined genetic algorithm runs in parallel, it adds no training time.
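To make the momentum idea concrete, the sketch below applies the standard inertia update delta_w(t) = mu * delta_w(t-1) - lr * grad(E) to a toy quadratic error surface. This is an illustration only, not the thesis's implementation; the learning rate lr, momentum factor mu, and the error function are all assumed values chosen for demonstration.

import numpy as np

def momentum_step(w, grad, velocity, lr=0.1, mu=0.9):
    # Steepest descent with an inertia term: past updates are carried
    # forward, which damps oscillation and speeds convergence.
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Toy error surface E(w) = w1^2 + w2^2, minimized at the origin.
w = np.array([2.0, -3.0])
v = np.zeros_like(w)
for _ in range(50):
    grad = 2.0 * w                # gradient of E at the current weights
    w, v = momentum_step(w, grad, v)
print(w)                          # close to [0, 0] after 50 steps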

Parallel Abstract


Gradient steepest descent (GSD) is often used to train the back-propagation neural network (BPN) because of its excellent performance in reducing training error; however, it also has drawbacks, such as slow convergence and entrapment in local optima. Many improved methods have been proposed to remedy these shortcomings: momentum can be added to accelerate convergence, and global search methods such as probabilistic climbing search and tabu search (TS) can be introduced to escape local optima. Nevertheless, these methods have weaknesses of their own. Added momentum sometimes does little to speed up convergence, and probabilistic climbing methods assume that the error function follows a certain distribution, which is not always true. Although TS may approach the global optimum, its solution quality is unstable because it depends on many random values, and it often requires heavy computation. This thesis proposes an improved method that hastens convergence and decreases training error effectively without requiring much more training time. Even so, every algorithm mentioned above eventually hits a bottleneck: the reduction of training error stagnates at some convergence level. If an evolutionary algorithm such as a genetic algorithm (GA) is then combined with proper evolving strategies, training accuracy can in theory be refined indefinitely toward its maximum precision. This thesis emphasizes evolving strategies rather than evolving operators, and preliminary experiments show that over a long evolution process the influence of the strategies exceeds that of the operators. Because the GA is run with parallel processing, combining it does not increase training time.
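As a hedged illustration of the hybrid scheme described above, the sketch below hands a population of candidate weight vectors to a simple elitist GA, as one might once gradient training has plateaued. The single-neuron network, fitness function, population size, and blend crossover are assumptions chosen for brevity, not the evolving strategy actually proposed in the thesis; the per-individual fitness loop is the step that would be farmed out to parallel processes.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = (X.sum(axis=1) > 0).astype(float)   # toy binary training target

def training_error(w):
    # MSE of a single sigmoid neuron: weights w[:3], bias w[3].
    out = 1.0 / (1.0 + np.exp(-(X @ w[:3] + w[3])))
    return np.mean((out - y) ** 2)

pop = rng.normal(size=(40, 4))          # 40 candidate weight vectors
for gen in range(200):
    fitness = np.array([training_error(w) for w in pop])  # parallelizable
    elite = pop[np.argsort(fitness)[:8]]        # elitist strategy: keep best 8
    parents = elite[rng.integers(0, 8, size=(32, 2))]     # random elite pairs
    alpha = rng.random((32, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    children += rng.normal(scale=0.05, size=children.shape)        # Gaussian mutation
    pop = np.vstack([elite, children])
print(training_error(pop[0]))           # error keeps falling past the plateau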


Cited By


林政弘 (2011). Combining ensemble empirical mode decomposition, genetic algorithms, and extreme learning machines for financial time-series forecasting [Master's thesis, National Taipei University of Technology]. Airiti Library. https://doi.org/10.6841/NTUT.2011.00054
