
Improve Capability of LVRT by STATCOM with Intelligent Control

Advisor: 劉志文

Abstract


Driven by global warming, the need for energy conservation and carbon reduction, and the steadily rising price of fossil fuels, distributed renewable energy sources will inevitably become widespread, and power systems will consequently face dispatch, system-impact, and stability problems. This thesis focuses on combining a static synchronous compensator (STATCOM) with a series dynamic braking resistor and a series dynamic braking inductor to improve the low-voltage ride-through (LVRT) capability of a microgrid, enabling the synchronous generator to ride through short-duration low-voltage events and strengthening system stability during the recovery period. To cope with the time-varying behavior caused by the intermittent renewable supply in the microgrid, the STATCOM controller adopts a neuro-fuzzy network with online learning, achieving adaptive, nonlinear control that delivers a more stable and faster control output across operating conditions. MATLAB/Simulink is used as the simulation platform to verify the feasibility of the neuro-fuzzy control and to observe the resulting improvement in LVRT performance.
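The adaptive controller described above can be illustrated with a minimal sketch: a zero-order Takagi-Sugeno (neuro-fuzzy) network whose consequent weights are adapted online, one sample at a time. Everything below is an illustrative assumption rather than the thesis's actual design: the rule count, membership widths, the saturated proportional target control law, and the use of a plain LMS update in place of whatever error-driven learning rule the thesis employs.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuroFuzzyController:
    """Zero-order Takagi-Sugeno (neuro-fuzzy) network: Gaussian
    memberships over the voltage-error input, singleton consequents
    adapted online by gradient descent (LMS)."""

    def __init__(self, n_rules=5, sigma=0.3, lr=0.2):
        self.centers = np.linspace(-1.0, 1.0, n_rules)  # rule centers, p.u.
        self.sigma = sigma           # common membership width (assumed)
        self.w = np.zeros(n_rules)   # consequent singletons, learned online
        self.lr = lr                 # online learning rate

    def _phi(self, e):
        # Normalized Gaussian firing strengths (sum to 1).
        mu = np.exp(-((e - self.centers) ** 2) / (2 * self.sigma ** 2))
        return mu / mu.sum()

    def output(self, e):
        # Weighted-average defuzzification.
        return float(self._phi(e) @ self.w)

    def update(self, e, u_target):
        # One online LMS step toward the desired control action.
        phi = self._phi(e)
        self.w += self.lr * (u_target - phi @ self.w) * phi

# Hypothetical target control law: reactive-current command
# proportional to the voltage error, saturated at +/-1 p.u.
target = lambda e: np.clip(1.5 * e, -1.0, 1.0)

ctrl = NeuroFuzzyController()
for _ in range(3000):              # stream of operating points
    e = rng.uniform(-1.0, 1.0)     # sampled voltage error
    ctrl.update(e, target(e))      # learn online, sample by sample

grid = np.linspace(-1.0, 1.0, 41)
max_err = max(abs(ctrl.output(e) - target(e)) for e in grid)
print(f"max fit error after online training: {max_err:.3f}")
```

Because the network is linear in its consequent weights, the LMS update converges for any learning rate below 2, which is why the controller can keep adapting while the microgrid's operating point drifts; the thesis's controller additionally tunes the memberships, which this sketch omits for brevity.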


