
Deep Reinforcement Learning for Conjunctive Use of Surface Water and Groundwater

Advisor: 胡明哲
Co-advisor: 蔡瑞彬 (Jun-Pin Tsai)

Abstract


In recent years, growing water demands from major sectors (domestic, industrial, etc.), together with the unstable surface water supply caused by extreme weather events, have made the conjunctive use of surface water and groundwater an increasingly important research topic in water resources management. This allocation approach sustainably utilizes surface water and groundwater resources to satisfy water demands in every period, while complying with constraints on surface water availability and on the drawdown induced by groundwater pumping. Previous studies have mostly addressed such sequential decision-making problems with models that couple simulation and various optimization algorithms (simulation-optimization). This study instead introduces deep reinforcement learning (DRL) to search for the optimal water allocation policy. Reinforcement learning is a subfield of machine learning known for handling complex decision-making tasks, and it has achieved remarkable results across many domains. In this study, hydrologic simulation models of the surface water and groundwater systems are integrated into the reinforcement learning framework: the surface water and groundwater models are implemented as an environment with which the agent interacts on its own, and the positive and negative feedback obtained through repeated interaction with this environment is used to train a policy function approximated by a neural network, thereby optimizing the decision process and identifying the best conjunctive-use policy. The groundwater model uses MODFLOW to simulate the groundwater level changes induced by pumping. Conjunctive-use policies trained with several different reward designs are compared, and their performance is evaluated under three designed inflow scenarios (normal, dry, and extreme). The results show that the reinforcement learning agent continually improves its water-use policy through the experience gained by interacting with the environment constructed in this study, and gradually learns to make sensible, appropriate choices in different situations.

Abstract (English)


Conjunctive use of surface water and groundwater resources has become an important topic in water management due to increasing water demands across multiple categories and the unstable availability of surface water under extreme weather events. It is an allocation approach that sustainably utilizes both surface water and groundwater resources to satisfy water demands in every period, subject to constraints on surface water availability and groundwater drawdown limits. Previous studies addressed such sequential decision-making problems mostly by adopting simulation-optimization models. In this study, however, Deep Reinforcement Learning (DRL), a subfield of machine learning that is well known for handling complex decision-making tasks and has gained a strong reputation across various domains, is introduced to seek the optimal policy. It is implemented by integrating hydrologic simulations of both the surface water and groundwater systems into an RL-based optimization framework, where a custom environment containing the surface water and groundwater models is established for the RL agent to interact with. The positive and negative feedback obtained throughout the agent-environment interaction process is used to optimize conjunctive water use policies. MODFLOW is applied to simulate the change in groundwater level caused by pumping. Policies for conjunctive water management trained with different reward-function designs are compared, and the resulting performance is evaluated under three designed scenarios: a normal year, a dry year, and an extreme year. Results show that the RL agent improves its policy through interactive experience with the constructed environment and gradually learns to make intelligent decisions under different situations.
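The agent-environment loop described above can be illustrated with a minimal, self-contained sketch. This is not the thesis implementation: the class name, state variables, mass balances, and reward coefficients below are illustrative assumptions, and the toy storage/drawdown updates stand in for the actual surface water model and the MODFLOW groundwater simulation. It only shows the structure: each step, the agent picks the share of demand met by pumping, the environment advances the water balances, and the reward penalizes unmet demand and drawdown-limit violations.

```python
class ConjunctiveUseEnv:
    """Toy conjunctive-use environment (illustrative, not the thesis model).

    Each period the agent chooses what fraction of the water demand to pump
    from groundwater; the remainder is drawn from surface water, limited by
    a simple reservoir balance.
    """

    def __init__(self, demand=10.0, inflows=(12.0, 8.0, 4.0, 9.0)):
        self.demand = demand        # water demand per period (assumed constant)
        self.inflows = inflows      # surface inflow per period (one scenario)
        self.max_drawdown = 5.0     # drawdown limit enforced through the reward
        self.reset()

    def reset(self):
        self.t = 0
        self.storage = 20.0         # initial reservoir storage
        self.drawdown = 0.0         # cumulative groundwater drawdown
        return self._obs()

    def _obs(self):
        # State seen by the agent: period index, storage, current drawdown.
        return (self.t, self.storage, self.drawdown)

    def step(self, gw_fraction):
        """gw_fraction in [0, 1]: share of demand met by groundwater pumping."""
        gw_use = gw_fraction * self.demand
        sw_use = min(self.demand - gw_use, self.storage + self.inflows[self.t])
        # Simple mass balances standing in for the surface water and
        # MODFLOW groundwater simulations of the actual study.
        self.storage = max(self.storage + self.inflows[self.t] - sw_use, 0.0)
        self.drawdown = max(self.drawdown + 0.4 * gw_use - 1.0, 0.0)  # recovery term
        shortage = self.demand - sw_use - gw_use
        # Reward design: penalize unmet demand and drawdown-limit violations.
        reward = -shortage**2 - 10.0 * max(self.drawdown - self.max_drawdown, 0.0)
        self.t += 1
        done = self.t >= len(self.inflows)
        return self._obs(), reward, done


# One episode with a fixed placeholder policy; a DRL algorithm would instead
# learn gw_fraction from the observed state.
env = ConjunctiveUseEnv()
obs, total = env.reset(), 0.0
done = False
while not done:
    action = 0.3
    obs, reward, done = env.step(action)
    total += reward
```

In the actual framework, the balance updates inside `step` would be replaced by calls into the coupled hydrologic models, and the placeholder policy by a neural network trained on the accumulated rewards; comparing reward designs then amounts to swapping the penalty terms in `step` and retraining.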

