In currently deployed mobile networks, the handover parameters that control the handover procedure are tuned manually: operators observe handover failures at each base station over a long period and adjust the parameters based on experience. This approach does not reliably improve handover performance, and it incurs high operating expenditure (OPEX) and is time-consuming. Handover parameter settings, such as Time-To-Trigger (TTT) and Hysteresis, strongly affect mobility performance and are closely tied to users' mobility states. Inappropriate settings cause handovers to be triggered too early or too late, degrading user experience and wasting network resources through Radio Link Failure, the Ping-Pong Effect, and Handover Failure. Prior research on handover parameter optimization in LTE systems has focused mainly on the macro-cell scenario; heterogeneous networks that mix cell types, in particular those containing picocells, have received far less attention. This thesis therefore targets heterogeneous networks with picocells and, drawing on the Q-learning concept from reinforcement learning, proposes an adaptive self-optimization algorithm that dynamically tunes handover parameters according to each cell's individual conditions, reducing the occurrence of handover problems and improving overall network performance.
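To make the idea concrete, the following is a minimal, illustrative sketch of tabular Q-learning applied to per-cell handover parameter selection. It is not the thesis's actual algorithm: the candidate (TTT, Hysteresis) pairs, the two-state mobility abstraction, and the toy reward function (which penalizes too-late handovers for fast users and ping-pong for slow users) are all assumptions made for this example; in a real deployment the reward would be computed from counted RLF, ping-pong, and handover-failure events reported by the cell.

```python
import random

# Hypothetical candidate handover settings: (TTT in ms, Hysteresis in dB).
# Each action selects one pair for the cell.
ACTIONS = [(40, 1.0), (80, 2.0), (160, 3.0), (320, 4.0)]
STATES = ["low_mobility", "high_mobility"]  # assumed per-cell state abstraction

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: one entry per (cell state, action) pair, initialized to zero.
Q = {(s, a): 0.0 for s in STATES for a in range(len(ACTIONS))}

def choose_action(state, rng):
    """Epsilon-greedy selection over the candidate parameter pairs."""
    if rng.random() < EPSILON:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

def reward(state, action, rng):
    """Toy reward: in practice this would be derived from observed
    RLF / ping-pong / handover-failure counts in the cell."""
    ttt, hys = ACTIONS[action]
    if state == "high_mobility":
        # Fast users: long TTT / large hysteresis -> too-late handovers (RLF).
        return -ttt / 100.0 - hys + rng.gauss(0, 0.1)
    # Slow users: short TTT / small hysteresis -> ping-pong handovers.
    return -(400 - ttt) / 100.0 - (5.0 - hys) + rng.gauss(0, 0.1)

def train(episodes=2000, seed=0):
    """Run Q-learning and return the best (TTT, Hys) pair per state."""
    rng = random.Random(seed)
    state = rng.choice(STATES)
    for _ in range(episodes):
        a = choose_action(state, rng)
        r = reward(state, a, rng)
        next_state = rng.choice(STATES)  # mobility mix drifts over time
        best_next = max(Q[(next_state, b)] for b in range(len(ACTIONS)))
        # Standard Q-learning update rule.
        Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])
        state = next_state
    return {s: ACTIONS[max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])]
            for s in STATES}

if __name__ == "__main__":
    print(train())
```

Under this toy reward, the agent learns the short-TTT/small-hysteresis setting for the high-mobility state and the long-TTT/large-hysteresis setting for the low-mobility state, mirroring the intuition that the right parameters depend on users' mobility.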