In this thesis, we propose the residual correction method to improve the learning speed of the cerebellar model articulation controller (CMAC); we call this method the acceleration method. The learning structure of the acceleration method is similar to that of the conventional CMAC, but its output is divided into two parts: conventional CMAC learning and residual correction learning, each with its own learning rate. First, we replace the rectangular membership function of conventional CMAC learning with a triangular membership function. The learning results fit the target function very well, so we know that the triangular membership function improves the quality of CMAC learning; however, it also slows the convergence of the learning error and causes the convergence curve to fluctuate. To remedy this, we apply the acceleration method to the CMAC with triangular membership functions (TCMAC). The learning error of the accelerated TCMAC converges faster than that of the conventional TCMAC, and the acceleration method remains effective under any change of parameters, including the number of training samples (N) and the sensing parameter (Ne), and even for different target functions. Compared with the conventional TCMAC, the accelerated TCMAC effectively speeds up error convergence and reduces the fluctuation of the convergence curve, shortening the learning time required to reach the desired accuracy.
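The contrast between the rectangular activation of conventional CMAC and the triangular membership function of TCMAC can be sketched as follows; the function names and the choice of a symmetric field centered at `center` with total width `width` are illustrative assumptions, not the thesis's exact formulation.

```python
def rect_membership(x, center, width):
    # Conventional CMAC: binary (rectangular) activation --
    # a cell is either fully active or inactive.
    return 1.0 if abs(x - center) <= width / 2 else 0.0

def tri_membership(x, center, width):
    # TCMAC: triangular activation -- peaks at the cell center
    # and falls off linearly to zero at the edges of the field.
    return max(0.0, 1.0 - abs(x - center) / (width / 2))
```

Because the triangular membership value varies smoothly with the input, nearby inputs produce graded rather than identical activations, which is consistent with the abstract's observation that TCMAC fits the target function more closely than the rectangular version.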
In this thesis, we propose the residual correction method to improve the learning speed of CMAC; the author calls this method the acceleration method. The output of our scheme is divided into two parts: conventional learning and residual correction learning. With this new learning scheme, both the learning results and the convergence rate improve markedly. First, the acceleration method for improving the learning speed of CMAC is proposed, based on the concept of residual correction from numerical analysis. Then, according to the acceleration method, a new learning structure different from the traditional CMAC learning structure is designed; this structure requires two outputs to obtain fine learning results. Next, the influence of varying several important CMAC parameters, including the training samples, memory size, learning rate, and membership function, is discussed. Finally, for the proposed method with the corresponding learning structure and parameter variations, simulation results of illustrative examples are given to demonstrate the excellent performance of our proposed method. It is believed that the research in this thesis will be helpful for applications of CMAC.
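The two-part learning scheme described above can be sketched as a minimal 1-D CMAC with a second weight table that learns the residual of the conventional output at its own learning rate. Everything here, including the cell layout, table sizes, and the particular rates, is an illustrative assumption rather than the thesis's exact design.

```python
import numpy as np

def make_cmac(n_cells=64, ne=4, lo=0.0, hi=1.0):
    """Minimal 1-D CMAC sketch with a residual-correction table."""
    w_main = np.zeros(n_cells)  # conventional CMAC weights
    w_res = np.zeros(n_cells)   # residual-correction weights

    def cells(x):
        # Each input activates ne overlapping memory cells.
        base = int((x - lo) / (hi - lo) * (n_cells - ne))
        return slice(base, base + ne)

    def predict(x):
        idx = cells(x)
        # Final output is the sum of the two parts.
        return w_main[idx].sum() + w_res[idx].sum()

    def train(xs, ys, lr_main=0.5, lr_res=0.2, epochs=40):
        for _ in range(epochs):
            for x, y in zip(xs, ys):
                idx = cells(x)
                y_main = w_main[idx].sum()
                y_res = w_res[idx].sum()
                # Part 1: conventional CMAC update toward the target.
                w_main[idx] += lr_main * (y - y_main) / ne
                # Part 2: the residual table absorbs the leftover
                # error at its own (different) learning rate.
                w_res[idx] += lr_res * ((y - y_main) - y_res) / ne

    return predict, train

# Usage: learn one period of a sine as a stand-in target function.
predict, train = make_cmac()
xs = np.linspace(0.0, 1.0, 40)
ys = np.sin(2 * np.pi * xs)
train(xs, ys)
```

The key design point mirrored from the abstract is that the two tables use separate learning rates (`lr_main`, `lr_res`): the conventional table tracks the target while the residual table corrects whatever error remains, which is the mechanism credited with faster, less oscillatory convergence.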