Abstract

This paper proposes a novel intelligent control scheme using a type-2 fuzzy neural network (type-2 FNN) system. The control scheme is developed from a type-2 FNN controller and an adaptive compensator. The type-2 FNN combines a type-2 fuzzy logic system (FLS) with a neural network structure and is trained with an optimal learning algorithm. The properties of the type-1 FNN system, namely its parallel computation scheme and parameter convergence, are readily extended to type-2 FNN systems. In addition, a robust adaptive control scheme that combines the adaptive type-2 FNN controller with a compensated controller is proposed for nonlinear uncertain systems. Simulation results are presented to illustrate the effectiveness of our approach.

1. Introduction

In recent years, fuzzy systems (FSs) and neural networks (NNs) have been successfully applied to nonlinear system identification and control [1–11]. The fuzzy neural network (FNN) system realizes an FS within the NN structure. The FNN has thus become an active subject in many areas owing to its advantages, such as universal approximation, learning ability, and convergence of parameters [2, 8, 12]. These advantages are established by training the parameters of the FNN through iterations. In particular, the backpropagation (BP) algorithm (also known as the gradient descent method) is usually adopted to tune the parameters of the FNN, which consist of the fuzzy sets and the weighting factors of the NN [2, 8, 12, 13]. In each iteration, all parameters of the FNN are adjusted to reduce the error between the desired and actual outputs, with a cost function serving as the indicator to be minimized. Dynamic and optimal learning rates for the FNN have therefore been proposed to accelerate the convergence of the BP algorithm [3, 14–16].

To treat the “uncertainty information” problem, Zadeh proposed the concept of type-2 fuzzy sets, an extension of ordinary (type-1) fuzzy sets [17]. Subsequently, Mendel and Karnik developed a complete theory of type-2 fuzzy logic systems (FLSs) [7, 10, 18–20]. These systems are characterized by IF-THEN rules, but type-2 fuzzy rules are more complex than type-1 ones because their antecedent and consequent sets are type-2 fuzzy sets [7, 10]. Type-2 fuzzy systems (T2 FSs) can outperform type-1 fuzzy systems (T1 FSs): T2 FSs are described by type-2 fuzzy membership functions that are characterized by more design degrees of freedom [19, 21]. This approach has been adopted in many applications, for example, system identification, nonlinear control, and signal processing [1, 4, 10, 14, 21–23].

In this paper, interval-valued type-2 fuzzy membership functions and interval sets are implemented in the network structure, called the type-2 fuzzy neural network (type-2 FNN). The type-2 FNN thus has more design degrees of freedom to enhance performance. The analysis and applications of the type-2 FNN are presented. The type-2 FNN is a multilayered connectionist network that realizes a type-2 fuzzy inference system, and it can be constructed from a set of type-2 fuzzy rules. The type-2 FNN uses type-2 fuzzy linguistic processing in both the antecedent and consequent parts; the consequent part produces the output through type reduction and defuzzification. The computation of an interval type-2 FNN is more complex than that of its type-1 counterpart. In addition, we show that the characteristics of the FNN, namely its fuzzy inference and convergence properties, can be extended to the type-2 FNN. Based on the Lyapunov theorem, rigorous proofs are presented to guarantee the convergence of the type-2 FNN and the stability of the system. An adaptive control scheme using the type-2 FNN is presented to treat the control problem of nonlinear uncertain systems. Simulation results demonstrate the effectiveness and performance of the proposed type-2 FNN system.

The paper is organized as follows. In Section 2, we briefly introduce the type-2 FNN system. Section 3 presents the main results on the adaptive control scheme and optimal learning for the type-2 FNN controller. In Section 4, simulation results for the control of an uncertain chaotic system are presented. Concluding remarks are given in Section 5.

2. Type-2 Fuzzy Neural Network Systems (Type-2 FNN)

As shown in the previous literature and applications, fuzzy neural systems based on type-2 fuzzy systems (T2 FSs) can outperform those based on type-1 fuzzy systems (T1 FSs). T2 FSs are described by type-2 fuzzy membership functions that are characterized by more design degrees of freedom [19, 21]. Therefore, using T2 FSs has the potential to outperform using T1 FSs, especially in uncertain environments. Herein, interval-valued type-2 fuzzy membership functions and interval sets are utilized to implement the type-2 fuzzy neural network (type-2 FNN). Details are introduced as follows.

2.1. System Structure

The construction of the $j$th component of the type-2 FNN system is shown in Figure 1; it is a fuzzy inference system realized in a neural network structure [1–3, 9, 15]. Compared with the type-1 FNN, the major differences are that the type-1 fuzzy membership functions (MFs) are replaced by type-2 ones and that the consequent part is an interval set. We first describe the signal propagation and the basic function of every node in each layer. In the following, the subscript $ij$ indicates the $j$th term of the $i$th input, $O^{(k)}_{ij}$, where $j = 1, \dots, l$, and the superscript $(k)$ denotes the $k$th layer.

Layer 1: Input Layer
For the $i$th node of layer 1, the net input and output are represented as
$$O_i^{(1)} = w_i^{(1)} x_i^{(1)}, \tag{2.1}$$
where the weights $w_i^{(1)} = 1$, $i = 1, \dots, n$, and $x_i^{(1)}$ represents the $i$th input to the $i$th node of layer 1.

Layer 2: Membership Layer
In this layer, each node performs a type-2 interval fuzzy MF, as shown in Figure 2. Note that when all T2 FSs are of interval type, the firing set and the fired-rule output set are interval valued, which simplifies the computational effort enormously. We introduce two cases for the output of layer 2 [7, 10, 18–20].

Case 1. For the Gaussian MF with uncertain mean, as shown in Figure 2(a),
$$O_{ij}^{(2)} = \exp\!\left(-\frac{1}{2}\,\frac{\bigl(O_i^{(1)} - m_{ij}\bigr)^2}{\sigma_{ij}^2}\right) = \begin{cases}\overline{O}_{ij}^{(2)} & \text{as } m_{ij} = \underline{m}_{ij},\\ \underline{O}_{ij}^{(2)} & \text{as } m_{ij} = \overline{m}_{ij}.\end{cases} \tag{2.2}$$

Case 2. For the Gaussian MF with uncertain variance, as shown in Figure 2(b),
$$O_{ij}^{(2)} = \exp\!\left(-\frac{1}{2}\,\frac{\bigl(O_i^{(1)} - m_{ij}\bigr)^2}{\sigma_{ij}^2}\right) = \begin{cases}\overline{O}_{ij}^{(2)} & \text{as } \sigma_{ij} = \overline{\sigma}_{ij},\\ \underline{O}_{ij}^{(2)} & \text{as } \sigma_{ij} = \underline{\sigma}_{ij},\end{cases} \tag{2.3}$$
where $m_{ij}$ and $\sigma_{ij}$ represent the center (or mean) and the width (or variance), respectively. As shown in Figure 2, type-2 interval MFs can be represented as intervals bounded by upper and lower MFs, denoted $\overline{\mu}_{F_i}$ and $\underline{\mu}_{F_i}$, respectively. Therefore, the output $O_{ij}^{(2)}$ is represented as the interval $[\underline{O}_{ij}^{(2)}, \overline{O}_{ij}^{(2)}]$.
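To make the interval computation of (2.2) and (2.3) concrete, the following Python sketch (ours, not from the paper; the function names are illustrative) evaluates the lower and upper membership grades of an interval type-2 Gaussian MF for both cases:

```python
import math

def gaussian(x, m, sigma):
    """Type-1 Gaussian membership grade exp(-0.5*((x-m)/sigma)**2)."""
    return math.exp(-0.5 * ((x - m) / sigma) ** 2)

def interval_mf_uncertain_variance(x, m, sigma_lower, sigma_upper):
    """Case 2 (Figure 2(b)): fixed mean, sigma in [sigma_lower, sigma_upper].
    A wider Gaussian dominates a narrower one pointwise, so the upper MF
    uses sigma_upper and the lower MF uses sigma_lower."""
    return gaussian(x, m, sigma_lower), gaussian(x, m, sigma_upper)

def interval_mf_uncertain_mean(x, m_lower, m_upper, sigma):
    """Case 1 (Figure 2(a)): fixed sigma, mean in [m_lower, m_upper].
    The upper MF is 1 inside [m_lower, m_upper] and follows the nearer
    Gaussian outside; the lower MF follows the farther Gaussian."""
    if x < m_lower:
        upper = gaussian(x, m_lower, sigma)
    elif x > m_upper:
        upper = gaussian(x, m_upper, sigma)
    else:
        upper = 1.0
    lower = min(gaussian(x, m_lower, sigma), gaussian(x, m_upper, sigma))
    return lower, upper
```

For any input, both functions return the interval $[\underline{O}_{ij}^{(2)}, \overline{O}_{ij}^{(2)}]$ that the rule layer consumes.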

Layer 3: Rule Layer
The links in this layer implement the antecedent matching and work like the rule engine of a type-2 FLS. Here, the simple PRODUCT operation is chosen. Then, for the $j$th rule node,
$$O_j^{(3)} = \prod_{i=1}^{n} w_{ij}^{(3)} O_{ij}^{(2)} \;\Longrightarrow\; \overline{O}_j^{(3)} = \prod_{i=1}^{n} w_{ij}^{(3)}\,\overline{O}_{ij}^{(2)},\qquad \underline{O}_j^{(3)} = \prod_{i=1}^{n} w_{ij}^{(3)}\,\underline{O}_{ij}^{(2)}, \tag{2.4}$$
where the weights $w_{ij}^{(3)}$ are set to unity. As in layer 2, the output $O_j^{(3)}$ is represented as the interval $[\underline{O}_j^{(3)}, \overline{O}_j^{(3)}]$.

Layer 4: Output Layer
The links in this layer implement the consequent matching, type reduction, and defuzzification [7, 10, 14, 18–20]. A type reducer combines all fired-rule output sets, just as a type-1 defuzzifier combines type-1 rule output sets; the result is a T1 FS called the type-reduced set. Finally, we defuzzify the type-reduced set to obtain a crisp output, that is,
$$\hat{y} = O^{(4)} = \frac{O_R^{(4)} + O_L^{(4)}}{2}, \tag{2.5}$$
where
$$O_R^{(4)} = \sum_{j=1}^{l} f_R^j\,\overline{w}_j^{(4)} = \sum_{j=1}^{R} \underline{O}_j^{(3)}\,\overline{w}_j^{(4)} + \sum_{k=R+1}^{l} \overline{O}_k^{(3)}\,\overline{w}_k^{(4)}, \tag{2.6}$$
$$O_L^{(4)} = \sum_{j=1}^{l} f_L^j\,\underline{w}_j^{(4)} = \sum_{j=1}^{L} \overline{O}_j^{(3)}\,\underline{w}_j^{(4)} + \sum_{k=L+1}^{l} \underline{O}_k^{(3)}\,\underline{w}_k^{(4)}. \tag{2.7}$$

According to the results of [2, 3], normalization is not used here, which simplifies the computation of the type-2 FNN system in real-time applications. Moreover, in order to obtain $O_L^{(4)}$ and $O_R^{(4)}$, we first need to find the switch points $R$ and $L$ by the so-called Karnik-Mendel procedure [10, 19, 20]. Without loss of generality, it is assumed that the precomputed $\underline{w}_j^{(4)}$ and $\overline{w}_j^{(4)}$ are arranged in ascending order, that is, $\underline{w}_1^{(4)} \le \underline{w}_2^{(4)} \le \cdots \le \underline{w}_l^{(4)}$ and $\overline{w}_1^{(4)} \le \overline{w}_2^{(4)} \le \cdots \le \overline{w}_l^{(4)}$ [7, 10, 20]. The Karnik-Mendel procedure for type reduction is as follows:
R1: compute $O_R^{(4)}$ in (2.6) by initially setting $f_R^j = \frac{1}{2}(\underline{O}_j^{(3)} + \overline{O}_j^{(3)})$ for $j = 1,\dots,l$, and let $y_r' \triangleq O_R^{(4)}$;
R2: find $R$ ($1 \le R \le l-1$) such that $\overline{w}_R^{(4)} \le y_r' \le \overline{w}_{R+1}^{(4)}$;
R3: compute $O_R^{(4)}$ in (2.6) with $f_R^j = \underline{O}_j^{(3)}$ for $j \le R$ and $f_R^j = \overline{O}_j^{(3)}$ for $j > R$, and let $y_r \triangleq O_R^{(4)}$;
R4: if $y_r \ne y_r'$, go to step R5; if $y_r = y_r'$, stop and set $O_R^{(4)} = y_r$;
R5: set $y_r'$ equal to $y_r$, and return to step R2.
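The iterative procedure R1–R5 can be sketched in Python as follows. This is the standard (normalized) Karnik-Mendel computation of the right endpoint; the paper itself drops the normalization, so treat this as an illustrative sketch rather than the paper's exact layer-4 computation. All names are ours.

```python
def km_right_endpoint(w_upper, f_lower, f_upper, tol=1e-9, max_iter=100):
    """Karnik-Mendel iteration for the right endpoint of the type-reduced set.
    w_upper: consequent values sorted in ascending order;
    f_lower / f_upper: lower and upper firing strengths of each rule."""
    l = len(w_upper)
    # R1: start from the midpoints of the firing intervals.
    f = [(lo + hi) / 2.0 for lo, hi in zip(f_lower, f_upper)]
    y = sum(fj * wj for fj, wj in zip(f, w_upper)) / sum(f)
    for _ in range(max_iter):
        # R2: locate the switch point R with w[R] <= y <= w[R+1].
        R = next(j for j in range(l - 1) if w_upper[j] <= y <= w_upper[j + 1])
        # R3: lower firing strengths up to R, upper ones beyond it.
        f = [f_lower[j] if j <= R else f_upper[j] for j in range(l)]
        y_new = sum(fj * wj for fj, wj in zip(f, w_upper)) / sum(f)
        # R4/R5: stop once the endpoint no longer moves, else iterate.
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y
```

The left endpoint is computed symmetrically with the roles of the lower and upper firing strengths exchanged; the iteration is known to converge in at most $l$ passes.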

Subsequently, the computation of $O_L^{(4)}$ follows a similar procedure. Thus, the input/output representation of the type-2 FNN system with uncertain mean is
$$\hat{y}\bigl(\underline{m}_{ij}, \overline{m}_{ij}, \sigma_{ij}, \underline{w}_j, \overline{w}_j\bigr) = \frac{1}{2}\left[\sum_{j=1}^{R} \underline{O}_j^{(3)}\,\overline{w}_j^{(4)} + \sum_{k=R+1}^{l} \overline{O}_k^{(3)}\,\overline{w}_k^{(4)} + \sum_{j=1}^{L} \overline{O}_j^{(3)}\,\underline{w}_j^{(4)} + \sum_{k=L+1}^{l} \underline{O}_k^{(3)}\,\underline{w}_k^{(4)}\right]. \tag{2.8}$$
The adjustable parameters of the type-2 FNN system with uncertain mean are therefore $\underline{m}$, $\overline{m}$, $\sigma$, $\underline{w}$, and $\overline{w}$. Furthermore, with the type-2 MFs with uncertain variance shown in Figure 2(b), the output simplifies to
$$\hat{y}\bigl(m_{ij}, \underline{\sigma}_{ij}, \overline{\sigma}_{ij}, w_j\bigr) = \frac{1}{2}\sum_{j=1}^{l}\bigl(\underline{O}_j^{(3)} + \overline{O}_j^{(3)}\bigr)\,w_j^{(4)}. \tag{2.9}$$
The adjustable parameters of the type-2 FNN system with uncertain variance are $m$, $\underline{\sigma}$, $\overline{\sigma}$, and $w$. Therefore, when the rule number is $R$, the numbers of parameters of an $n$-input, one-output type-2 FNN system are $(3n+2)\times R$ and $(3n+1)\times R$ for uncertain mean and uncertain variance, respectively. Under the same conditions, a type-1 FNN system has $(2n+1)\times R$ adjustable parameters ($m$, $\sigma$, and $w$).

2.2. Adaptive Control Scheme and Learning Algorithm

According to the results of [2, 3], the model reference adaptive scheme of the type-2 FNN control system is shown in Figure 3. The control objective is to make the output $Y$ of the nonlinear plant follow the reference input $R$ while minimizing the system error $e$. The control scheme uses the system error $e = R - Y$ through the type-2 FNN controller (type-2 FNNC) to generate the proper control signal $u_C$. In order to minimize the system error, the weights of the type-2 FNN are updated online through a dynamic gradient-descent learning algorithm. The cost function to be minimized is defined as
$$E_C = \frac{1}{2}(R - Y)^2. \tag{2.10}$$
That is, our goal is to minimize the tracking error. For training the type-2 FNN, we utilize the well-known backpropagation algorithm with a time-varying learning rate [4, 14, 15, 24]:
$$W(k+1) = W(k) + \Delta W = W(k) - \eta(k)\,\frac{\partial E_C(k)}{\partial W}, \tag{2.11}$$
where $\eta(k)$ and $W = [m, \underline{\sigma}, \overline{\sigma}, w]$ represent the time-varying learning rate and the tuning parameters of the type-2 FNN with uncertain variance, respectively. To obtain good online performance, avoid local minima, and guarantee system stability, we use Lyapunov theory to derive an adaptive learning algorithm with a time-varying learning rate that speeds up convergence. From (2.10) and (2.11), we have
$$\Delta W = -\eta(k)\,\frac{\partial E_C}{\partial W} = -\eta(k)\,\frac{\partial E_C}{\partial Y}\frac{\partial Y}{\partial u}\frac{\partial u}{\partial \hat{y}}\frac{\partial \hat{y}}{\partial W} = \eta(k)\,(R - Y)\,Y_u\,\frac{\partial \hat{y}}{\partial W}, \tag{2.12}$$
where $Y_u \triangleq \partial Y/\partial u$ denotes the system sensitivity and $\hat{y}$ denotes the type-2 FNN controller's output. The update laws are
$$m_{ij} \leftarrow m_{ij} + \Delta m_{ij},\qquad w_j^{(4)} \leftarrow w_j^{(4)} + \Delta w_j^{(4)},\qquad \underline{\sigma}_{ij} \leftarrow \underline{\sigma}_{ij} + \Delta\underline{\sigma}_{ij},\qquad \overline{\sigma}_{ij} \leftarrow \overline{\sigma}_{ij} + \Delta\overline{\sigma}_{ij}. \tag{2.13}$$
Subsequently, we obtain the time-varying learning rate for the parameters using the results of [24]. We then have the following theorem.

Theorem 2.1. Let the type-2 FNN be trained by the backpropagation algorithm (2.11). Then the closed-loop nonlinear system is stable if the learning rate is chosen as
$$0 < \eta(k) < \frac{2}{\|P_{\max}\|^2}, \tag{2.14}$$
where $P_{\max} \triangleq [P_{1,\max}\ P_{2,\max}\ P_{3,\max}\ P_{4,\max}\ P_{5,\max}\ P_{6,\max}]^T$. In addition, one has the following optimal learning rate, preserving high-speed convergence and nonlinear system stability:
$$\eta_{\mathrm{opt}}(k) = \frac{1}{\bigl\|Y_u\,\partial \hat{y}(k)/\partial W\bigr\|^2}. \tag{2.15}$$
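The effect of (2.15) can be checked numerically. The sketch below is ours; it assumes a controller that is locally linear in its parameters, so that the error recursion $e(k+1) = e(k)\bigl(1 - \eta\,\|Y_u\,\partial\hat{y}/\partial W\|^2\bigr)$ derived in the appendix holds exactly. Under that assumption the optimal rate drives the error to zero in a single step, while any rate inside the bound (2.14) still shrinks it:

```python
def optimal_eta(Yu, grad_yhat):
    """Optimal learning rate (2.15): 1 / ||Yu * d yhat/dW||^2."""
    return 1.0 / sum((Yu * g) ** 2 for g in grad_yhat)

def error_after_step(e, Yu, grad_yhat, eta):
    """Error recursion e(k+1) = e(k) * (1 - eta * ||Yu * d yhat/dW||^2),
    which follows from the gradient update (2.11)-(2.12)."""
    return e * (1.0 - eta * sum((Yu * g) ** 2 for g in grad_yhat))
```

With $Y_u = 0.5$ and $\partial\hat{y}/\partial W = [1, 2]$, the optimal rate is $1/1.25 = 0.8$, and one step with that rate annihilates the error.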

Proof. See the appendix.

As described above, this is a model-free control approach for nonlinear systems; that is, the designed control scheme does not use the system dynamic model. In practical systems, the dynamics are usually not known exactly, and the sensitivity must be estimated. The system sensitivity $Y_u$ can be obtained from a type-2 FNN identifier (type-2 FNNI) in each iteration, provided the identifier is efficient and accurate (i.e., the type-2 FNNI output approximates the nonlinear plant output $Y$). In that case, $Y_u$ can be represented in terms of the type-2 FNNI's parameters $m$, $\underline{\sigma}$, $\overline{\sigma}$, $w$. In general, however, the calculation of $Y_u$ requires complex, time-varying computation, which often makes it unsuitable for real-time industrial control applications. Therefore, based on the system dynamic model, a simple modification of the adaptation laws is introduced below.

3. Robust Adaptive Control Scheme Using Type-2 FNN System

3.1. Nonlinear System Description

An $n$th-order nonlinear dynamic system in companion (controllability canonical) form is given by
$$x^{(n)} = F(\mathbf{x}) + G(\mathbf{x})\,u + D,\qquad y = x, \tag{3.1}$$
where $u$ and $y$ are the control input and the system output, respectively, $\mathbf{x} = [x\ \dot{x}\ \cdots\ x^{(n-1)}]^T \in \mathbb{R}^{n\times 1}$, $F(\cdot)$ and $G(\cdot)$ are unknown continuous nonlinear functions, and $D$ is a bounded external disturbance or system uncertainty. In order for system (3.1) to be controllable, $G(\mathbf{x})$ must be invertible for all $\mathbf{x} \in U_C \subset \mathbb{R}^{n\times 1}$.

Our purpose is to design a robust adaptive control scheme which guarantees boundedness of all closed-loop variables and tracking of a given reference trajectory $\mathbf{y}_d = [y_d\ \dot{y}_d\ \cdots\ y_d^{(n-1)}]^T \in \mathbb{R}^n$. Define the tracking error $\mathbf{e}$ as
$$\mathbf{e} = \mathbf{y}_d - \mathbf{y} = [e\ \dot{e}\ \cdots\ e^{(n-1)}]^T \in \mathbb{R}^n. \tag{3.2}$$
If the plant dynamics were well known, the ideal control law $u^*$ could be designed by the feedback linearization approach [25]:
$$u^* = G(\mathbf{x})^{-1}\bigl[y_d^{(n)} - F(\mathbf{x}) - D(t) + \mathbf{K}\mathbf{e}\bigr], \tag{3.3}$$
where $\mathbf{K} = [k_n\ k_{n-1}\ \cdots\ k_1] \in \mathbb{R}^{1\times n}$. The positive control gains ($k_i > 0$, $i = 1,\dots,n$) are chosen such that all roots of the polynomial $s^n + k_1 s^{n-1} + \cdots + k_n = 0$ lie in the open left-half plane. Substituting (3.3) into (3.1) yields
$$e^{(n)} + k_1 e^{(n-1)} + \cdots + k_n e = 0, \tag{3.4}$$
which implies that $\lim_{t\to\infty} e(t) = 0$. However, the nonlinear functions $F(\mathbf{x})$ and $G(\mathbf{x})$ are not well known in general, so the ideal control law (3.3) cannot be obtained. To solve this problem, the adaptive type-2 FNN control system is proposed to approximate the ideal control law (3.3).
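The stability claim behind (3.4) is easy to verify numerically. The sketch below is ours; the gains and step size are illustrative. It integrates the second-order error equation $\ddot{e} + k_1\dot{e} + k_2 e = 0$ with a forward-Euler step and shows the error decaying to zero for positive gains:

```python
def error_dynamics_step(e, edot, k1, k2, dt):
    """One forward-Euler step of the closed-loop error equation (3.4)
    for n = 2: e'' + k1*e' + k2*e = 0."""
    eddot = -k1 * edot - k2 * e
    return e + dt * edot, edot + dt * eddot

def simulate_error(e0, edot0, k1, k2, dt, steps):
    """Integrate the error dynamics from (e0, edot0) for `steps` steps."""
    e, edot = e0, edot0
    for _ in range(steps):
        e, edot = error_dynamics_step(e, edot, k1, k2, dt)
    return e, edot
```

With $k_1 = k_2 = 4$ the characteristic polynomial $s^2 + 4s + 4$ has a double root at $s = -2$ (critically damped), so a unit initial error decays to essentially zero within a few seconds.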

3.2. Design for Type-2 FNN Control System

The configuration of the proposed robust type-2 FNN control system is depicted in Figure 4. The type-2 FNN controller $u_f$ is combined with the compensated controller $u_m$ to generate the control signal $u_C$; that is, the control law is
$$u_C = u_f + u_m. \tag{3.5}$$
From (2.9), the control input produced by the type-2 FNN with uncertain variance, which is used to approximate the ideal control (3.3), can be written as
$$u_f = \mathbf{w}^T \mathbf{O}_3. \tag{3.6}$$
The minimum approximation error $\varepsilon$ is defined as
$$\varepsilon = u^* - u_f^*. \tag{3.7}$$

By the universal approximation theorem [3, 8, 10], there exist optimal parameters $\mathbf{w}^*$ such that $u_f(\mathbf{w}^*) = u_f^*$ can approximate $u^*$ as closely as desired. Consequently, (3.7) can be rewritten as
$$u^* = u_f^* + \varepsilon = \mathbf{w}^{*T}\mathbf{O}_3^* + \varepsilon. \tag{3.8}$$
From (3.1), (3.5), and (3.8), the system tracking-error equation becomes
$$\dot{\mathbf{e}} = \boldsymbol{\Lambda}\mathbf{e} + \mathbf{B}_G\,(u^* - u_C), \tag{3.9}$$
where
$$\boldsymbol{\Lambda} = \begin{bmatrix} 0 & 1 & \cdots & 0\\ \vdots & & \ddots & \vdots\\ 0 & 0 & \cdots & 1\\ -k_n & -k_{n-1} & \cdots & -k_1 \end{bmatrix},\qquad \mathbf{B}_G = \begin{bmatrix} 0\\ \vdots\\ 0\\ G(\mathbf{x}) \end{bmatrix}. \tag{3.10}$$
Subsequently, defining $\tilde{u} = u^* - u_C$ together with the parameter errors $\tilde{\mathbf{w}} = \mathbf{w}^* - \mathbf{w}$ and $\tilde{\mathbf{O}}_3 = \mathbf{O}_3^* - \mathbf{O}_3$, we obtain
$$\tilde{u} = u_f^* + \varepsilon - u_f - u_m = \mathbf{w}^{*T}\mathbf{O}_3^* - \mathbf{w}^{T}\mathbf{O}_3 + \varepsilon - u_m = \tilde{\mathbf{w}}^{T}\mathbf{O}_3 + \mathbf{w}^{T}\tilde{\mathbf{O}}_3 + \tilde{\mathbf{w}}^{T}\tilde{\mathbf{O}}_3 + \varepsilon - u_m. \tag{3.11}$$
Using the linearization technique, the Taylor expansion of $\tilde{\mathbf{O}}_3$ about the current parameter estimates is
$$\tilde{\mathbf{O}}_3 = \mathbf{O}_C^{T}\tilde{\mathbf{m}} + \mathbf{O}_R^{T}\tilde{\boldsymbol{\sigma}}_R + \mathbf{O}_L^{T}\tilde{\boldsymbol{\sigma}}_L + \mathbf{H}_0, \tag{3.12}$$
where $\mathbf{O}_C = \partial\mathbf{O}_3/\partial\mathbf{m}$, $\mathbf{O}_R = \partial\mathbf{O}_3/\partial\boldsymbol{\sigma}_R$, and $\mathbf{O}_L = \partial\mathbf{O}_3/\partial\boldsymbol{\sigma}_L$ are evaluated at the current estimates, $\tilde{\mathbf{m}} = \mathbf{m}^* - \mathbf{m}$, $\tilde{\boldsymbol{\sigma}}_R = \boldsymbol{\sigma}_R^* - \boldsymbol{\sigma}_R$, $\tilde{\boldsymbol{\sigma}}_L = \boldsymbol{\sigma}_L^* - \boldsymbol{\sigma}_L$, and $\mathbf{H}_0$ represents the higher-order terms. The $j$th entry of $\mathbf{O}_3$, namely $h_j = (\underline{O}_j^{(3)} + \overline{O}_j^{(3)})/2$, $j = 1,\dots,l$, denotes the $j$th output of the type-2 fuzzy antecedent matching. Substituting (3.12) into (3.11) and lumping $\varepsilon$, $\mathbf{H}_0$, and the cross terms into $\Delta_0$ gives
$$\tilde{u} = \tilde{\mathbf{w}}^{T}\mathbf{O}_4 + \mathbf{w}^{T}\bigl(\mathbf{O}_C^{T}\tilde{\mathbf{m}} + \mathbf{O}_R^{T}\tilde{\boldsymbol{\sigma}}_R + \mathbf{O}_L^{T}\tilde{\boldsymbol{\sigma}}_L\bigr) - u_m + \Delta_0. \tag{3.13}$$
From (3.9) and (3.13), we have
$$\dot{\mathbf{e}} = \boldsymbol{\Lambda}\mathbf{e} + \mathbf{B}_G\bigl[\tilde{\mathbf{w}}^{T}\mathbf{O}_4 + \mathbf{w}^{T}\bigl(\mathbf{O}_C^{T}\tilde{\mathbf{m}} + \mathbf{O}_R^{T}\tilde{\boldsymbol{\sigma}}_R + \mathbf{O}_L^{T}\tilde{\boldsymbol{\sigma}}_L\bigr) + \Delta_0 - u_m\bigr]. \tag{3.14}$$

Theorem 3.1. Consider the nonlinear system (3.1) with the adaptive control input (3.5), where the type-2 FNN controller $u_f$ is designed as in (3.6) and the compensated controller $u_m$ as in (3.16) with the estimation gain $\hat{\delta}$ given in (3.16). Here $\mathbf{w} = [w_1\ w_2\ \cdots\ w_l]^T \in \mathbb{R}^{l\times 1}$; $\gamma_w$, $\gamma_m$, $\gamma_R$, $\gamma_L$, and $\gamma_\delta$ are positive constants; and $\mathbf{P}$ is a symmetric positive definite matrix that satisfies
$$\boldsymbol{\Lambda}^T\mathbf{P} + \mathbf{P}\boldsymbol{\Lambda} = -\mathbf{Q}, \tag{3.15}$$
where $\mathbf{Q}$ is a symmetric positive definite matrix selected by the designer. Then the stability of the closed-loop system using the type-2 FNN is guaranteed by the adaptation laws
$$\dot{\mathbf{w}} = \gamma_w\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\mathbf{O}_4,\qquad \dot{\mathbf{m}} = \gamma_m\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\mathbf{O}_C\,\mathbf{w},\qquad \dot{\boldsymbol{\sigma}}_R = \gamma_R\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\mathbf{O}_R\,\mathbf{w},$$
$$\dot{\boldsymbol{\sigma}}_L = \gamma_L\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\mathbf{O}_L\,\mathbf{w},\qquad \dot{\hat{\delta}} = \gamma_\delta\,\bigl|\mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigr|,\qquad u_m = \hat{\delta}\,\mathrm{sgn}\bigl(\mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigr). \tag{3.16}$$
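The Lyapunov equation (3.15) is a linear matrix equation that the designer solves offline for P given Λ and Q. The sketch below (ours) checks a candidate P for a second-order companion matrix; the numerical values are the ones we read off from the simulation section, equation (4.3), i.e., gains $k_1 = k_2 = 4$ with $\mathbf{Q} = \begin{bmatrix}20 & 4\\ 4 & 3\end{bmatrix}$ and $\mathbf{P} = \begin{bmatrix}10 & 2.5\\ 2.5 & 1\end{bmatrix}$, and should be regarded as an assumed reading of that garbled display:

```python
def matmul2(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lyapunov_residual(Lam, P, Q):
    """Residual of (3.15): Lam^T P + P Lam + Q (zero when P solves it)."""
    LamT = [[Lam[j][i] for j in range(2)] for i in range(2)]
    S, T = matmul2(LamT, P), matmul2(P, Lam)
    return [[S[i][j] + T[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

# Companion matrix for n = 2 with gains k1 = k2 = 4 (assumed values).
Lam = [[0.0, 1.0], [-4.0, -4.0]]
P = [[10.0, 2.5], [2.5, 1.0]]
Q = [[20.0, 4.0], [4.0, 3.0]]
```

The residual vanishes identically for these matrices, which is a useful sanity check before running the adaptation laws.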

Proof. We consider the Lyapunov candidate
$$V\bigl(\mathbf{e}, \tilde{\mathbf{w}}, \tilde{\mathbf{m}}, \tilde{\boldsymbol{\sigma}}_R, \tilde{\boldsymbol{\sigma}}_L, \tilde{\delta}, t\bigr) = \frac{1}{2}\mathbf{e}^T\mathbf{P}\mathbf{e} + \frac{1}{2\gamma_w}\tilde{\mathbf{w}}^T\tilde{\mathbf{w}} + \frac{1}{2\gamma_m}\tilde{\mathbf{m}}^T\tilde{\mathbf{m}} + \frac{1}{2\gamma_R}\tilde{\boldsymbol{\sigma}}_R^T\tilde{\boldsymbol{\sigma}}_R + \frac{1}{2\gamma_L}\tilde{\boldsymbol{\sigma}}_L^T\tilde{\boldsymbol{\sigma}}_L + \frac{1}{2\gamma_\delta}\tilde{\delta}^2, \tag{3.17}$$
where the estimation error of the uncertainty bound is defined as $\tilde{\delta} = \delta - \hat{\delta}$. Taking the derivative of (3.17) along the error dynamics (3.14) yields
$$\dot{V} = \frac{1}{2}\dot{\mathbf{e}}^T\mathbf{P}\mathbf{e} + \frac{1}{2}\mathbf{e}^T\mathbf{P}\dot{\mathbf{e}} - \frac{1}{\gamma_w}\tilde{\mathbf{w}}^T\dot{\mathbf{w}} - \frac{1}{\gamma_m}\tilde{\mathbf{m}}^T\dot{\mathbf{m}} - \frac{1}{\gamma_R}\tilde{\boldsymbol{\sigma}}_R^T\dot{\boldsymbol{\sigma}}_R - \frac{1}{\gamma_L}\tilde{\boldsymbol{\sigma}}_L^T\dot{\boldsymbol{\sigma}}_L - \frac{1}{\gamma_\delta}\tilde{\delta}\dot{\hat{\delta}}$$
$$= \frac{1}{2}\mathbf{e}^T\bigl(\boldsymbol{\Lambda}^T\mathbf{P} + \mathbf{P}\boldsymbol{\Lambda}\bigr)\mathbf{e} + \mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\tilde{u} - \frac{1}{\gamma_w}\tilde{\mathbf{w}}^T\dot{\mathbf{w}} - \frac{1}{\gamma_m}\tilde{\mathbf{m}}^T\dot{\mathbf{m}} - \frac{1}{\gamma_R}\tilde{\boldsymbol{\sigma}}_R^T\dot{\boldsymbol{\sigma}}_R - \frac{1}{\gamma_L}\tilde{\boldsymbol{\sigma}}_L^T\dot{\boldsymbol{\sigma}}_L - \frac{1}{\gamma_\delta}\tilde{\delta}\dot{\hat{\delta}}$$
$$= -\frac{1}{2}\mathbf{e}^T\mathbf{Q}\mathbf{e} + \mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigl[\tilde{\mathbf{w}}^T\mathbf{O}_4 + \mathbf{w}^T\bigl(\mathbf{O}_C^T\tilde{\mathbf{m}} + \mathbf{O}_R^T\tilde{\boldsymbol{\sigma}}_R + \mathbf{O}_L^T\tilde{\boldsymbol{\sigma}}_L\bigr)\bigr] + \mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigl(\Delta_0 - u_m\bigr) - \frac{1}{\gamma_w}\tilde{\mathbf{w}}^T\dot{\mathbf{w}} - \frac{1}{\gamma_m}\tilde{\mathbf{m}}^T\dot{\mathbf{m}} - \frac{1}{\gamma_R}\tilde{\boldsymbol{\sigma}}_R^T\dot{\boldsymbol{\sigma}}_R - \frac{1}{\gamma_L}\tilde{\boldsymbol{\sigma}}_L^T\dot{\boldsymbol{\sigma}}_L - \frac{1}{\gamma_\delta}\tilde{\delta}\dot{\hat{\delta}}. \tag{3.18}$$
Substituting the adaptation laws (3.16) into (3.18), the parameter-error terms cancel, and (3.18) reduces to
$$\dot{V} = -\frac{1}{2}\mathbf{e}^T\mathbf{Q}\mathbf{e} + \mathbf{e}^T\mathbf{P}\mathbf{B}_G\,\Delta_0 - \hat{\delta}\bigl|\mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigr| - \bigl(\delta - \hat{\delta}\bigr)\bigl|\mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigr| \le -\frac{1}{2}\mathbf{e}^T\mathbf{Q}\mathbf{e} - \bigl|\mathbf{e}^T\mathbf{P}\mathbf{B}_G\bigr|\bigl(\delta - |\Delta_0|\bigr) \le 0, \tag{3.19}$$
where $\delta \ge |\Delta_0|$ is the uncertainty bound estimated by $\hat{\delta}$. Since $\dot{V}$ is negative semidefinite, $\mathbf{e}$, $\tilde{\mathbf{w}}$, $\tilde{\mathbf{m}}$, $\tilde{\boldsymbol{\sigma}}_R$, $\tilde{\boldsymbol{\sigma}}_L$, and $\tilde{\delta}$ are bounded. Let $\phi(t) = \frac{1}{2}\mathbf{e}^T\mathbf{Q}\mathbf{e} \le -\dot{V}$; integrating with respect to time, we have
$$\int_0^t \phi(\tau)\,d\tau \le V(0) - V(t). \tag{3.20}$$
Since $V(0)$ is bounded and $V(t)$ is nonincreasing and bounded below, it follows that
$$\lim_{t\to\infty} \int_0^t \phi(\tau)\,d\tau < \infty. \tag{3.21}$$
Differentiating $\phi(t)$ with respect to time gives
$$\dot{\phi}(t) = \mathbf{e}^T\mathbf{Q}\dot{\mathbf{e}}. \tag{3.22}$$
Since all the variables on the right-hand side of (3.14) are bounded, $\dot{\mathbf{e}}$ is also bounded, and hence $\phi(t)$ is uniformly continuous [25]. By Barbalat's lemma [25], $\lim_{t\to\infty}\phi(t) = 0$. Therefore $\lim_{t\to\infty}\mathbf{e}(t) = 0$, and the stability of the closed-loop system using the type-2 FNN is guaranteed.

Remark 3.2. Comparing the computational loads of the control schemes shown in Figures 3 and 4, the scheme of Figure 4 clearly requires less computation per iteration (almost half). In addition, a compensator is designed to improve the control performance. Therefore, the adaptive control scheme of Figure 4 is well suited for practical applications.

3.3. Optimal Learning Rate Algorithm for Type-2 FNN Control System

According to the gradient method, the parameter update laws of the type-2 FNN system are given in (2.12) and (2.13). In order to obtain the system sensitivity and reduce the computational complexity, we compare (2.12) with (3.16). Rewriting (2.12) componentwise gives
$$\Delta w_j = \eta_w(k)\,Y_u\,e\,\frac{\underline{O}_j^{(3)} + \overline{O}_j^{(3)}}{2},\qquad
\Delta m_{ij} = \eta_m(k)\,Y_u\,e\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial m_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial m_{ij}}\right),$$
$$\Delta \overline{\sigma}_{ij} = \eta_R(k)\,Y_u\,e\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial \overline{\sigma}_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial \overline{\sigma}_{ij}}\right),\qquad
\Delta \underline{\sigma}_{ij} = \eta_L(k)\,Y_u\,e\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial \underline{\sigma}_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial \underline{\sigma}_{ij}}\right), \tag{3.23}$$
and transferring (3.16) to discrete-time form gives
$$\Delta w_j \approx t_s\,\gamma_w\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,O_{4j} = \rho_w\,\mathbf{e}^T\mathbf{P}\boldsymbol{\chi}\,\frac{\underline{O}_j^{(3)} + \overline{O}_j^{(3)}}{2},\qquad
\Delta m_{ij} \approx t_s\,\gamma_m\,\mathbf{e}^T\mathbf{P}\mathbf{B}_G\,O_{Cij}\,w_j = \rho_m\,\mathbf{e}^T\mathbf{P}\boldsymbol{\chi}\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial m_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial m_{ij}}\right),$$
$$\Delta \overline{\sigma}_{ij} \approx \rho_R\,\mathbf{e}^T\mathbf{P}\boldsymbol{\chi}\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial \overline{\sigma}_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial \overline{\sigma}_{ij}}\right),\qquad
\Delta \underline{\sigma}_{ij} \approx \rho_L\,\mathbf{e}^T\mathbf{P}\boldsymbol{\chi}\,\frac{w_j}{2}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial \underline{\sigma}_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial \underline{\sigma}_{ij}}\right), \tag{3.24}$$
where $t_s$ denotes the sampling time, $\boldsymbol{\chi} = [0\ \cdots\ 0\ 1]^T \in \mathbb{R}^{n\times 1}$ (so that $\mathbf{B}_G = G(\mathbf{x})\,\boldsymbol{\chi}$), and $\rho_i = t_s\,\gamma_i\,G(\mathbf{x})$, $i = w, m, R, L$, can be viewed as the learning rates of the type-2 FNN parameters. Letting $\rho_i = \eta_i$, $i = w, m, R, L$, and comparing each equation of (3.23) with (3.24), we obtain
$$\eta_i\,Y_u\,e = \rho_i\,\mathbf{e}^T\mathbf{P}\boldsymbol{\chi},\qquad i = w, m, R, L. \tag{3.25}$$
Thus, the system sensitivity can be replaced by
$$Y_u = \frac{\mathbf{e}^T\mathbf{P}\boldsymbol{\chi}}{e}. \tag{3.26}$$
Therefore, the optimal learning rate based on $Y_u$ is chosen as
$$\eta^+(k) = \frac{1}{\bigl\|Y_u\,\partial\hat{y}(k)/\partial W\bigr\|^2}. \tag{3.27}$$
Details of the derivation of the optimal learning rate are given in the appendix.
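Equation (3.26) replaces the hard-to-compute plant sensitivity by a quantity built only from the tracking-error vector and the last column of P. A minimal sketch (ours; names are illustrative):

```python
def sensitivity_substitute(e_vec, P, e):
    """Sensitivity substitute (3.26): Yu ~= (e^T P chi) / e, where
    chi = [0, ..., 0, 1]^T simply selects the last column of P."""
    n = len(e_vec)
    return sum(e_vec[i] * P[i][n - 1] for i in range(n)) / e
```

Because only a vector-column product is involved, the substitute costs $O(n)$ per step instead of requiring an identifier network.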

4. Simulation Results: Tracking Control of Duffing Forced Oscillator System

Tracking control of the Duffing forced oscillator [26, 27] is considered to illustrate the effectiveness of our approach. Consider the Duffing forced oscillator
$$\ddot{y}(t) + c_2\,\dot{y}(t) + c_1\,y(t) + y^3(t) = c_3\cos(c_4 t) + u(t), \tag{4.1}$$
where $C = [c_1\ c_2\ c_3\ c_4]$ are constant coefficients. Letting $x_1 = y(t)$ and $x_2 = \dot{y}(t)$, system (4.1) can be rewritten as
$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix} + \begin{bmatrix}0\\ 1\end{bmatrix}(F + G u + D),\qquad y = \begin{bmatrix}1 & 0\end{bmatrix}\begin{bmatrix}x_1\\ x_2\end{bmatrix}, \tag{4.2}$$
where $F = -c_1 x_1 - c_2 x_2 - x_1^3 + c_3\cos(c_4 t)$, $G = 1$, and $D$ denotes the external disturbance, assumed to be a square wave with amplitude $\pm 0.5$ and period $2\pi$. Here, we set $C = [1\ 0\ 12\ 1]$. The sampling time is chosen as 0.01 second, and the initial state is $\mathbf{x}(0) = [3\ 3]^T$. The open-loop simulation results of the Duffing forced oscillator are shown in Figure 5, where an oscillation phenomenon is observed. Our control objective is to use the adaptive type-2 FNN control scheme so that the output tracks the desired trajectory. The following four cases are considered for comparison.
Case 1: type-2 FNNC with uncertain variance using $\eta^+$ (optimal learning rate) and $\eta_{\mathrm{fixed}}$.
Case 2: type-2 FNNC with uncertain mean using $\eta^+$ (optimal learning rate) and $\eta_{\mathrm{fixed}}$.
Case 3: type-1 and type-2 FNNC with uncertain variance using the optimal learning rate $\eta^+$.
Case 4: type-1 and type-2 FNNC with uncertain mean using the optimal learning rate $\eta^+$.
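The open-loop behavior shown in Figure 5 can be reproduced with a simple explicit-Euler integration of (4.2). The sketch below is ours; the coefficients used in the demonstration are illustrative (a lightly damped, forced, hard-spring Duffing system), not necessarily the exact values used in the paper:

```python
import math

def duffing_step(x1, x2, t, u, c, dt):
    """One Euler step of (4.2):
       x1' = x2
       x2' = F + G*u, with F = -c1*x1 - c2*x2 - x1**3 + c3*cos(c4*t)
       and G = 1; the disturbance D is folded into u here for brevity."""
    c1, c2, c3, c4 = c
    F = -c1 * x1 - c2 * x2 - x1 ** 3 + c3 * math.cos(c4 * t)
    return x1 + dt * x2, x2 + dt * (F + u)

def simulate_open_loop(c, x0, dt, steps):
    """Uncontrolled (u = 0) trajectory of the Duffing oscillator."""
    x1, x2 = x0
    traj = [(x1, x2)]
    for k in range(steps):
        x1, x2 = duffing_step(x1, x2, k * dt, 0.0, c, dt)
        traj.append((x1, x2))
    return traj
```

The cubic stiffness term keeps the uncontrolled trajectory bounded while the forcing sustains the oscillation, which is the behavior the controller must suppress.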

Case 1. Type-2 FNNC with uncertain variance using $\eta^+$ (optimal learning rate) and $\eta_{\mathrm{fixed}}$. To compare the simulation results under different learning rates, we construct the controller using the type-2 FNN system with uncertain variance. First, the parameters are chosen as
$$\mathbf{K} = [4\ 4],\qquad \mathbf{Q} = \begin{bmatrix}20 & 4\\ 4 & 3\end{bmatrix},\qquad \mathbf{P} = \begin{bmatrix}10 & 2.5\\ 2.5 & 1\end{bmatrix}. \tag{4.3}$$
The number of rules of the type-2 FNN with uncertain variance is set to eight, and the initial values of the coefficients are chosen as
$$\mathbf{x}(0) = [3\ 3]^T,\qquad m_i = \begin{bmatrix}\tfrac{5}{2} & \tfrac{25}{14} & \tfrac{15}{14} & \tfrac{5}{14} & -\tfrac{5}{14} & -\tfrac{15}{14} & -\tfrac{25}{14} & -\tfrac{5}{2}\\[2pt] \tfrac{5}{2} & \tfrac{25}{14} & \tfrac{15}{14} & \tfrac{5}{14} & -\tfrac{5}{14} & -\tfrac{15}{14} & -\tfrac{25}{14} & -\tfrac{5}{2}\end{bmatrix},$$
$$\sigma_{ij} = \tfrac{10}{7},\qquad \overline{\sigma}_{ij} = \tfrac{15}{7},\qquad \underline{\sigma}_{ij} = \tfrac{5}{7},\qquad w_j = 0,\qquad \hat{\delta}(0) = 0.01,\qquad \eta_\delta = 0.01. \tag{4.4}$$
The fixed learning rates $\rho_i$, $i = w, m, R, L$, are defined as
$$\eta_{\mathrm{fixed}}:\quad \eta_i \triangleq \rho_i = 0.1,\ i = m, R, L;\qquad \eta_w \triangleq \rho_w = 1. \tag{4.5}$$
Note that the optimal learning rate becomes invalid when the initial weights are $w_j = 0$, since the gradient, and hence the denominator of (2.15), vanishes. According to the literature [7], the learning rate is usually chosen within $10^{-3} \le \eta \le 10$. Hence, the optimal learning rate $\eta^+$ of the type-2 FNN is bounded componentwise as
$$\eta^+ = \bigl[\eta^+_{w_j}\ \eta^+_{m_{ij}}\ \eta^+_{\overline{\sigma}_{ij}}\ \eta^+_{\underline{\sigma}_{ij}}\bigr] \triangleq \left[\min\!\left\{5, \left\|\frac{\partial E_c}{\partial w_j}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial m_{ij}}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \overline{\sigma}_{ij}}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \underline{\sigma}_{ij}}\right\|^{-2}\right\}\right]. \tag{4.6}$$
The simulation results are shown in Figure 6.

Case 2. Type-2 FNNC with uncertain mean using $\eta^+$ (optimal learning rate) and $\eta_{\mathrm{fixed}}$. In this case, the type-2 FNN system with uncertain mean is considered. First, the parameters $\mathbf{K}$, $\mathbf{Q}$, $\mathbf{P}$ are chosen as in (4.3). The rule number of the type-2 FNN with uncertain mean is set to eight, and the initial values of the coefficients are chosen as
$$\mathbf{x}(0) = [3\ 3]^T,\qquad m_i = \begin{bmatrix}8 & \tfrac{40}{7} & \tfrac{24}{7} & \tfrac{8}{7} & -\tfrac{8}{7} & -\tfrac{24}{7} & -\tfrac{40}{7} & -8\\[2pt] 8 & \tfrac{40}{7} & \tfrac{24}{7} & \tfrac{8}{7} & -\tfrac{8}{7} & -\tfrac{24}{7} & -\tfrac{40}{7} & -8\end{bmatrix},\qquad \sigma_{ij} = \tfrac{16}{7},$$
$$\underline{w}_j = \overline{w}_j = 0,\qquad \hat{\delta}(0) = 0.01,\qquad \eta_\delta = 0.01. \tag{4.7}$$
The fixed learning rates $\rho_i$, $i = w, m, R, L$, are redefined as
$$\eta_{\mathrm{fixed}}:\quad \eta_i \triangleq \rho_i = 1,\ i = m, R, L;\qquad \eta_w \triangleq \rho_w = 5. \tag{4.8}$$
As discussed above, the optimal learning rate $\eta^+$ of the type-2 FNN with uncertain mean is
$$\eta^+ = \bigl[\eta^+_{\underline{w}_j}\ \eta^+_{\overline{w}_j}\ \eta^+_{\underline{m}_{ij}}\ \eta^+_{\overline{m}_{ij}}\ \eta^+_{\sigma_{ij}}\bigr] \triangleq \left[\min\!\left\{5, \left\|\frac{\partial E_c}{\partial \underline{w}_j}\right\|^{-2}\right\}\ \min\!\left\{5, \left\|\frac{\partial E_c}{\partial \overline{w}_j}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \underline{m}_{ij}}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \overline{m}_{ij}}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \sigma_{ij}}\right\|^{-2}\right\}\right]. \tag{4.9}$$
Simulation results of Case 2 are shown in Figure 7.
From Figures 6 and 7, we observe that the proposed robust adaptive controller with appropriate design parameters achieves tracking control with good performance. Large-magnitude oscillations of state $x_2$ appear in Figures 6(c) and 7(c) when the learning rates of the type-2 FNNC are fixed. In contrast, the optimal learning rate yields better transient and steady-state performance than the fixed learning rates.

Case 3. Type-1 and type-2 FNNC with uncertain variance using $\eta^+$. To compare the simulation results under different controllers, we construct controllers using the type-2 FNN and the type-1 FNN systems. First, the parameters $\mathbf{K}$, $\mathbf{Q}$, $\mathbf{P}$ are chosen as in (4.3). The number of rules for both FNN systems is set to eight, and the initial values of the coefficients are chosen as in Case 1. The optimal learning rate $\eta^+$ of the type-1 FNN is defined as
$$\eta^+ = \bigl[\eta^+_{w_j}\ \eta^+_{m_{ij}}\ \eta^+_{\sigma_{ij}}\bigr] \triangleq \left[\min\!\left\{5, \left\|\frac{\partial E_c}{\partial w_j}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial m_{ij}}\right\|^{-2}\right\}\ \min\!\left\{1, \left\|\frac{\partial E_c}{\partial \sigma_{ij}}\right\|^{-2}\right\}\right]. \tag{4.10}$$
The simulation results and comparison are shown in Figure 8.

Case 4. Type-1 and type-2 FNNC with uncertain mean using the optimal learning rate $\eta^+$. To compare the simulation results under different controllers, we construct controllers using the type-1 FNN and the type-2 FNN with uncertain mean. First, the parameters $\mathbf{K}$, $\mathbf{Q}$, $\mathbf{P}$ are chosen as in (4.3). The rule number of both FNN systems is set to eight, and the initial values of the coefficients are chosen as
$$\mathbf{x}(0) = [3\ 3]^T,\qquad m_i = \begin{bmatrix}8 & \tfrac{40}{7} & \tfrac{24}{7} & \tfrac{8}{7} & -\tfrac{8}{7} & -\tfrac{24}{7} & -\tfrac{40}{7} & -8\\[2pt] 8 & \tfrac{40}{7} & \tfrac{24}{7} & \tfrac{8}{7} & -\tfrac{8}{7} & -\tfrac{24}{7} & -\tfrac{40}{7} & -8\end{bmatrix},\qquad \sigma_{ij} = \tfrac{32}{7},\qquad \underline{w}_j = \overline{w}_j = 0,\qquad \hat{\delta}(0) = 0.01,\qquad \eta_\delta = 0.01. \tag{4.11}$$
The optimal learning rate $\eta^+$ of the type-1 FNN is defined as in (4.10). Simulation results of Case 4 are shown in Figure 9.
From Figures 8 and 9, we find that the type-2 FNNCs have shorter convergence time, smaller tracking error, and better transient response. Moreover, the type-1 FNNC exhibits undershoot and oscillation in Figures 8(b), 8(c), and 9(c).

In addition, a comparison of the numbers of parameters is shown in Table 1, using rule numbers 2, 4, 6, and 8. The type-1 FNNC with 2 rules is unstable, whereas the type-2 FNNC achieves tracking. It can also be observed that the tracking performance improves as the number of adjustable parameters increases for both the type-2 and type-1 FNNCs. Moreover, the type-2 FNNC requires fewer rules to achieve a specified tracking error, owing to its additional adjustable parameters. From Figures 6 and 7 and Table 2, the control performance of the type-2 FNNC with uncertain variance is better than that of the type-2 FNNC with uncertain mean, even though it has fewer parameters.

Finally, a comparison of the compensator magnitudes is shown in Table 2, which lists the chattering magnitude of the compensated controller in steady state. Since the compensator covers the approximation error of the type-2 FNNC, its control effort is proportional to that approximation error. We can therefore conclude that the compensator's control magnitude is inversely related to the rule number.

The control performance of the proposed adaptive control scheme has been demonstrated in the above simulations. Comparing the type-2 FNN controller under optimal and fixed learning rates, a clear difference in the control input is visible, and the optimal learning rate yields better performance. The same observation holds when comparing the type-2 and type-1 FNN controllers.

5. Conclusions

This paper has presented a type-2 FNN system and the corresponding optimal learning algorithm for its applications, thereby extending previous results on the type-1 FNN to the type-2 case. The adaptive type-2 FNN is then employed to achieve the desired control performance. In the adaptive control input, the type-2 FNN controller mimics an ideal control law, with the optimal learning rate derived from a suitable substitute for the system sensitivity, and the compensated controller recovers the residual part of the approximation error. Closed-loop stability is guaranteed by adaptive laws derived via Lyapunov theory, and the optimal learning rate ensures fast parameter convergence. Simulation results have been presented to show the effectiveness of our approach.

Appendix

Proof of Theorem 2.1. Define the Lyapunov candidate function as in (2.10), that is, $V(k) = E(k) = E_C$; thus
$$\Delta V(k) = E(k+1) - E(k) = \frac{1}{2}\bigl[e^2(k+1) - e^2(k)\bigr] = \frac{1}{2}\,\Delta e(k)\bigl[2e(k) + \Delta e(k)\bigr]. \tag{A.1}$$
From (2.12), with $e(k) = R(k) - Y(k)$,
$$\Delta e(k) = \frac{\partial e(k)}{\partial W}\,\Delta W = -Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\left(-\eta(k)\,\frac{\partial E(k)}{\partial W}\right), \tag{A.2}$$
and
$$\frac{\partial E(k)}{\partial W} = \frac{\partial}{\partial W}\left[\frac{1}{2}e^2(k)\right] = e(k)\,\frac{\partial e(k)}{\partial W} = -e(k)\,Y_u\,\frac{\partial \hat{y}(k)}{\partial W}. \tag{A.3}$$
Therefore, (A.2) can be rewritten as
$$\Delta e(k) = -\eta(k)\left\|Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\right\|^2 e(k). \tag{A.4}$$
Substituting (A.4) into (A.1) yields
$$\Delta V(k) = \frac{1}{2}\left(-\eta(k)\left\|Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\right\|^2 e(k)\right)\left(2e(k) - \eta(k)\left\|Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\right\|^2 e(k)\right) = -\frac{1}{2}\,\eta(k)\,e^2(k)\left\|Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\right\|^2\left(2 - \eta(k)\left\|Y_u\,\frac{\partial \hat{y}(k)}{\partial W}\right\|^2\right). \tag{A.5}$$
Hence $\Delta V(k) \le 0$ holds if
$$0 < \eta(k) < \frac{2}{\bigl\|Y_u\,\partial \hat{y}(k)/\partial W\bigr\|^2}, \tag{A.6}$$
since
$$P_{\max} \triangleq \bigl[P_{1,\max}\ P_{2,\max}\ P_{3,\max}\ P_{4,\max}\ P_{5,\max}\ P_{6,\max}\bigr]^T = \left[\max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \underline{m}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \overline{m}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \underline{\sigma}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \overline{\sigma}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \underline{w}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \overline{w}_{ij}}\right|\right]^T. \tag{A.7}$$
Note that we use Gaussian MFs with uncertain variance (as in Figure 2(b)) to build the membership layer; thus $\underline{m}_{ij} = \overline{m}_{ij}$ and $\underline{w}_{ij} = \overline{w}_{ij}$, and (A.7) simplifies to
$$P_{\max} \triangleq \bigl[P_m\ P_2\ P_3\ P_w\bigr]^T = \left[\max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial m_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \overline{\sigma}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial \underline{\sigma}_{ij}}\right|\ \max_{ij}\left|Y_u\frac{\partial \hat{y}(k)}{\partial w_{ij}}\right|\right]^T, \tag{A.8}$$
where
$$P_m = Y_u\frac{\partial \hat{y}(k)}{\partial m_{ij}} = \frac{1}{2}Y_u\sum_{j=1}^{l}\left(\frac{\partial \underline{O}_j^{(3)}}{\partial m_{ij}} + \frac{\partial \overline{O}_j^{(3)}}{\partial m_{ij}}\right)w_j^{(4)} = \frac{1}{2}Y_u\sum_{j=1}^{l}\left(\underline{O}_j^{(3)}\frac{x_i - m_{ij}}{\underline{\sigma}_{ij}^2} + \overline{O}_j^{(3)}\frac{x_i - m_{ij}}{\overline{\sigma}_{ij}^2}\right)w_j^{(4)} \le \frac{1}{2}Y_u\sum_{j=1}^{l}\left(\frac{\max\bigl(x_i - m_{ij}\bigr)}{\underline{\sigma}_{ij}^2} + \frac{\max\bigl(x_i - m_{ij}\bigr)}{\overline{\sigma}_{ij}^2}\right)w_j^{(4)},$$
$$P_2 = Y_u\frac{\partial \hat{y}(k)}{\partial \overline{\sigma}_{ij}} = \frac{1}{2}Y_u\sum_{j=1}^{l}\overline{O}_j^{(3)}\frac{\bigl(x_i - m_{ij}\bigr)^2}{\overline{\sigma}_{ij}^3}\,w_j^{(4)},\qquad
P_3 = Y_u\frac{\partial \hat{y}(k)}{\partial \underline{\sigma}_{ij}} = \frac{1}{2}Y_u\sum_{j=1}^{l}\underline{O}_j^{(3)}\frac{\bigl(x_i - m_{ij}\bigr)^2}{\underline{\sigma}_{ij}^3}\,w_j^{(4)},$$
$$P_w = Y_u\frac{\partial \hat{y}(k)}{\partial w_j^{(4)}} = \frac{1}{2}Y_u\sum_{j=1}^{l}\bigl(\underline{O}_j^{(3)} + \overline{O}_j^{(3)}\bigr). \tag{A.9}$$
In addition, the error recursion can be re-expressed as
$$e(k+1) = e(k) + \Delta e(k) = e(k)\left(1 - \eta_W(k)\left\|Y_u\,\frac{\partial \hat{y}}{\partial W}\right\|^2\right), \tag{A.10}$$
so that
$$\bigl|e(k+1)\bigr| = \bigl|e(k)\bigr|\,\left|1 - \eta_W(k)\left\|Y_u\,\frac{\partial \hat{y}}{\partial W}\right\|^2\right|. \tag{A.11}$$
Thus, the so-called optimal learning rate, which drives the right-hand side of (A.11) to zero, is obtained as [24]
$$\eta_W^+ = \left\|Y_u\,\frac{\partial \hat{y}}{\partial W}\right\|^{-2}. \tag{A.12}$$
In detail, the optimal learning rates of the update laws for the type-2 FNN with uncertain variance are
$$\eta^+_{w_j^{(4)}} = \left\|Y_u\frac{\partial \hat{y}}{\partial w_j^{(4)}}\right\|^{-2} = \left[Y_u\,\frac{\underline{O}_j^{(3)} + \overline{O}_j^{(3)}}{2}\right]^{-2},\qquad
\eta^+_{m_{ij}} = \left\|Y_u\frac{\partial \hat{y}}{\partial m_{ij}}\right\|^{-2} = \left[\frac{Y_u\,w_j^{(4)}}{2}\left(\underline{O}_j^{(3)}\frac{x_i - m_{ij}}{\underline{\sigma}_{ij}^2} + \overline{O}_j^{(3)}\frac{x_i - m_{ij}}{\overline{\sigma}_{ij}^2}\right)\right]^{-2},$$
$$\eta^+_{\overline{\sigma}_{ij}} = \left\|Y_u\frac{\partial \hat{y}}{\partial \overline{\sigma}_{ij}}\right\|^{-2} = \left[\frac{Y_u\,w_j^{(4)}}{2}\,\overline{O}_j^{(3)}\frac{\bigl(x_i - m_{ij}\bigr)^2}{\overline{\sigma}_{ij}^3}\right]^{-2},\qquad
\eta^+_{\underline{\sigma}_{ij}} = \left\|Y_u\frac{\partial \hat{y}}{\partial \underline{\sigma}_{ij}}\right\|^{-2} = \left[\frac{Y_u\,w_j^{(4)}}{2}\,\underline{O}_j^{(3)}\frac{\bigl(x_i - m_{ij}\bigr)^2}{\underline{\sigma}_{ij}^3}\right]^{-2}. \tag{A.13}$$
In the same way, the optimal learning rates of the update laws for the type-2 FNN with uncertain mean are
$$\eta^+_{\overline{w}_j^{(4)}} = \left\|Y_u\frac{\partial \hat{y}}{\partial \overline{w}_j^{(4)}}\right\|^{-2} = \left[Y_u\,\frac{O_j^{(3)}}{2}\right]^{-2},\qquad
\eta^+_{\underline{w}_j^{(4)}} = \left\|Y_u\frac{\partial \hat{y}}{\partial \underline{w}_j^{(4)}}\right\|^{-2} = \left[Y_u\,\frac{O_j^{(3)}}{2}\right]^{-2},$$
where $O_j^{(3)}$ denotes the firing strength ($\underline{O}_j^{(3)}$ or $\overline{O}_j^{(3)}$) selected in (2.8), and
$$\eta^+_{\underline{m}_{ij}} = \left[\frac{Y_u\,\overline{w}_j^{(4)}}{2}\,O_j^{(3)}\frac{x_i - \underline{m}_{ij}}{\sigma_{ij}^2}\right]^{-2},\qquad
\eta^+_{\overline{m}_{ij}} = \left[\frac{Y_u\,\underline{w}_j^{(4)}}{2}\,O_j^{(3)}\frac{x_i - \overline{m}_{ij}}{\sigma_{ij}^2}\right]^{-2}. \tag{A.14}$$
This completes the proof.

Acknowledgments

The authors would like to thank the associate editor and anonymous reviewers for their valuable comments and helpful suggestions. This work was supported by the National Science Council, Taiwan, under Contract NSC-97-2221-E-155-033-MY3.