Research Article: An Adaptive Learning Rate for RBFNN Using Time Domain Feedback Analysis

Abstract: Radial basis function neural networks are used in a variety of applications such as pattern recognition, nonlinear identification, control, and time series prediction. In this paper, the learning algorithm of radial basis function neural networks is analyzed in a feedback structure.

Transcription:

…as opposed to the perceptron structure, where it is not possible to separate them. This in turn helps us to avoid the mean value theorem by using the relation of the a priori estimation error. Another distinguishing feature of our work, in contrast to the work in [11], is that it does not require the calculation of the derivative of the radial basis function for the learning rate adaptation.

The paper is organized as follows. Following the introduction in Section 1, we present an overview of RBFNN in Section 2. Section 3 develops a deterministic framework for the robustness analysis of RBFNN. The feedback structure for lossless mapping is provided in Section 4, and as a result a stability bound is derived in Section 5. In Section 6, an intelligent adaptive rule is presented for the learning rate of RBFNN. Simulation results are presented in Section 7 to validate our theoretical findings. Finally, the concluding remarks are given in Section 8.

2. Radial Basis Function Neural Networks

RBFNN is a type of feedforward neural network. Such networks are used in a wide variety of contexts, such as function approximation, pattern recognition, and time series prediction, and they have the universal approximation property [1]. In these networks the learning involves only one layer, with fewer computations. A multi-input multi-output RBFNN is shown in Figure 1.

[Figure 1: A MIMO RBF neural network, with inputs u_1(t), ..., u_M(t), one hidden layer of basis-function neurons, and weighted connections w_11, w_12, ..., w_P1, ... to the outputs y_1(t), ....]

The RBFNN consists of input nodes, a hidden layer of neurons, and output nodes. Each input node is connected to all the neurons in the hidden layer through unity weights (direct connections), while each hidden layer node is connected to the output nodes through weights; for example, the $p$th output node is connected with all the hidden layer nodes by $\{w_{p1}, w_{p2}, \ldots\}$. Each neuron finds the distance, normally using the Euclidean norm, between the input and its center and passes the resulting scalar through a nonlinearity. So the output of the $i$th hidden neuron is given by

    $\phi_i(t) = \phi(\|u(t) - c_i\|),$   (1)

where $c_i$ is the center of the $i$th hidden layer node and $\phi(\cdot)$ is the nonlinear basis function. Normally this function is taken as a Gaussian function of width $\beta$, which dictates the effective range of input passing through the basis function. The output is a weighted sum of the outputs of the hidden layer, given by

    $y(t) = w^T \phi(t),$   (2)

where the basis functions and the weight vector are defined as

    $\phi(t) = [\phi_1(t), \phi_2(t), \ldots]^T, \quad w = [w_1, w_2, \ldots]^T,$   (3)

and the Gaussian basis function is

    $\phi_i(t) = \exp\!\big(-\|u(t) - c_i\|^2 / \beta_i^2\big).$
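For illustration, the forward pass of (1)-(3) can be realized in a few lines of NumPy. This is a minimal sketch; the names (rbf_hidden, rbf_output, centers, beta) are ours rather than the paper's.

```python
import numpy as np

def rbf_hidden(u, centers, beta):
    # Hidden-layer response: Gaussian of the distance between u(t) and each center c_i, eq. (1).
    dists = np.linalg.norm(centers - u, axis=1)   # ||u(t) - c_i|| for every center
    return np.exp(-dists**2 / beta**2)            # Gaussian basis of width beta

def rbf_output(u, centers, beta, w):
    # Network output: weighted sum of the hidden-layer responses, eq. (2).
    return w @ rbf_hidden(u, centers, beta)

# Example with the Section 7.1 settings: 5 scalar centers spaced at 0.5, width 0.6.
centers = np.arange(-1.0, 1.01, 0.5).reshape(-1, 1)
w = np.zeros(len(centers))
print(rbf_output(np.array([0.3]), centers, 0.6, w))
```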
Consider a collection of input vectors $\{u(t)\}$ with the corresponding desired output vectors $\{d(t)\}$. We also take into account noisy perturbations $v(t)$ in the desired signal. These perturbations can be due to model mismatch or to measurement noise. Assume there exists an optimal weight vector $w^o$ such that

    $d(t) = w^{oT} \phi(t) + v(t).$   (4)

The RBFNN is presented with the given input-output data, and the objective is to estimate the unknown optimal weight $w^o$. Starting with an initial guess $w(0)$, the weights are updated recursively based on the LMS principle as

    $w(t+1) = w(t) + \eta\, e(t)\, \phi(t),$   (5)

where $\eta$ is the learning rate and the error is defined as

    $e(t) = d(t) - w^T(t)\, \phi(t).$   (6)

We define a priori and a posteriori error quantities as

    $e_a(t) = \tilde{w}^T(t)\, \phi(t),$   (7)
    $e_p(t) = \tilde{w}^T(t+1)\, \phi(t),$   (8)

where $\tilde{w}(t)$ is the weight error vector symbolizing the difference between the optimal weight and its estimate, $\tilde{w}(t) = w^o - w(t)$. Thus we can rewrite the error as

    $e(t) = e_a(t) + v(t),$   (9)

and, by combining (5), (7), and (8), the a posteriori error becomes

    $e_p(t) = e_a(t) - \eta\, \|\phi(t)\|^2\, e(t).$   (10)

Consequently, the weight error update equation satisfies the following recursion:

    $\tilde{w}(t+1) = \tilde{w}(t) - \eta\, e(t)\, \phi(t).$   (11)
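In code, one adaptation step of (5)-(6) is a two-line update. The sketch below reuses rbf_hidden from the previous listing; the function name lms_step is our own.

```python
def lms_step(u, d, centers, beta, w, eta):
    # One LMS update of the output weights, eqs. (5)-(6).
    phi = rbf_hidden(u, centers, beta)   # hidden-layer response phi(t)
    e = d - w @ phi                      # error e(t) = d(t) - w^T(t) phi(t)
    return w + eta * e * phi, e          # w(t+1) = w(t) + eta e(t) phi(t)
```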
3. A Deterministic Framework for the Robustness of RBFNN

Robustness of an algorithm is defined as consistency in its estimation error with respect to the disturbances, in the sense that a minor increase in the disturbances leads to only a minor increase in the estimation error, irrespective of the disturbances' nature. In order to study the robustness of RBFNN, we employ a purely deterministic framework, without assuming any prior knowledge of the signal or noise statistics, as was used in [8, 9]. This is especially useful in situations where prior statistical information is missing, since the robust design guarantees a desired level of robustness independent of the noise statistics. In a broad sense, robustness implies that the ratio of the estimation error energy to the noise or disturbance energy is guaranteed to be upper bounded by a positive constant:

    $\frac{\text{estimation error energy}}{\text{disturbance energy}} \le 1.$   (12)

Thus the ratio in (12) gives the assurance that the resulting estimation error energy will be upper bounded by the disturbance energy, regardless of the nature and statistics of the disturbance.

Next, we develop a lossless mapping between the estimation errors while adapting the weights from the $t$th time instant to the $(t+1)$th time instant. A lossless mapping is one that transforms $x$ into $y$ in such a way that $\|y\|^2 \le \|x\|^2$ for all $x$; that is, the output energy does not exceed the input energy. To set the stage for the analysis, we evaluate the energies of both sides of the weight error recursion (11):

    $\|\tilde{w}(t+1)\|^2 = \|\tilde{w}(t)\|^2 - 2\eta\, e(t)\, e_a(t) + \eta^2 \|\phi(t)\|^2 e^2(t).$   (13)

Substituting $e(t) = e_a(t) + v(t)$ from (9) gives

    $\|\tilde{w}(t+1)\|^2 = \|\tilde{w}(t)\|^2 - 2\eta\, e^2(t) + 2\eta\, e(t)\, v(t) + \eta^2 \|\phi(t)\|^2 e^2(t).$   (14)

By rearranging the relevant terms, we finally arrive at

    $\|\tilde{w}(t+1)\|^2 + \eta\, e_a^2(t) = \|\tilde{w}(t)\|^2 + \eta\, v^2(t) - \eta\,\big(1 - \eta \|\phi(t)\|^2\big)\, e^2(t),$   (15)

which holds for all possible choices of the learning rate, and where it is convenient to introduce a new parameter $\bar{\eta}(t)$ defined as

    $\bar{\eta}(t) = \frac{1}{\|\phi(t)\|^2}.$   (16)

Thus it can easily be seen from the mapping (15) that the following three different scenarios exist, depending upon the value of the learning rate:

    $\frac{\|\tilde{w}(t+1)\|^2 + \eta\, e_a^2(t)}{\|\tilde{w}(t)\|^2 + \eta\, v^2(t)} \;\begin{cases} \le 1, & \text{for } 0 < \eta < \bar{\eta}(t), \\ = 1, & \text{for } \eta = \bar{\eta}(t), \\ \ge 1, & \text{for } \eta > \bar{\eta}(t). \end{cases}$   (17)

The first two inequalities in (17) ascertain that if the learning rate is chosen such that $0 < \eta \le \bar{\eta}(t)$, then the mapping from the signals $\{\tilde{w}(t), \sqrt{\eta}\, v(t)\}$ to the signals $\{\tilde{w}(t+1), \sqrt{\eta}\, e_a(t)\}$ is a lossless or contractive mapping. Therefore a local energy bound is deduced that highlights a robustness property of the update recursion: no matter what the value of the noise component $v(t)$ is, and no matter how far the estimate $w(t)$ is from the optimal $w^o$, the sum of energies $\|\tilde{w}(t+1)\|^2 + \eta\, e_a^2(t)$ will always be smaller than or equal to the sum of energies $\|\tilde{w}(t)\|^2 + \eta\, v^2(t)$. Since this contractive property holds for each $t$th instant, it also holds globally over any interval. In fact, summing over the interval $0 \le t \le T$, it follows that

    $\frac{\|\tilde{w}(T+1)\|^2 + \eta \sum_{t=0}^{T} e_a^2(t)}{\|\tilde{w}(0)\|^2 + \eta \sum_{t=0}^{T} v^2(t)} \le 1.$   (18)
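The three scenarios in (17) are easy to verify numerically. The following sketch, with our own variable names and random data standing in for an actual training run, evaluates the energy ratio for a learning rate below, at, and above $\bar{\eta}(t)$:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = rng.normal(size=5)          # hidden-layer response phi(t)
w_err = rng.normal(size=5)        # weight error vector w~(t)
v = rng.normal()                  # disturbance v(t)
e_a = w_err @ phi                 # a priori error, eq. (7)
e = e_a + v                       # eq. (9)
eta_bar = 1.0 / (phi @ phi)       # eq. (16)

for eta in (0.5 * eta_bar, eta_bar, 1.5 * eta_bar):
    w_next = w_err - eta * e * phi                 # weight error recursion, eq. (11)
    ratio = (w_next @ w_next + eta * e_a**2) / (w_err @ w_err + eta * v**2)
    print(f"eta = {eta / eta_bar:.1f} * eta_bar -> energy ratio {ratio:.4f}")
```

The printed ratios come out at most 1, exactly 1, and at least 1, respectively, matching (17).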
4. Feedback Structure for Lossless Mapping

In this section a feedback structure is established that explains a lossless mapping between the estimation errors $e_a(t)$ and $e_p(t)$. To do so, we first reformulate the a posteriori error defined in (10) in terms of the parameter $\bar{\eta}(t)$ as follows:

    $e_p(t) = e_a(t) - \frac{\eta}{\bar{\eta}(t)}\, e(t).$   (19)

Hence the weight error recursion in (11) takes the following form:

    $\tilde{w}(t+1) = \tilde{w}(t) - \frac{\phi(t)}{\|\phi(t)\|^2}\, \big(e_a(t) - e_p(t)\big).$   (20)

The evaluation of the energies of both sides of the above equation leads to a form similar to (15) but with equality, showing a lossless mapping between the estimation errors:

    $\|\tilde{w}(t+1)\|^2 + \bar{\eta}(t)\, e_a^2(t) = \|\tilde{w}(t)\|^2 + \bar{\eta}(t)\, e_p^2(t),$   (21)

which holds for all possible choices of the learning rate. This implies that the mapping from the signals $\{\tilde{w}(t), \sqrt{\bar{\eta}(t)}\, e_p(t)\}$ to the signals $\{\tilde{w}(t+1), \sqrt{\bar{\eta}(t)}\, e_a(t)\}$ is lossless. Next, by employing the relations (9) and (19), the a posteriori error can be expressed as

    $e_p(t) = \Big(1 - \frac{\eta}{\bar{\eta}(t)}\Big)\, e_a(t) - \frac{\eta}{\bar{\eta}(t)}\, v(t).$   (22)

This relation shows that the overall mapping from the original weighted disturbances $\sqrt{\bar{\eta}(t)}\, v(t)$ to the resulting a priori weighted estimation errors $\sqrt{\bar{\eta}(t)}\, e_a(t)$ can be expressed in terms of a feedback structure, as shown in Figure 2.

[Figure 2: A lossless mapping in a feedback structure for the RBFNN learning algorithm: a lossless feedforward block driven by the weighted disturbance, with a feedback loop of gain $1 - \eta(t)/\bar{\eta}(t)$.]
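The equality (21), unlike the inequality (15), holds for every learning rate, which is what makes the feedforward path lossless. A quick numerical check, again with our own names and random data:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, w_err = rng.normal(size=5), rng.normal(size=5)
v, eta = rng.normal(), 0.2            # any eta works; (21) is an identity

eta_bar = 1.0 / (phi @ phi)           # eq. (16)
e_a = w_err @ phi                     # eq. (7)
e = e_a + v                           # eq. (9)
e_p = e_a - (eta / eta_bar) * e       # eq. (19)
w_next = w_err - eta * e * phi        # eq. (11)

lhs = w_next @ w_next + eta_bar * e_a**2
rhs = w_err @ w_err + eta_bar * e_p**2
print(np.isclose(lhs, rhs))           # True: the energies balance exactly, eq. (21)
```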
5. Stability Bound via Small Gain Theorem

The stability of structures of the form (22) can be studied via well-known tools such as the small gain theorem [12]. Thus, conditions on the learning rate will be derived in order to guarantee a robust training algorithm as well as faster convergence speed. This will be achieved by establishing conditions under which the feedback configuration is ℓ2-stable, in the sense that it should map a finite-energy input noise sequence (which includes the noiseless case $v(t) = 0$ as a special case) to a finite-energy a priori error sequence.

The small gain theorem for our scenario can be stated as

    $\Delta(T) = \max_{0 \le t \le T} \Big| 1 - \frac{\eta(t)}{\bar{\eta}(t)} \Big| < 1.$   (23)

According to the above definition, $\Delta(T)$ is the maximum absolute gain of the feedback loop over the interval $0 \le t \le T$. The small gain theorem states that the ℓ2 stability of a feedback configuration such as the one in Figure 2 requires that the product of the norms of the feedforward and feedback maps be strictly bounded by one [8, 9, 12]. In our case the norm of the feedforward map is equal to one, since the map is lossless, while the norm of the feedback map is $\Delta(T)$, as defined in (23). Hence the condition $\Delta(T) < 1$ guarantees an overall contractive map. Therefore, for $\Delta(T) < 1$ to hold, we need to choose the learning rate such that

    $0 < \eta(t) < \frac{2}{\|\phi(t)\|^2} \quad \text{for all } t.$   (24)
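The condition in (23)-(24) is straightforward to monitor during training. The helper below is our own sketch; it checks the condition for a recorded sequence of learning rates and hidden-layer responses:

```python
import numpy as np

def satisfies_small_gain(etas, phis):
    # Small-gain condition (23): max_t |1 - eta(t)/eta_bar(t)| < 1, which for
    # positive eta(t) is equivalent to eta(t) < 2 / ||phi(t)||^2, eq. (24).
    etas = np.asarray(etas, dtype=float)
    phis = np.asarray(phis, dtype=float)
    eta_bars = 1.0 / np.sum(phis**2, axis=1)   # eta_bar(t) = 1 / ||phi(t)||^2 per step
    return bool(np.all(etas > 0.0) and np.max(np.abs(1.0 - etas / eta_bars)) < 1.0)
```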
6. Designing the Adaptive Learning Rate

In this section we propose an adaptive mechanism to update the learning rate such that it gives faster convergence while guaranteeing the ℓ2 stability discussed in the previous section. We propose an adaptive mechanism similar to the one in [13], according to which the learning rate should be adapted via an estimate of the error correlation; in addition, we upper bound the maximum value of the learning rate by employing the stability bound derived in (24), to assure ℓ2 stability. To do so, we propose the following adaptive rule [13]:

    $p(t) = \beta\, p(t-1) + (1 - \beta)\, e(t)\, e(t-1),$   (25)

    $\eta(t+1) = \begin{cases} \eta_{\max}(t), & \text{if } \alpha\, \eta(t) + \gamma\, p^2(t) > \eta_{\max}(t), \\ \alpha\, \eta(t) + \gamma\, p^2(t), & \text{otherwise}, \end{cases}$   (26)

with $0 < \alpha < 1$, and where the parameter $\eta_{\max}(t)$ is chosen such that it ensures the ℓ2 stability given in (23). Thus $\eta_{\max}(t)$ is given by

    $\eta_{\max}(t) = \frac{2}{\|\phi(t)\|^2}.$   (27)

The parameter $\beta$ is a positive quantity that controls the dependency of the error-correlation estimate $p(t)$ on its own past value and lies in the range $0 < \beta < 1$; usually we choose a value closer to 1, e.g., 0.97, while the constant $\gamma$ is a very small number. The adaptation rule given by (25) and (26) suggests that the learning rate is large in the initial stage of adaptation, due to the larger error correlation, and that it decreases near steady state, as the error correlation of the algorithm also decreases once the algorithm approaches the steady state. Thus, adjusting the learning rate online according to (25)-(26) gives faster convergence, since it allows faster adaptation of $\eta$ via the error-correlation estimate entering through the term $p^2(t)$. On the other hand, the upper bounding in (26) via the stability limit in (24), that is, $\eta(t) < 2/\|\phi(t)\|^2$, guarantees the stability of the feedback structure. Thus the rule promises both faster convergence and a stable response.
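Putting (25)-(27) together gives a compact update. In the minimal sketch below, the default values of alpha, beta_c, and gamma are illustrative choices of ours, not values prescribed by the paper (beta_c is the correlation-smoothing factor β, renamed to avoid clashing with the basis width):

```python
def adaptive_eta(eta, p, e, e_prev, phi, alpha=0.97, beta_c=0.97, gamma=1e-3):
    # One update of the adaptive learning rate, eqs. (25)-(27).
    p = beta_c * p + (1.0 - beta_c) * e * e_prev   # error-correlation estimate, eq. (25)
    eta_new = alpha * eta + gamma * p**2           # tentative learning rate
    eta_max = 2.0 / (phi @ phi)                    # stability limit, eq. (27)
    return min(eta_new, eta_max), p                # clipping step of eq. (26)
```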
7. Simulation Results

The proposed adaptive learning rate is verified using various simulations for nonlinear identification and tracking control. In all cases, the simulation is first performed for fixed learning rates. The fixed learning rates are set after several trials; however, these trials are not required when using the proposed adaptive learning rate given by (25), (26), and (27). A comparison between the different fixed learning rates and the adaptive learning rate is shown for each example, along with the identification, tracking, and learning rate trends.

7.1. Identification of Nonlinear Control Valve

In this simulation example, the proposed adaptive learning rate is used in the identification of a model that describes a valve for the control of fluid flow, described in [14] as

    $y(t) = \frac{u(t)}{\sqrt{0.10 + 0.90\, u^2(t)}}.$   (28)

The model is identified using an RBFNN with 5 centers spaced at 0.5; the width of the centers is set to 0.6. An output additive noise of 30 dB SNR is considered in this example. Learning rates of 0.01, 0.03, 0.06, and 0.08 are used for the fixed learning rate case; the algorithm became unstable for values near or greater than 0.08. The mean square errors (MSE) for the fixed and adaptive learning rates are shown in Figure 3; the lowest MSE, achieved using the adaptive learning rate, demonstrates the performance of the proposed approach. The actual and identified control valve outputs are shown in Figure 4, and the learning rate trend can be seen in Figure 5.

[Figure 3: MSE for the identification of the control valve using fixed and adaptive learning rates. The fixed learning rates are 0.01, 0.03, 0.06, and 0.08.]

[Figure 4: Actual and identified control valve using the proposed adaptive learning rate.]

[Figure 5: Learning rate trend for the identification of the control valve using the proposed adaptive learning rate.]
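The pieces above assemble into a short end-to-end sketch of this experiment. The excitation signal and the noise level below are our assumptions, chosen only to make the script self-contained, and adaptive_eta is the function defined in the previous listing:

```python
import numpy as np

rng = np.random.default_rng(42)
centers = np.arange(-1.0, 1.01, 0.5).reshape(-1, 1)   # 5 centers spaced at 0.5
width = 0.6                                           # basis width from Section 7.1
w = np.zeros(len(centers))
eta, p, e_prev = 0.01, 0.0, 0.0

for t in range(100):
    u = rng.uniform(-2.0, 2.0)                        # assumed excitation
    d = u / np.sqrt(0.10 + 0.90 * u**2)               # control valve model, eq. (28)
    d += rng.normal(scale=0.03)                       # assumed additive output noise
    phi = np.exp(-(centers[:, 0] - u)**2 / width**2)  # Gaussian hidden layer
    e = d - w @ phi                                   # error, eq. (6)
    w = w + eta * e * phi                             # LMS update, eq. (5)
    eta, p = adaptive_eta(eta, p, e, e_prev, phi)     # adaptive rate, eqs. (25)-(27)
    e_prev = e

print("final learning rate:", eta)
```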
7.2. Identification of Nonlinearity in Hammerstein Model

The proposed bound on the adaptive learning rate is used in the identification of the static nonlinearity in the nonlinear Hammerstein model defined in [15]; this Hammerstein model has been identified using an RBFNN in [16]. The Hammerstein model used for the simulation represents a nonlinear heat exchanger in cascade with linear dynamics. The static nonlinearity and the linear dynamics are given by [15]

    $x(t) = 31.549\, u(t) + 41.732\, u^2(t) - 24.201\, u^3(t),$   (29)

    $y(t) = 0.4\, y(t-1) + 0.35\, y(t-2) + 0.15\, x(t) + v(t),$   (30)

where $u(t)$ is the input to the system, $x(t)$ is the intermediate variable, $y(t)$ is the system output, and $v(t)$ is the additive noise at the output. The actual and identified heat exchangers using the proposed approach are shown in Figure 7. The nonlinearity …

[Figure 6: Mean square error using fixed and the proposed adaptive learning rates.]

[Figure 8: Learning rate trend for the identification of the Hammerstein model using the proposed adaptive learning rate.]
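For completeness, data generation for this experiment can be sketched as follows. Note that the signs and lag structure in (29)-(30) are our reconstruction of a garbled source, so the coefficients should be checked against [15] before reuse; the excitation and noise level are likewise assumptions.

```python
import numpy as np

def hammerstein(u_seq, rng, noise_scale=0.1):
    # Input-output data from the Hammerstein cascade of eqs. (29)-(30).
    y = np.zeros(len(u_seq) + 2)
    for t, u in enumerate(u_seq, start=2):
        x = 31.549 * u + 41.732 * u**2 - 24.201 * u**3   # static nonlinearity, eq. (29)
        y[t] = 0.4 * y[t-1] + 0.35 * y[t-2] + 0.15 * x   # linear dynamics, eq. (30)
        y[t] += rng.normal(scale=noise_scale)            # additive output noise v(t)
    return y[2:]

u = np.random.default_rng(7).uniform(0.0, 1.0, size=200)  # assumed excitation
y = hammerstein(u, np.random.default_rng(8))
```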
