
# 09 MRAS Gradient Method: 0th, 1st, 2nd Order Systems - SYS635 Adaptive Control Systems


SYS635 Adaptive Control Systems, Ka C. Cheok. MRAS Gradient Method, 0th/1st/2nd Order Systems, 22 Oct '05.

## Model Reference Adaptive System (MRAS)

**MRAS objective:** Based on the information $y_m$, $y$, $u$, and $u_c$, devise a controller that automatically adjusts itself so that the closed-loop plant output $y$ closely follows the reference-model output $y_m$. In other words, make $y$ mimic $y_m$.

*Figure: block diagram of an MRAS. The command $u_c$ feeds both a reference model (in a computer, or a physical system) producing $y_m$, and a controller producing the control $u$ for the physical plant, whose output is $y$. An adjustment mechanism compares $y$ with $y_m$ and tunes the controller.*

**Illustration:** Can you think of an example of such a system? A dance choreographer? A coach?

## 5.2 The MIT Rule (read Chap. 5.2)

An example of a cost function is

$$J(\theta) = \tfrac{1}{2}\,\varepsilon^2(\theta), \quad \text{where } \varepsilon \text{ is a function of } \theta.$$

The MIT Rule says that the time rate of change of $\theta$ is proportional to the negative gradient of $J$ w.r.t. $\theta$. That is,

$$\frac{d\theta}{dt} = -\gamma\,\frac{dJ}{d\theta}$$

Note that $\theta$ may be a vector, although it is drawn here as a scalar. Integrating yields the continuous-time equation for updating $\theta$:

$$\theta(t) = \theta(0) + \int_0^t \frac{d\theta}{dt}\,dt = \theta(0) - \gamma \int_0^t \frac{dJ}{d\theta}\,dt$$

Since we can approximate $dt$ by a small increment $\Delta t$, this leads to the Delta Rule, which says that the increment (delta) in $\theta$ is approximately

$$\Delta\theta \approx -\gamma\,\frac{dJ}{d\theta}\,\Delta t$$

and the discrete-time equation for updating $\theta$ is

$$\theta_{\text{new}} = \theta_{\text{old}} - \gamma\,\frac{dJ}{d\theta}\,\Delta t$$

*Figure: a person wading in a river, looking for the deepest spot below the water surface. Should he/she go forward or backward? What logic does he/she use? Step forward where $dJ/d\theta < 0$ and backward where $dJ/d\theta > 0$.*
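The discrete-time Delta Rule update can be sketched numerically. The snippet below is a minimal illustration, not from the notes: it assumes a simple quadratic cost $J(\theta) = \tfrac{1}{2}(\theta - \theta^*)^2$ with a hypothetical target value `theta_star`, so that $dJ/d\theta = \theta - \theta^*$.

```python
# Sketch of the Delta Rule: theta_new = theta_old - gamma * (dJ/dtheta) * dt.
# The quadratic cost J = 0.5*(theta - theta_star)**2 is an illustrative assumption.

def delta_rule(theta0, dJ_dtheta, gamma=0.5, dt=0.1, steps=200):
    """Iterate the discrete-time MIT-rule update and return the final theta."""
    theta = theta0
    for _ in range(steps):
        theta = theta - gamma * dJ_dtheta(theta) * dt
    return theta

theta_star = 2.0                    # hypothetical minimizer of J
dJ = lambda th: th - theta_star     # dJ/dtheta for J = 0.5*(theta - theta_star)**2
print(delta_rule(0.0, dJ))          # drifts toward theta_star
```

Note that $\gamma\,\Delta t$ acts as a single effective step size; too large a product makes the iteration overshoot or diverge, which is the discrete analogue of choosing the adaptation gain $\gamma$ too aggressively.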

### An important property of the sigmoid function

The sigmoid function is

$$y = \frac{1}{1+e^{-h}}$$

Its derivative w.r.t. $h$ is

$$\frac{\partial y}{\partial h} = \frac{\partial}{\partial h}\left(\frac{1}{1+e^{-h}}\right) = \frac{e^{-h}}{(1+e^{-h})^2} = \frac{1}{1+e^{-h}}\cdot\frac{e^{-h}}{1+e^{-h}} = y\,(1-y)$$

since $1 - y = e^{-h}/(1+e^{-h})$.

### Application of the MIT Rule to a single neuron model

Suppose we are shown an input-output pattern or phenomenon in the form of a $y_m$ versus $x$ curve. In the following example, it so happens that the pattern behaves as shown in the figure.

*Figure: the given pattern, a curve of $y_m$ versus $x$.*

We introduce a neuron model whose mathematical description tends to produce an I/O relationship similar to the given pattern. That single-neuron model is described by the sigmoidal function

$$y = \frac{1}{1+e^{-h}}, \qquad h = w\,x + b$$

where $w$ is called a weight and $b$ is a bias. The error between the pattern and the neuron model is given by

$$\varepsilon = y_m - y = y_m - \frac{1}{1+e^{-(wx+b)}}$$

We would like to find $w$ and $b$ so that $y$ follows $y_m$. A way to do this is to find the $w$ and $b$ that minimize the error cost function

$$J(w,b) = \tfrac{1}{2}\,\varepsilon^2$$

The gradients of $J$ w.r.t. $w$ and $b$ are:

$$\frac{\partial J}{\partial w} = \varepsilon\,\frac{\partial \varepsilon}{\partial w} = -\varepsilon\,\frac{\partial y}{\partial h}\frac{\partial h}{\partial w} = -\varepsilon\,y(1-y)\,x$$

$$\frac{\partial J}{\partial b} = \varepsilon\,\frac{\partial \varepsilon}{\partial b} = -\varepsilon\,\frac{\partial y}{\partial h}\frac{\partial h}{\partial b} = -\varepsilon\,y(1-y)$$
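These gradient expressions can be exercised in a short numerical sketch. The training pattern below is an assumption for illustration: samples of a "true" sigmoid with `w_true = 2.0` and `b_true = -1.0` (my choice, not from the notes), with the Delta-Rule step $\gamma\,\Delta t$ folded into a single gain `gamma`.

```python
import math

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

# Hypothetical pattern: samples of a "true" sigmoid (illustrative parameter choice).
w_true, b_true = 2.0, -1.0
xs = [i * 0.25 - 3.0 for i in range(25)]            # x in [-3, 3]
ym = [sigmoid(w_true * x + b_true) for x in xs]     # target pattern y_m versus x

# MIT-rule (Delta Rule) training of the single-neuron model y = sigmoid(w*x + b).
w, b, gamma = 0.0, 0.0, 0.5     # gamma absorbs the Delta-Rule factor dt
for _ in range(10000):
    for x, y_target in zip(xs, ym):
        y = sigmoid(w * x + b)
        eps = y_target - y                  # eps = y_m - y
        dJ_dw = -eps * y * (1.0 - y) * x    # dJ/dw = -eps * y*(1-y) * x
        dJ_db = -eps * y * (1.0 - y)        # dJ/db = -eps * y*(1-y)
        w -= gamma * dJ_dw                  # w_new = w_old - gamma * dJ/dw
        b -= gamma * dJ_db                  # b_new = b_old - gamma * dJ/db

print(round(w, 2), round(b, 2))  # approaches (w_true, b_true)
```

Since the model can represent the assumed pattern exactly, the updates drive $\varepsilon$ toward zero and $(w, b)$ toward the generating parameters; the same loop with a real measured $y_m$ curve would instead settle at the best sigmoidal fit.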

## This note was uploaded on 04/17/2011 for the course SYS 635 taught by Professor Re during the Spring '11 term at Albany College of Pharmacy and Health Sciences.


