Chapter 1

Linear Dynamical Systems
1.1 System classifications and descriptions

A system is a collection of elements that interacts with its environment via a set of input variables u and output variables y. Systems can be classified in different ways.

Continuous time versus discrete time
A continuous-time system evolves with time indices t ∈ ℝ, whereas a discrete-time system evolves with time indices t ∈ ℤ = {…, −2, −1, 0, 1, 2, …}. Usually the symbol k, instead of t, is used to denote discrete time indices. Examples of continuous-time systems are physical systems such as pendulums, servomechanisms, etc. An example of a discrete-time system is a mutual fund whose valuation is done once a day. An important class of systems is the class of sampled-data systems. The use of modern digital computers in data processing and computation means that data from a continuous-time system is "sampled" at regular time intervals. Thus, a continuous-time system appears as a discrete-time system to the controller (computer) due to the sampling of the system outputs. If intersample behavior is essential, a sampled-data system should be analyzed as a continuous-time system.

Static versus dynamic
A system is static if its output depends only on its present input, i.e. there exists a function f(u, t) such that for all t ∈ T,

    y(t) = f(u(t), t).    (1.1)

An example of a static system is a simple light switch, in which the switch position determines whether the light is on or not. A static time-invariant system is one with y(t) = f(u(t)) for all t. To determine the output of a static system at any time t, only the input value at t is needed; again, a light switch is a static time-invariant system. A static time-varying system is one with time-varying parameters, such as external disturbance signals. An example is a flow control valve (Fig. 1.1), whose output flow rate Q is given by

    Q = uA √(2Pₛ(t)/ρ)    (1.2)

where u ∈ [0, 1] is the input, A is the orifice area, Pₛ is the supply pressure, and ρ is the fluid density. Here Pₛ is a time-varying parameter that affects the static output Q.

Figure 1.1: Flow control valve: a static time-invariant system
Figure 1.2: Flow control valve: a dynamic time-invariant system

In contrast, a (causal) dynamic system requires past inputs to determine the system output, i.e. to determine y(t) one needs to know u(τ), τ ∈ (−∞, t]. An example of a dynamic time-invariant system is the flow control valve shown in Fig. 1.2. The fluid pressure Pₛ is constant; however, the flow rate history is a function of the force F(t) acting on the valve. It is necessary to know the time history of the forcing function F(t) in order to determine the flow rate at any time. The position x(t) of the valve is governed by the differential equation

    ẍ = F(t) − bẋ − kx    (1.3)

where k is the spring constant and b is the damping factor. For a circular pipe of radius R, the flow rate is then given by:

    Q = 2A (x²/R) √(2Pₛ/ρ).    (1.4)

© Perry Y. Li
University of Minnesota ME 8281: Advanced Control Systems Design, 2001-2012
Figure 1.3: Orbiting pendulum: an example of a dynamic time-varying system

A dynamic time-varying system is shown in Fig. 1.3. Here a pendulum of length l and mass m orbits around the earth in an elliptical path. The gravitational acceleration g on the pendulum is a function of the distance from the center of the earth, which in turn is a function of time, r(t):

    g(t) = G·M_earth / r²(t)

where G is the universal gravitational constant. Hence, the frequency of the oscillations executed by the pendulum is also a function of time:

    ω(t) = √(g(t)/l).

As another example, consider a bank account as the system. Let the streams of deposits and withdrawals be the inputs to the bank account and the balance be the output. It is a dynamic system because knowing the deposits and withdrawals today is not enough to know the bank balance today: one needs to know all the past deposits and withdrawals. Alternatively, one can know the so-called state at one time.

Time-varying versus time-invariant systems
The examples above illustrate that a system can be static and time-varying, dynamic and time-invariant, static and time-invariant, or dynamic and time-varying. In some sense, if the laws of physics are considered to be fixed and time-invariant, and if sufficient details of the system are modeled, all systems are time-invariant, since any time variation (including the input function) is due to the dynamics of a larger system.

1.2 State Determined Dynamical Systems

The state of a dynamic system at time t0, x(t0), is the extra piece of information needed so that, given the input trajectory u(τ), τ ≥ t0, one is able to determine the behavior of the system for all times t ≥ t0. The behaviors are usually captured by defining appropriate outputs y(t). Note that information about the input before t0 is not necessary.

The state is not unique: two different pieces of information can both be valid states of the system. What constitutes a state depends on what behaviors are of interest. Some authors require a state to be a minimal piece of information. In these notes, we do not require this to be so.

Example: Consider a car with input u(t) being its acceleration. Let y(t) be the position of the car.

1. If the behavior of interest is just the speed of the car, then x(t) = ẏ(t) can be used as the state. It qualifies as a state because, given u(τ), τ ∈ [t0, t], the speed at t is obtained by:

    v(t) = ẏ(t) = x(t0) + ∫_{t0}^{t} u(τ) dτ.
2. If the behavior of interest is the position of the car, then x_a(t) = [y(t); ẏ(t)] can be used as the state.

3. An alternative state might be x_b(t) = [y(t) + 2ẏ(t); ẏ(t)]. Obviously, since we can determine the old state vector x_a(t) from this alternative one x_b(t), and vice versa, both are valid state vectors. This illustrates the point that state vectors are not unique.

Remark 1.2.1

1. If y(t) is defined to be the behavior of interest, then by taking t = t0, the definition of a state determined system implies that one can determine y(t) from the state x(t) and input u(t) at time t, i.e. there is a static output function h(·, ·, ·) so that the output y(t) is given by:

    y(t) = h(x(t), u(t), t).

The map h : (x, u, t) ↦ y(t) is called the output readout map.

2. The usual representation of a continuous-time dynamical system is of the form:

    ẋ = f(x, u, t)
    y = h(x, u, t)

and for a discrete-time system,

    x(k+1) = f(x(k), u(k), k)
    y(k) = h(x(k), u(k), k).

3. Notice that a state determined dynamic system defines, for every pair of initial and final times t0 and t1, a mapping (or transformation) of the initial state x(t0) = x0 and input trajectory u(τ), τ ∈ [t0, t1], to the state at time t1, x(t1). In these notes, we shall use the notation

    s(t1, t0, x0, u(·))

to denote this state transition mapping,¹ i.e. x(t1) = s(t1, t0, x0, u(·)) if the initial state at time t0 is x0 and the input trajectory is given by u(·).
¹ In class (Spring 2008), we might have used the notation s(x0, u(·), t0, t1). As long as you are consistent, either way is okay. Please be careful.
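The car example above can be checked numerically: the speed at t1 is determined by the state at t0 and the input on [t0, t1] alone. A minimal sketch using forward-Euler integration (the particular input u and times are illustrative assumptions, not from the notes):

```python
import numpy as np

def simulate_speed(x0, u, t0, t1, n=1000):
    """Euler approximation of v(t1) = x(t0) + integral of u over [t0, t1]."""
    ts = np.linspace(t0, t1, n)
    dt = ts[1] - ts[0]
    v = x0
    for t in ts[:-1]:
        v += u(t) * dt          # accumulate acceleration
    return v

# Only the state x(t0) = 1.0 and the input on [0, 2] matter;
# whatever inputs produced that state before t0 are irrelevant.
u = lambda t: np.sin(t)
v_end = simulate_speed(1.0, u, 0.0, 2.0)
```

For this input the exact value is v(2) = 1 + (1 − cos 2), which the Euler sum approaches as the step size shrinks.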
Figure 1.4: Toggle switch: a discrete system
Figure 1.5: Semigroup property: x1 = s(t1, t0, x0, u) with u(τ), τ ∈ [t0, t1]; x2 = s(t2, t1, x1, u) with u(τ), τ ∈ [t1, t2]; and directly x2 = s(t2, t0, x0, u) with u(τ), τ ∈ [t0, t2].

4. Sometimes one encounters the term discrete system. Precisely speaking, this means that the state variables can take on discrete values (e.g. x ∈ {on, off, 0, 1, 2}) as opposed to continuous values (e.g. x ∈ ℝ). If the state consists of both discrete and continuous variables, the system is called a hybrid system. A toggle switch (Fig. 1.4) is an example of a discrete (on/off) system.

A state transition map must satisfy two important properties:

State transition property: For any t0 ≤ t1, if two input signals u1(·) and u2(·) are such that u1(t) = u2(t) for all t ∈ [t0, t1], then

    s(t1, t0, x0, u1(·)) = s(t1, t0, x0, u2(·)),

i.e. once the initial state x(t0) = x0 is specified, the final state x(t1) depends only on inputs that occur after t0 (and up to t1). Systems like this are called causal because the state does not depend on future inputs.

Semigroup property (Fig. 1.5): For all t2 ≥ t1 ≥ t0 ∈ T, for all x0, and for all u(·),

    s(t2, t1, x(t1), u) = s(t2, t1, s(t1, t0, x0, u), u) = s(t2, t0, x0, u).

Thus, when calculating the state at time t2, we can first calculate the state at some intermediate time t1, and then use this result to calculate the state at time t2 in terms of x(t1) and u(t) for t ∈ [t1, t2].

Example: Consider a system represented as:

    ẋ = f(x, u, t)
    y = h(x, u, t)

with initial condition x0 and control u(τ), τ ∈ [t0, t1]. Then the state transition map is given by:

    x(t1) = s(t1, t0, x0, u(·)) = x0 + ∫_{t0}^{t1} f(x(τ), u(τ), τ) dτ.    (1.5)
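The semigroup property can be checked numerically. A minimal sketch using forward-Euler integration of a simple scalar system (the system ẋ = −x + u and the input are illustrative assumptions):

```python
import numpy as np

def s(t1, t0, x0, u, n=2000):
    """Euler approximation of the state transition map for x' = -x + u(t)."""
    ts = np.linspace(t0, t1, n)
    dt = ts[1] - ts[0]
    x = x0
    for t in ts[:-1]:
        x += (-x + u(t)) * dt
    return x

u = lambda t: np.cos(3 * t)
x0 = 1.0
direct = s(2.0, 0.0, x0, u)                      # s(t2, t0, x0, u)
via_mid = s(2.0, 1.0, s(1.0, 0.0, x0, u), u)    # s(t2, t1, s(t1, t0, x0, u), u)
# The two agree up to integration error: the semigroup property.
```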
1.3 Jacobian linearization

A continuous-time physical dynamic system is typically nonlinear and is represented by:

    ẋ = f(x, u, t)    (1.6)
    y = h(x, u, t)    (1.7)

One can find an approximate linear system that represents the behavior of the nonlinear system as it deviates from some nominal trajectory.

1. Find a nominal input / trajectory. One needs to find the nominal input ū(·) and nominal state trajectory x̄(t) such that (1.6) (and (1.7)) are satisfied. Typically, a nominal input (or output) is given; then the nominal state and the nominal output (or input) can be determined by solving, for all t,

    dx̄(t)/dt = f(x̄(t), ū(t), t),    ȳ(t) = h(x̄(t), ū(t), t).

2. Define perturbations. Let δu := u − ū, δx := x − x̄, δy := y − ȳ. Then

    δẋ(t) = ẋ(t) − dx̄(t)/dt = f(x̄(t) + δx(t), ū(t) + δu(t), t) − f(x̄(t), ū(t), t).

3. Expand the RHS in a Taylor series and truncate at the first-order term:

    δẋ(t) = f(x̄(t) + δx(t), ū(t) + δu(t), t) − f(x̄(t), ū(t), t)
          ≈ (∂f/∂x)(x̄(t), ū(t), t) δx(t) + (∂f/∂u)(x̄(t), ū(t), t) δu(t)
          = A(t) δx(t) + B(t) δu(t).

Similarly,

    δy(t) ≈ (∂h/∂x)(x̄(t), ū(t), t) δx(t) + (∂h/∂u)(x̄(t), ū(t), t) δu(t)
          = C(t) δx(t) + D(t) δu(t).

Notice that if x ∈ ℝⁿ, u ∈ ℝᵐ, y ∈ ℝᵖ, then A(t) ∈ ℝⁿˣⁿ, B(t) ∈ ℝⁿˣᵐ, C(t) ∈ ℝᵖˣⁿ and D(t) ∈ ℝᵖˣᵐ. Be careful that, to obtain the actual input, state, and output, one needs to recombine with the nominals:

    u(t) = ū(t) + δu(t),    x(t) = x̄(t) + δx(t),    y(t) = ȳ(t) + δy(t).

A similar procedure can be applied to a nonlinear discrete-time system

    x(k+1) = f(x(k), u(k), k)
    y(k) = h(x(k), u(k), k)

about a nominal input/state/output satisfying x̄(k+1) = f(x̄(k), ū(k), k), ȳ(k) = h(x̄(k), ū(k), k), to obtain the linear approximation

    δx(k+1) = A(k) δx(k) + B(k) δu(k)
    δy(k) = C(k) δx(k) + D(k) δu(k)

with u(k) = ū(k) + δu(k), x(k) = x̄(k) + δx(k), y(k) = ȳ(k) + δy(k).

Figure 1.6: Example for Jacobian linearization: pendulum

Example: Consider the pendulum shown in Fig. 1.6.
The differential equation describing the angular displacement θ is nonlinear:

    (Unforced)  θ̈ = −(b/l) θ̇ − (g/l) sin θ    (1.8)
    (Forced)    θ̈ = sin ωt − (b/l) θ̇ − (g/l) sin θ    (1.9)

where b = 0.5 N·s/m is the damping factor, l = 1 m is the pendulum length, g = 9.81 m/s² is the gravitational acceleration, and θ̇, θ̈ are the angular velocity and acceleration of the pendulum. The pendulum is given an initial angular displacement of 20°. The force acting on the pendulum is a sinusoidal function of frequency ω. The responses of the nonlinear system are shown in Figs. 1.7-1.8.

Unforced pendulum: Choose the state x = [θ  θ̇]ᵀ. Following the procedure for linearization (about θ̄ = 0), we obtain the linear system:

    ẋ = Ax,    A = [0, 1; −g/l, −b/l]    (1.10)

Figure 1.7: (left) Nonlinear, unforced; (right) Nonlinear, forced, ω = 1 rad/s
Figure 1.8: (left) Nonlinear, forced, ω = √(g/l) rad/s (resonance); (right) Nonlinear, forced, ω = 15 rad/s
Figure 1.9: (left) Linear, unforced; (right) Linear, forced, ω = 1 rad/s
Figure 1.10: (left) Linear, forced, ω = √(g/l) rad/s (resonance); (right) Linear, forced, ω = 15 rad/s

Forced pendulum: The state is the same as before; however, we need to include the input:

    ẋ = Ax + Bu    (1.11)

where B = [0  1]ᵀ and u = sin ωt. The response of the linearized system is shown in Figs. 1.9-1.10. Comparison of the responses shows that for small perturbations (θ ≈ 20°), the linearized system is a good approximation to the nonlinear system.
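The unforced comparison above can be reproduced numerically. A minimal sketch, using forward-Euler integration (the solver, step sizes, and final-time choice are illustrative assumptions; the notes use a proper ODE solver for the figures):

```python
import numpy as np

b, l, g = 0.5, 1.0, 9.81

def f(x):
    """Nonlinear unforced pendulum (1.8): x = [theta, theta_dot]."""
    return np.array([x[1], -(b / l) * x[1] - (g / l) * np.sin(x[0])])

# Jacobian linearization about theta_bar = 0 gives (1.10).
A = np.array([[0.0, 1.0],
              [-g / l, -b / l]])

def simulate(rhs, x0, T=5.0, n=5000):
    x, dt = np.array(x0, float), T / n
    for _ in range(n):
        x = x + rhs(x) * dt          # forward-Euler step
    return x

x0 = [np.deg2rad(20), 0.0]           # 20-degree initial displacement
x_nl = simulate(f, x0)
x_lin = simulate(lambda x: A @ x, x0)
# For this small initial angle the two final states stay close.
```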
Figure 1.11: Piecewise continuous function

1.4 Linear Differential Systems

The most common systems we will study are of the form:

    ẋ = A(t)x + B(t)u    ( = f(x, u, t) )    (1.12)
    y = C(t)x + D(t)u    ( = h(x, u, t) )    (1.13)

where x(t) ∈ ℝⁿ, u : [0, ∞) → ℝᵐ and y : [0, ∞) → ℝᵖ. A(t), B(t), C(t), D(t) are matrices of real-valued functions with compatible dimensions. This system may have been obtained from the Jacobian linearization of a nonlinear system ẋ = f(x, u, t), y = h(x, u, t) about a pair of nominal input and state trajectories (ū(t), x̄(t)).

Assumption 1.4.1 We assume that A(t), B(t) and the input u(t) are piecewise continuous functions of t.

A function f(t) is piecewise continuous in t if it is continuous in t except possibly at a set of points of discontinuity (Fig. 1.11) containing at most a finite number of points per unit interval. Also, at each point of discontinuity, both the left and right limits exist.

Remark 1.4.1 Assumption 1.4.1 allows us to claim that, given an initial state x(t0) and an input function u(τ), τ ∈ [t0, t1], the solution

    x(t) = s(t, t0, x(t0), u(·)) = x(t0) + ∫_{t0}^{t} [A(τ)x(τ) + B(τ)u(τ)] dτ
exists and is unique, via the Fundamental Theorem of Ordinary Differential Equations (see below), often proved in a course on nonlinear systems.

Uniqueness and existence of solutions of nonlinear differential equations are not generally guaranteed. Consider the following examples.

An example in which existence is a problem:

    ẋ = 1 + x²,  x(0) = 0.

This system has the solution x(t) = tan t, which cannot be extended beyond t = π/2. This phenomenon is called finite escape.

The other issue is whether a differential equation can have more than one solution for the same initial condition (the non-uniqueness issue). E.g. for the system

    ẋ = 3x^(2/3),  x(0) = 0,

both x(t) = 0 for all t ≥ 0 and x(t) = t³ are valid solutions.

Theorem 1.4.1 (Fundamental Theorem of ODEs) Consider the ODE

    ẋ = f(x, t)

where x(t) ∈ ℝⁿ, t ≥ 0, and f : ℝⁿ × ℝ₊ → ℝⁿ. On a time interval [t0, t1], suppose the function f(x, t) satisfies the following:

1. For any x ∈ ℝⁿ, the function t ↦ f(x, t) is piecewise continuous;

2. There exists a piecewise continuous function k(t) such that for all x1, x2 ∈ ℝⁿ,

    ‖f(x1, t) − f(x2, t)‖ ≤ k(t) ‖x1 − x2‖.

Then,

1. there exists a solution to the differential equation ẋ = f(x, t) for all t, meaning that for each initial state x0 ∈ ℝⁿ there is a continuous function φ : ℝ₊ → ℝⁿ such that

    φ(t0) = x0  and  φ̇(t) = f(φ(t), t) for all t;    (1.14)

2. moreover, the solution φ(·) is unique: if φ1(·) and φ2(·) both have the properties above, then they must be the same.

Remark 1.4.2 This version of the fundamental theorem of ODEs is taken from [Callier and Desoer, 91]. A weaker condition (and result) is that on the interval [t0, t1] there exists a constant k_{[t0,t1]} such that for all t ∈ [t0, t1] and all x1, x2 ∈ ℝⁿ,

    ‖f(x1, t) − f(x2, t)‖ ≤ k_{[t0,t1]} ‖x1 − x2‖.

In this case, the solution exists and is unique on [t0, t1].
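The finite-escape example ẋ = 1 + x², x(0) = 0 can be observed numerically; a minimal Euler sketch (the step size and horizon are arbitrary illustrative choices):

```python
import numpy as np

# Euler simulation of x' = 1 + x^2, x(0) = 0, whose exact solution
# x(t) = tan(t) escapes to infinity as t approaches pi/2.
x, t, dt = 0.0, 0.0, 1e-4
while t < 1.5:                  # 1.5 is just below pi/2 ~ 1.5708
    x += (1.0 + x * x) * dt
    t += dt
# x is already very large near t = 1.5 (tan(1.5) ~ 14.1),
# signalling the blow-up before t = pi/2.
```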
The proof of this result can be found in many books on ODEs or dynamic systems,² and it is usually proved in detail in a Nonlinear Systems Analysis class.
² E.g. Vidyasagar, Nonlinear Systems Analysis, 2nd ed., Prentice Hall, 1993, or Khalil, Nonlinear Systems, Macmillan, 1992.

1.4.1 Linearity Property

The reason the system (1.12)-(1.13) is called a linear differential system is that it satisfies the linearity property.

Theorem 1.4.2 For any pair of initial and final times t0, t1, the state transition map s : (t1, t0, x(t0), u(·)) ↦ x(t1) of the linear differential system (1.12)-(1.13) is a linear map of the pair of the initial state x(t0) and the input u(τ), τ ∈ [t0, t1]. In other words, for any inputs u(·), u′(·) on [t0, t1], any x, x′ ∈ ℝⁿ and any α, β ∈ ℝ,

    s(t1, t0, (αx + βx′), (αu(·) + βu′(·))) = α s(t1, t0, x, u(·)) + β s(t1, t0, x′, u′(·)).    (1.15)

Similarly, for each pair t0, t1, the mapping ρ : (t1, t0, x(t0), u(·)) ↦ y(t1) from the initial state x(t0) and the input u(τ), τ ∈ [t0, t1], to the output y(t1) is also a linear map, i.e.

    ρ(t1, t0, (αx + βx′), (αu(·) + βu′(·))) = α ρ(t1, t0, x, u(·)) + β ρ(t1, t0, x′, u′(·)).    (1.16)

Before we prove this theorem, let us point out a very useful principle for proving that two time functions x(t) and x′(t), t ∈ [t0, t1], are the same.

Lemma 1.4.3 Given two time signals x(t) and x′(t), suppose that:

- x(t) and x′(t) satisfy the same differential equation,

    ṗ = f(t, p);    (1.17)

- they have the same initial conditions, i.e. x(t0) = x′(t0);

- the differential equation (1.17) has a unique solution on the time interval [t0, t1].

Then x(t) = x′(t) for all t ∈ [t0, t1].

Proof (of Theorem 1.4.2): We shall apply the above lemma to (1.15). Let t0 be an initial time, and let (x0, u(·)) and (x′0, u′(·)) be two pairs of initial state and input, producing state and output trajectories x(t), y(t) and x′(t), y′(t) respectively. We need to show that if the initial state is x″(t0) = αx0 + βx′0 and the input is u″(t) = αu(t) + βu′(t) for all t ∈ [t0, t1], then at any time t the response y″(t) is given by

    y″(t) := αy(t) + βy′(t).    (1.18)
We will first show that for all t ≥ t0, the state trajectory x″(t) is given by:

    x″(t) = αx(t) + βx′(t).    (1.19)

Denote the RHS of (1.19) by g(t).

To prove (1.19), we use the fact that (1.12) has unique solutions. Clearly (1.19) is true at t = t0:

    x″(t0) = αx0 + βx′0 = αx(t0) + βx′(t0) = g(t0).

By definition, since x″ is a solution to (1.12),

    ẋ″(t) = A(t)x″(t) + B(t)(αu(t) + βu′(t)).

Moreover,

    ġ(t) = αẋ(t) + βẋ′(t)
         = α[A(t)x(t) + B(t)u(t)] + β[A(t)x′(t) + B(t)u′(t)]
         = A(t)[αx(t) + βx′(t)] + B(t)[αu(t) + βu′(t)]
         = A(t)g(t) + B(t)[αu(t) + βu′(t)].

Hence g(t) and x″(t) satisfy the same differential equation (1.12) and agree at t0. By the existence and uniqueness property of the linear differential system, the solution is unique for each initial time t0 and initial state; hence x″(t) = g(t).

1.5 Decomposition of the transition map

Because of the linearity property, the transition map of the linear differential system (1.12) can be decomposed into two parts:

    s(t, t0, x0, u) = s(t, t0, x0, 0ᵤ) + s(t, t0, 0, u)

where 0ᵤ denotes the identically zero input function (u(τ) = 0 for all τ). This is because we can decompose (x0, u) ∈ ℝⁿ × U into

    (x0, u) = (x0, 0ᵤ) + (0, u)

and then apply the defining property of a linear dynamical system to this decomposition. Because of this property, the zero-state response (i.e. the response of the system when the initial state is x(t0) = 0) satisfies the familiar superposition property:

    ρ(t, t0, x = 0, u + u′) = ρ(t, t0, x = 0, u) + ρ(t, t0, x = 0, u′).

Similarly, the zero-input response satisfies a superposition property:

    ρ(t, t0, x + x′, 0ᵤ) = ρ(t, t0, x, 0ᵤ) + ρ(t, t0, x′, 0ᵤ).

1.6 Zero-input transition and the Transition Matrix

In the proof of linearity of the linear differential system, we actually showed that the state transition function s(t, t0, x0, u) is linear with respect to (x0, u).
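Theorem 1.4.2 can be checked numerically. A minimal sketch for a constant-A system (the particular A, B, inputs, and coefficients are illustrative assumptions); note that the forward-Euler map is itself linear in (x0, u), so the two sides agree to floating-point precision:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([0.0, 1.0])

def s(t1, x0, u, n=4000):
    """Euler approximation of s(t1, 0, x0, u) for x' = Ax + Bu(t)."""
    dt = t1 / n
    x = np.array(x0, float)
    for i in range(n):
        x = x + (A @ x + B * u(i * dt)) * dt
    return x

u1 = lambda t: np.sin(t)
u2 = lambda t: np.exp(-t)
x1, x2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
a, b = 0.7, -1.3

lhs = s(3.0, a * x1 + b * x2, lambda t: a * u1(t) + b * u2(t))
rhs = a * s(3.0, x1, u1) + b * s(3.0, x2, u2)
# lhs == rhs (up to rounding): linearity in the pair (x0, u).
```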
In particular, for zero input (u = 0ᵤ), it is linear with respect to the initial state x0. We call the transition when the input is identically 0 the zero-input transition.

It is easy to show, by choosing x0 to be the columns of an identity matrix successively (i.e. invoking the so-called 1st representation theorem; Ref: Desoer and Callier, or Chen), that there exists a matrix Φ(t, t0) ∈ ℝⁿˣⁿ so that

    s(t, t0, x0, 0ᵤ) = Φ(t, t0) x0.

This matrix function is called the transition matrix.

Claim: Φ(t, t0) satisfies (1.12), i.e.

    ∂Φ(t, t0)/∂t = A(t)Φ(t, t0)  and  Φ(t0, t0) = I.

Proof: Consider an arbitrary initial state x0 ∈ ℝⁿ and zero input. By definition,

    x(t) = Φ(t, t0)x0 = s(t, t0, x0, 0ᵤ).

Differentiating the above,

    (∂Φ(t, t0)/∂t) x0 = A(t)Φ(t, t0)x0.

Now pick successively n different initial conditions x0 = e1, x0 = e2, …, x0 = eₙ so that {e1, …, eₙ} form a basis of ℝⁿ. (We can take, for example, eᵢ to be the i-th column of the identity matrix.) Thus,

    (∂Φ(t, t0)/∂t) [e1 e2 ⋯ eₙ] = A(t)Φ(t, t0) [e1 e2 ⋯ eₙ].

Since [e1 e2 ⋯ eₙ] ∈ ℝⁿˣⁿ is invertible, we multiply both sides by its inverse to obtain the required result.

Definition 1.6.1 An n × n matrix function X(t) that satisfies the system equation

    Ẋ(t) = A(t)X(t),    (1.20)

with X(τ) nonsingular for some τ, is called a fundamental matrix.

Remark 1.6.1 The transition matrix is a fundamental matrix: it is invertible at least at t = t0.

Proposition 1.6.1 Let X(t) ∈ ℝⁿˣⁿ be a fundamental matrix. Then X(t) is nonsingular for all t.

Proof: Since X(t) is a fundamental matrix, X(t1) is nonsingular for some t1. Suppose that X(τ) is singular for some τ; then there exists a nonzero vector k ∈ ℝⁿ s.t. X(τ)k = 0, the zero vector in ℝⁿ. Consider now the differential equation:

    ẋ = A(t)x,  x(τ) = X(τ)k = 0 ∈ ℝⁿ.

Then x(t) = 0 for all t is the unique solution to this system with x(τ) = 0. However, the function x′(t) = X(t)k satisfies ẋ′(t) = A(t)X(t)k = A(t)x′(t), and x′(τ) = x(τ) = 0. Thus, by uniqueness of solutions of the differential equation, x(t) = x′(t) for all t.
Hence x′(t1) = 0, a contradiction, because X(t1) is nonsingular, so x′(t1) = X(t1)k is nonzero.

1.6.1 Properties of Transition Matrices

1. (Existence and uniqueness) Φ(t, t0) exists and is unique for each t, t0 (t ≥ t0 is not necessary).

2. (Group property) For all t, t1, t0 (not necessarily t0 ≤ t1 ≤ t),

    Φ(t, t0) = Φ(t, t1)Φ(t1, t0).

Example: For

    A = [0.8147, 0.1270; 0.9058, 0.9134],

the state transition matrix Φ(t, t0) is calculated as (to be derived later) Φ(t, t0) = exp[A(t − t0)]. If t0 = 0, t1 = 2, t2 = 5,

    Φ(t1, t0) = [6.4051, 1.5447; 11.0169, 7.6055]
    Φ(t2, t0) = [186.38, 74.814; 533.59, 244.52]
    Φ(t2, t1) = [18.7195, 6.0350; 43.0435, 23.4097]

Now,

    Φ(t2, t1)Φ(t1, t0) = [186.38, 74.814; 533.59, 244.52] = Φ(t2, t0).

3. (Nonsingularity) Φ(t, t0) = [Φ(t0, t)]⁻¹.

Example:

    [Φ(t1, t0)]⁻¹ = [6.4051, 1.5447; 11.0169, 7.6055]⁻¹ = [0.2399, −0.0487; −0.3476, 0.2021] = Φ(t0, t1).

4. (Splitting property) If X(t) is any fundamental matrix, then

    Φ(t, t0) = X(t)X⁻¹(t0).

Example: Let X(t) = exp[At] with A as above. Then

    X(−1) = [0.4677, −0.0546; −0.3893, 0.4252],
    X(2) = [6.4051, 1.5447; 11.0169, 7.6055],
    X⁻¹(−1) = [2.3941, 0.3073; 2.1916, 2.6329],

and

    X(2)X⁻¹(−1) = [18.7195, 6.0350; 43.0435, 23.4097] = exp[A(2 − (−1))] = Φ(2, −1).

5. ∂Φ(t, t0)/∂t = A(t)Φ(t, t0), and Φ(t0, t0) = I for each t0.

6. ∂Φ(t, t0)/∂t0 = −Φ(t, t0)A(t0).

Proof:

1. This comes from the fact that the solution s(t, t0, x0, 0ᵤ) exists for all t, t0 and for all x0 ∈ ℝⁿ (fundamental theorem of differential equations) and that s(t, t0, x0, 0ᵤ) is linear in x0.

2. Differentiate both sides with respect to t to find that the RHS and LHS satisfy the same differential equation and have the same value at t = t1; then apply the existence and uniqueness theorem.

3. From 2), take t = t0!

4.
Use the existence and uniqueness theorem again. For each t0, consider both sides as functions of t, so that LHS(t = t0) = RHS(t = t0). The LHS satisfies dΦ(t, t0)/dt = A(t)Φ(t, t0). The RHS satisfies:

    d/dt [X(t)X⁻¹(t0)] = Ẋ(t)X⁻¹(t0) = A(t)X(t)X⁻¹(t0).
Hence d/dt RHS(t) = A(t)·RHS(t). So the LHS and RHS satisfy the same differential equation and agree at t = t0; hence they must be the same at all t.

5. Already shown.

6. From 3) we have Φ(t, t0)Φ(t0, t) = I, the identity matrix. Differentiate both sides with respect to t0:

    [∂Φ(t, t0)/∂t0] Φ(t0, t) + Φ(t, t0) [∂Φ(t0, t)/∂t0] = 0

since d/dt0 [X(t0)Y(t0)] = [d/dt0 X(t0)] Y(t0) + X(t0) [d/dt0 Y(t0)] for matrices X(t0), Y(t0) of functions of t0 (verify this!). Hence,

    [∂Φ(t, t0)/∂t0] Φ(t0, t) = −Φ(t, t0) [∂Φ(t0, t)/∂t0] = −Φ(t, t0)A(t0)Φ(t0, t)

so

    ∂Φ(t, t0)/∂t0 = −Φ(t, t0)A(t0)Φ(t0, t)[Φ(t0, t)]⁻¹ = −Φ(t, t0)A(t0).

1.6.2 Explicit formula for Φ(t, t0)

The Peano-Baker formula is given by:
    Φ(t, t0) = I + ∫_{t0}^{t} A(σ1) dσ1 + ∫_{t0}^{t} A(σ1) ∫_{t0}^{σ1} A(σ2) dσ2 dσ1 + ∫_{t0}^{t} A(σ1) ∫_{t0}^{σ1} A(σ2) ∫_{t0}^{σ2} A(σ3) dσ3 dσ2 dσ1 + ⋯    (1.21)

This formula can be verified formally (do it!) by checking that the RHS at t = t0 is indeed I and that the RHS satisfies the differential equation

    ∂/∂t RHS(t, t0) = A(t) RHS(t, t0).

Let us define the exponential of a matrix A ∈ ℝⁿˣⁿ using the power series

    exp(A) = I + A/1! + A²/2! + ⋯ + Aᵏ/k! + ⋯

by mimicking the power series expansion of the exponential function of a real number. The evolution of the state transition matrix when the exponential series is truncated to different numbers of terms is shown in Fig. 1.12. If A(t) = A is a constant, then the Peano-Baker formula reduces to (prove it!)

    Φ(t, t0) = exp[(t − t0)A].

For example, examining the 4th term:

    ∫_{t0}^{t} A(σ1) ∫_{t0}^{σ1} A(σ2) ∫_{t0}^{σ2} A(σ3) dσ3 dσ2 dσ1
        = A³ ∫_{t0}^{t} ∫_{t0}^{σ1} ∫_{t0}^{σ2} dσ3 dσ2 dσ1
        = A³ ∫_{t0}^{t} ∫_{t0}^{σ1} (σ2 − t0) dσ2 dσ1
        = A³ ∫_{t0}^{t} (σ1 − t0)²/2 dσ1 = A³ (t − t0)³/3!.
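The group and nonsingularity properties above, and the constant-A formula Φ(t, t0) = exp[A(t − t0)], can be checked numerically with the notes' own A. A small numpy sketch, using an eigendecomposition-based matrix exponential in place of Matlab's expm (assumption: A is diagonalizable, which holds here):

```python
import numpy as np

A = np.array([[0.8147, 0.1270],
              [0.9058, 0.9134]])

def expm(M):
    """Matrix exponential via eigendecomposition (M assumed diagonalizable)."""
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

def phi(t, t0):
    # For constant A, Phi(t, t0) = exp[A (t - t0)].
    return expm(A * (t - t0))

t0, t1, t2 = 0.0, 2.0, 5.0
grp = phi(t2, t1) @ phi(t1, t0)    # group property: equals Phi(t2, t0)
inv = np.linalg.inv(phi(t1, t0))   # nonsingularity: equals Phi(t0, t1)
```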
Figure 1.12: Evolution of the STM entries (Φ11, Φ12, Φ21, Φ22) versus time for up to six terms of the series; the dotted line is the exact exponential.

A more general case: if A(t) and ∫_{t0}^{t} A(τ) dτ commute for all t ≥ t0, then

    Φ(t, t0) = exp[∫_{t0}^{t} A(τ) dτ].    (1.22)

An intermediate step is to show that:

    (1/(k+1)) d/dt [∫_{t0}^{t} A(τ) dτ]^{k+1} = A(t) [∫_{t0}^{t} A(τ) dτ]^{k} = [∫_{t0}^{t} A(τ) dτ]^{k} A(t).

Notice that (1.22) does not apply in general: unless the matrices commute,

    A(t) [∫_{t0}^{t} A(τ) dτ]^{k} ≠ [∫_{t0}^{t} A(τ) dτ]^{k} A(t).

Each of the following situations guarantees that A(t) and ∫_{t0}^{t} A(τ) dτ commute for all t:

1. A(t) is constant;
2. A(t) = α(t)M, where α(t) is a scalar function and M ∈ ℝⁿˣⁿ is a constant matrix;
3. A(t) = Σᵢ αᵢ(t)Mᵢ, where the {Mᵢ} are constant matrices that commute (MᵢMⱼ = MⱼMᵢ) and the αᵢ(t) are scalar functions;
4. A(t) has a time-invariant basis of eigenvectors spanning ℝⁿ.

1.6.3 Computation of Φ(t, 0) = exp(tA) for constant A

The defining computational method for Φ(t, t0), valid also for time-varying A(t), is:

    ∂Φ(t, t0)/∂t = A(t)Φ(t, t0);  Φ(t0, t0) = I.

When A(t) = A is constant, there are algebraic approaches available.

Power series:

    Φ(t1, t0) = exp(A(t1 − t0)) := I + A(t1 − t0)/1! + A²(t1 − t0)²/2! + ⋯

Matlab provides expm(A*(t1 − t0)). Example:

    >> t0 = 2; t1 = 5;
    >> A = [0.8147 0.1270; 0.9058 0.9134];
    >> expm(A*(t1 - t0))
    ans =
        18.7195    6.0350
        43.0435   23.4097

Laplace transform approach:

    L(Φ(t, 0)) = [sI − A]⁻¹

Proof: Since Φ(0, 0) = I and ∂Φ(t, 0)/∂t = AΦ(t, 0), take the Laplace transform of the (matrix) ODE:

    sΦ̂(s, 0) − Φ(0, 0) = AΦ̂(s, 0)

where Φ̂(s, 0) is the Laplace transform of Φ(t, 0) treated as a function of t. This gives:

    (sI − A)Φ̂(s, 0) = I.

Example:

    A = [−1, 0; 1, −2]

    (sI − A)⁻¹ = [s+1, 0; −1, s+2]⁻¹ = (1/((s+1)(s+2))) [s+2, 0; 1, s+1]
               = [1/(s+1), 0; 1/((s+1)(s+2)), 1/(s+2)].

Taking the inverse Laplace transform (using e.g. a table) term by term,

    Φ(t, 0) = [e⁻ᵗ, 0; e⁻ᵗ − e⁻²ᵗ, e⁻²ᵗ].

Similarity transform (decoupled system) approach:

Let A ∈ ℝⁿˣⁿ. If v ∈ ℂⁿ and λ ∈ ℂ satisfy

    Av = λv,

then v is an eigenvector and λ its associated eigenvalue. If A ∈ ℝⁿˣⁿ has n distinct eigenvalues λ1, λ2, …, λₙ, then A is called simple. If A ∈ ℝⁿˣⁿ has n independent eigenvectors, then A is called semi-simple (notice that A is necessarily semi-simple if A is simple). Suppose A ∈ ℝⁿˣⁿ is simple; then let

    T = [v1 v2 … vₙ];  Λ = diag(λ1, λ2, …, λₙ)

be the collection of eigenvectors and the associated eigenvalues. Then the so-called similarity transform is:

    A = TΛT⁻¹.    (1.23)

Remark: A sufficient condition for A being semi-simple is that A has n distinct eigenvalues (i.e. it is simple). When A is not semi-simple, a decomposition similar to (1.23) is available, except that Λ will be in Jordan form (with 1's on the superdiagonal) and T consists of eigenvectors and generalized eigenvectors.
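The truncated power series can be checked numerically against the closed form of Φ(t, 0) obtained above by the Laplace approach for A = [−1, 0; 1, −2]. A minimal numpy sketch (the evaluation time and the 30-term truncation are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [1.0, -2.0]])

def expm_series(M, terms=30):
    """Truncated power series I + M + M^2/2! + ... (cf. Fig. 1.12)."""
    out, term = np.eye(2), np.eye(2)
    for k in range(1, terms):
        term = term @ M / k      # M^k / k!
        out = out + term
    return out

t = 1.5
phi_series = expm_series(A * t)
phi_exact = np.array([[np.exp(-t), 0.0],
                      [np.exp(-t) - np.exp(-2 * t), np.exp(-2 * t)]])
# The series converges to the closed form from the Laplace approach.
```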
This topic is covered in most Linear Systems or linear algebra textbooks. Now

    exp(At) := I + At/1! + A²t²/2! + ⋯

Notice that Aᵏ = TΛᵏT⁻¹; thus

    exp(tA) = T [I + Λt/1! + Λ²t²/2! + ⋯] T⁻¹ = T exp(Λt) T⁻¹.

The above formula is valid even if A is not semi-simple. In the semi-simple case,

    Λᵏ = diag(λ1ᵏ, λ2ᵏ, …, λₙᵏ)

so that

    exp(Λt) = diag[exp(λ1 t), exp(λ2 t), …, exp(λₙ t)].

If we write

    T⁻¹ = [w1ᵀ; w2ᵀ; …; wₙᵀ]

where wᵢᵀ is the i-th row of T⁻¹, then

    exp(tA) = T exp(Λt) T⁻¹ = Σ_{i=1}^{n} exp(λᵢt) vᵢwᵢᵀ.

This is the dyadic expansion of exp(tA). Can you show that wᵢᵀA = λᵢwᵢᵀ? This means that the wᵢᵀ are the left eigenvectors of A. The expansion shows that the system has been decomposed into a set of simple, decoupled 1st-order systems.

Example:

    A = [−1, 0; 1, −2]

The eigenvalues and eigenvectors are:

    λ1 = −1, v1 = [1; 1];  λ2 = −2, v2 = [0; 1].

Thus,

    exp(At) = [1, 0; 1, 1] [e⁻ᵗ, 0; 0, e⁻²ᵗ] [1, 0; 1, 1]⁻¹ = [e⁻ᵗ, 0; e⁻ᵗ − e⁻²ᵗ, e⁻²ᵗ],

the same as by the Laplace transform method.

Digression: Modal decomposition

The (generalized³) eigenvectors form a good basis for a coordinate system. Suppose that A = TΛT⁻¹ is the similarity transform. Let x ∈ ℝⁿ, z = T⁻¹x, and x = Tz, where T = [v1, v2, …, vₙ] ∈ ℝⁿˣⁿ. Thus,

    x = z1v1 + z2v2 + ⋯ + zₙvₙ,

i.e. x is decomposed into components in the directions of the eigenvectors, with zᵢ being the scaling for the i-th eigenvector. The original linear differential equation is rewritten as:

    ẋ = Ax + Bu
    Tż = TΛz + Bu
    ż = Λz + T⁻¹Bu

If we denote B̄ = T⁻¹B, then, since Λ is diagonal,

    żᵢ = λᵢzᵢ + B̄ᵢu

where zᵢ is the i-th component of z and B̄ᵢ is the i-th row of B̄. This is a set of n decoupled first-order differential equations that can be analyzed independently. The dynamics in each eigenvector direction is called a mode. If desired, z can be used to reconstitute x via x = Tz.

Example: Consider the system

    ẋ = [−2, 5; 1, −3] x + [0; 1] u.    (1.24)
³ For the case where A is not semi-simple, additional "nice" vectors are needed to form a basis; they are called generalized eigenvectors.

Eigenvalues: λ1 = −0.2087, λ2 = −4.7913. Eigenvectors:

    v1 = [0.9414; 0.3373],  v2 = [−0.8732; 0.4874].

    T = (v1 v2) = [0.9414, −0.8732; 0.3373, 0.4874]
    T⁻¹ = [0.6470, 1.1590; −0.4477, 1.2496]
    Λ = [−0.2087, 0; 0, −4.7913]
    A = TΛT⁻¹,  z = T⁻¹x,  B̄ = T⁻¹B = [1.1590; 1.2496]

    ż = Λz + B̄u:
    ż1 = −0.2087 z1 + 1.1590 u
    ż2 = −4.7913 z2 + 1.2496 u

Thus, the system equations obtained after modal decomposition are completely decoupled.

1.7 Zero-state transition and response
Recall that for a linear differential system

    ẋ(t) = A(t)x(t) + B(t)u(t),  x(t) ∈ ℝⁿ,  u(t) ∈ ℝᵐ,

the state transition map can be decomposed into the zero-input response and the zero-state response:

    s(t, t0, x0, u) = Φ(t, t0)x0 + s(t, t0, 0ₓ, u).

Having figured out the zero-input component, we now derive the zero-state response.

1.7.1 Heuristic guess

We first decompose the input into piecewise continuous parts {uᵢ : ℝ → ℝᵐ} for i = …, −2, −1, 0, 1, …,

    uᵢ(t) = u(t0 + h·i)  for t0 + h·i ≤ t < t0 + h·(i+1),  uᵢ(t) = 0 otherwise,

where h > 0 is a small positive number (Fig. 1.13). Intuitively we can see that Σ_{i=−∞}^{∞} uᵢ(t) → u(t) as h → 0, so let u(t) = Σ_{i=−∞}^{∞} uᵢ(t). Now we figure out s(t, t0, 0, uᵢ) (refer to Fig. 1.14). By linearity of the transition map,

    s(t, t0, 0, u) = Σᵢ s(t, t0, 0, uᵢ).
Figure 1.13: Decomposing the input into piecewise constant pieces

Figure 1.14: Zero-state response and transition

Step 1: t_0 \le t < t_0 + h i. Since u(\tau) = 0 for \tau \in [t_0, t_0 + h i) and x(t_0) = 0,

    x(t) = 0 for t_0 \le t < t_0 + h i.

Step 2: t \in [t_0 + h i, t_0 + h(i+1)). The input is active:

    x(t) \approx x(t_0 + h i) + [A(t_0 + h i) x(t_0 + h i) + B(t_0 + h i) u(t_0 + h i)] \, T = [B(t_0 + h i) u(t_0 + h i)] \, T,

where T = t - (t_0 + h i) and we have used x(t_0 + h i) = 0.

Step 3: t \ge t_0 + h(i+1). The input is no longer active, u_i(t) = 0, so the state is again given by the zero-input transition map:

    x(t) = \Phi(t, t_0 + h(i+1)) \, x(t_0 + (i+1)h), \qquad x(t_0 + (i+1)h) \approx B(t_0 + h i) u(t_0 + h i) \, h.

Since \Phi(t, t_0) is continuous, the approximation \Phi(t, t_0 + h(i+1)) \approx \Phi(t, t_0 + h i) introduces only second-order error in h. Hence,

    s(t, t_0, 0_x, u_i) \approx \Phi(t, t_0 + h i) B(t_0 + h i) u(t_0 + h i) \, h.
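The three steps can be spot-checked numerically. The sketch below (Python with NumPy/SciPy; the system matrices and pulse parameters are hypothetical illustrative values, not from the notes) compares the exact zero-state response to a single rectangular pulse u_i against the Step 1-3 approximation \Phi(t, t_0 + h i) B u h, for an LTI system where \Phi(t, \tau) = expm(A(t - \tau)):

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical LTI system and pulse parameters, chosen only for illustration.
A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0],
              [1.0]])
u = np.array([2.0])          # input level during the pulse
a, h, t = 0.3, 1e-3, 1.0     # pulse on [a, a + h), state observed at time t

# Exact response: x(a+h) = int_a^{a+h} expm(A(a+h-s)) B u ds
#               = A^{-1} (expm(A h) - I) B u,  then propagate to time t.
x_pulse_end = np.linalg.solve(A, (expm(A*h) - np.eye(2)) @ (B @ u))
x_exact = expm(A*(t - (a + h))) @ x_pulse_end

# Steps 1-3 approximation: s(t, t0, 0, u_i) ~ Phi(t, a) B u h
x_approx = expm(A*(t - a)) @ (B @ u) * h

# Absolute error is O(h^2); relative error is O(h)
print(np.linalg.norm(x_exact - x_approx))
```

Shrinking h in this sketch shrinks the mismatch accordingly, which is the continuity argument made above.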
The total zero-state transition due to the input u(\cdot) is therefore given by:

    s(t, t_0, 0, u) \approx \sum_{i=0}^{(t - t_0)/h} \Phi(t, t_0 + h i) B(t_0 + h i) u(t_0 + h i) \, h.

As h \to 0, the sum becomes an integral, so that:

    s(t, t_0, 0, u) = \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau) \, d\tau.    (1.25)

In this heuristic derivation, we can see that \Phi(t, \tau) B(\tau) u(\tau) is the contribution to the state x(t) due to the input u(\tau), for \tau < t.

1.7.2 Formal proof of the zero-state transition map

We will show that for all t, t_0 \in R_+,

    x(t) = s(t, t_0, 0_x, u) = \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau) \, d\tau.    (1.26)

Clearly (1.26) is correct for t = t_0. We will now show that the LHS of (1.26), i.e. x(t), and the RHS of (1.26), which we will denote by z(t), satisfy the same differential equation.
Figure 1.15: Changing the order of integration in the proof of the zero-state transition function

We know that x(t) satisfies

    \dot{x}(t) = A(t) x(t) + B(t) u(t).

Observe first that since \frac{d}{dt} \Phi(t, t_0) = A(t) \Phi(t, t_0),

    \Phi(t, \tau) = I + \int_{\tau}^{t} A(\sigma) \Phi(\sigma, \tau) \, d\sigma.

Now, for the RHS of (1.26), which we will call z(t):

    z(t) := \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau) \, d\tau
          = \int_{t_0}^{t} B(\tau) u(\tau) \, d\tau + \int_{t_0}^{t} \left[ \int_{\tau}^{t} A(\sigma) \Phi(\sigma, \tau) \, d\sigma \right] B(\tau) u(\tau) \, d\tau.

Let f(\sigma, \tau) := A(\sigma) \Phi(\sigma, \tau) B(\tau) u(\tau). Changing the order of integration (see Fig. 1.15):

    z(t) = \int_{t_0}^{t} B(\tau) u(\tau) \, d\tau + \int_{t_0}^{t} \int_{t_0}^{\sigma} f(\sigma, \tau) \, d\tau \, d\sigma
         = \int_{t_0}^{t} B(\tau) u(\tau) \, d\tau + \int_{t_0}^{t} A(\sigma) \left[ \int_{t_0}^{\sigma} \Phi(\sigma, \tau) B(\tau) u(\tau) \, d\tau \right] d\sigma
         = \int_{t_0}^{t} B(\tau) u(\tau) \, d\tau + \int_{t_0}^{t} A(\sigma) z(\sigma) \, d\sigma,

which on differentiation w.r.t. t gives

    \dot{z}(t) = B(t) u(t) + A(t) z(t).

Hence, z(t) and x(t) satisfy the same differential equation and have the same value at t = t_0. Therefore, x(t) = z(t) for all t.

For a linear time-invariant system, A and B are constant matrices and \Phi(t, \tau) = \exp(A(t - \tau)), so the zero-state transition map reduces to:

    x(t) = \int_{t_0}^{t} \exp(A(t - \tau)) B u(\tau) \, d\tau.    (1.27)
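The proof's conclusion can be spot-checked numerically: the candidate z(t) should coincide with the state obtained by integrating the differential equation from zero initial state. A sketch for a hypothetical LTI system (all numbers illustrative, using (1.27) with a quadrature for the integral):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[-2.0,  5.0],
              [ 1.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
u = lambda s: np.array([np.sin(s)])
t0, t1 = 0.0, 2.0

# z(t1) by midpoint-rule quadrature of (1.27)
h = 1e-3
z_quad = sum(expm(A*(t1 - tau)) @ (B @ u(tau)) * h
             for tau in np.arange(t0 + h/2, t1, h))

# x(t1) by integrating x' = A x + B u directly from x(t0) = 0
sol = solve_ivp(lambda s, x: A @ x + B @ u(s),
                [t0, t1], [0.0, 0.0], rtol=1e-10, atol=1e-12)
x_ode = sol.y[:, -1]

print(np.linalg.norm(z_quad - x_ode))   # agreement to quadrature accuracy
```

The two vectors agree to the accuracy of the quadrature, as the proof predicts: both solve the same ODE with the same value at t_0.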
1.7.3 Output response function

The combined effect of the initial state x_0 and the input function u(\cdot) on the state is given by:

    s(t, t_0, x_0, u) = \Phi(t, t_0) x_0 + \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau) \, d\tau.

Deriving the output response is simple since

    y(t) = C(t) x(t) + D(t) u(t),

where C(t) \in R^{p \times n} and D(t) \in R^{p \times m}. Hence the output response map y(t) = \rho(t, t_0, x_0, u) is simply

    y(t) = \rho(t, t_0, x_0, u) = C(t) \Phi(t, t_0) x_0 + C(t) \int_{t_0}^{t} \Phi(t, \tau) B(\tau) u(\tau) \, d\tau + D(t) u(t).    (1.28)
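Formula (1.28) can be evaluated directly and cross-checked against an off-the-shelf simulator. The sketch below uses a hypothetical LTI instance (all matrices and numbers are illustrative), with the integral done by a midpoint rule and `scipy.signal.lsim` as the reference:

```python
import numpy as np
from scipy.linalg import expm
from scipy.signal import lsim

A = np.array([[-2.0,  5.0],
              [ 1.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])
x0 = np.array([1.0, -1.0])
t0, t = 0.0, 1.5
u = lambda s: np.array([np.cos(s)])

# y(t) = C Phi(t,t0) x0 + C int_{t0}^{t} Phi(t,tau) B u(tau) dtau + D u(t),
# with Phi(t,tau) = expm(A(t-tau)) for the LTI case.
h = 1e-3
zs = sum(expm(A*(t - tau)) @ (B @ u(tau)) * h
         for tau in np.arange(t0 + h/2, t, h))
y_direct = (C @ expm(A*(t - t0)) @ x0 + C @ zs + D @ u(t)).item()

# Cross-check with scipy's linear simulator
T = np.linspace(t0, t, 3001)
_, yout, _ = lsim((A, B, C, D), U=np.cos(T), T=T, X0=x0)
print(y_direct, yout[-1])   # should agree closely
```

Both the zero-input term C \Phi x_0 and the zero-state integral contribute here, illustrating the decomposition built into (1.28).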
1.7.4 Impulse response matrix

When the initial state is 0, let the input be a Dirac impulse applied at time \tau in the j-th input channel:

    u_j(t) = \delta(t - \tau) e_j,

where e_j is the j-th unit vector (1 in the j-th row, 0 elsewhere). The output response is:

    y_j(t) = [C(t) \Phi(t, \tau) B(\tau) + D(t) \delta(t - \tau)] e_j.

The matrix

    H(t, \tau) = C(t) \Phi(t, \tau) B(\tau) + D(t) \delta(t - \tau) for t \ge \tau, and H(t, \tau) = 0 for t < \tau,    (1.29)

is called the impulse response matrix. The j-th column of H(t, \tau) signifies the output response of the system when an impulse is applied at input channel j at time \tau. The reason why it must be zero for t < \tau is that the impulse can have no effect on the system before it is applied. Because u(t) = \int \delta(t - \tau) u(\tau) \, d\tau, we can also see intuitively that

    y(t) = \int_{t_0}^{t} H(t, \tau) u(\tau) \, d\tau

if the state x(t_0) = 0. This agrees with (1.28). This integral is the convolution of the impulse response matrix H(t, \tau) with the input signal u(\tau). If f(t) and g(t) are two functions, then the convolution f * g is given by:

    (f * g)(t) = \int f(t - \tau) g(\tau) \, d\tau = \int f(\tau) g(t - \tau) \, d\tau.

See Fig. 1.16.

Figure 1.16: Convolution of two functions

1.8 Linear discrete time system response

Discrete time systems are described by difference equations. For any (possibly nonlinear) difference equation

    x(k+1) = f(k, x(k), u(k))

with initial condition x(k_0), the solution x(k), k \ge k_0, exists and is unique as long as f(k, x, u) is a properly defined function (i.e. f(k, x, u) is well defined for given k, x, u). This can be shown by recursively applying the difference equation forward in time. The existence of the solution backwards in time is not guaranteed, however.

The linear discrete time system is given by the difference equation

    x(k+1) = A(k) x(k) + B(k) u(k),

with initial condition x(k_0) = x_0. Many properties of linear discrete time systems are similar to those of linear differential (continuous time) systems. We now study some of these, and point out some differences. Let the transition map be given by s(k_1, k_0, x_0, u(\cdot)), where x(k_1) = s(k_1, k_0, x_0, u(\cdot)).

Linearity: For any \alpha, \beta \in R, and for any two initial states x_a, x_b and two input signals u_a(\cdot) and u_b(\cdot),

    s(k_1, k_0, \alpha x_a + \beta x_b, \alpha u_a(\cdot) + \beta u_b(\cdot)) = \alpha \, s(k_1, k_0, x_a, u_a(\cdot)) + \beta \, s(k_1, k_0, x_b, u_b(\cdot)).

As corollaries, we have:

1. Decomposition into zero-input response and zero-state response:

    s(k_1, k_0, x_0, u(\cdot)) = s(k_1, k_0, x_0, 0_u) + s(k_1, k_0, 0_x, u(\cdot)),

where the first term is the zero-input response and the second the zero-initial-state response.

2. The zero-input response can be expressed in terms of a transition matrix:

    x(k) := s(k, k_0, x_0, 0_u) = \Phi(k, k_0) x_0.

1.8.1 \Phi(k, k_0) properties
The discrete time transition function can be explicitly written as:

    \Phi(k, k_0) = \prod_{k' = k_0}^{k-1} A(k') = A(k-1) A(k-2) \cdots A(k_0).

The discrete time transition matrix has many properties similar to those of the continuous time one, particularly when A(k) is invertible for all k. The main differences result from the possibility that A(k) may not be invertible.

1. Existence and uniqueness: \Phi(k_1, k_0) exists and is unique for all k_1 \ge k_0. If k_1 < k_0, then \Phi(k_1, k_0) exists and is unique if and only if A(k) is invertible for all k_0 > k \ge k_1.

2. Existence of inverse: If k_1 \ge k_0, then \Phi(k_1, k_0)^{-1} exists and is given by

    \Phi(k_1, k_0)^{-1} = A(k_0)^{-1} A(k_0 + 1)^{-1} \cdots A(k_1 - 1)^{-1}

if and only if A(k) is invertible for all k_1 > k \ge k_0.

3. Semigroup property: \Phi(k_2, k_0) = \Phi(k_2, k_1) \Phi(k_1, k_0) holds for k_2 \ge k_1 \ge k_0 only, unless A(k) is invertible.

4. Matrix difference equations:

    \Phi(k+1, k_0) = A(k) \Phi(k, k_0)
    \Phi(k_1, k-1) = \Phi(k_1, k) A(k-1)

Can you formulate a property for discrete time transition matrices similar to the splitting property for the continuous time case?

1.8.2 Zero-initial state response

The zero-initial state response can be obtained easily:

    x(k) = A(k-1) x(k-1) + B(k-1) u(k-1)
         = A(k-1) A(k-2) x(k-2) + A(k-1) B(k-2) u(k-2) + B(k-1) u(k-1)
         \vdots
    x(k) = A(k-1) A(k-2) \cdots A(k_0) x(k_0) + \sum_{i=k_0}^{k-1} \left[ \prod_{j=i+1}^{k-1} A(j) \right] B(i) u(i).

Thus, since x(k_0) = 0 for the zero-initial state response:

    s(k, k_0, 0_x, u) = \sum_{i=k_0}^{k-1} \left[ \prod_{j=i+1}^{k-1} A(j) \right] B(i) u(i) = \sum_{i=k_0}^{k-1} \Phi(k, i+1) B(i) u(i).

1.8.3 Discretization of a continuous time system

Consider a continuous time LTI system:

    \dot{x} = A x, \qquad x(t_0) = x_0.    (1.30)

The state x_D(k) of the equivalent discretized system is x_D(k) = x(kT), where T is the sampling time. Then

    x_D(k+1) = \Phi((k+1)T, kT) x_D(k) = \exp(TA) x_D(k),    (1.31)
    x_D(k+2) = \exp(TA) x_D(k+1) = \exp(TA)^2 x_D(k).    (1.32)

Therefore, for the interval [k_0, k],

    x_D(k) = \exp(TA)^{k - k_0} x_D(k_0).

Hence, the discrete-time state transition matrix is given by:

    \Phi_D(k, k_0) = \exp(TA)^{k - k_0} = \Phi(kT, k_0 T).    (1.33)
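The discretization identity (1.33) is easy to verify numerically. In the sketch below (hypothetical A and sampling period T, illustrative only), the one-step matrix expm(A T) is powered up and compared with the continuous transition matrix evaluated at the sample instants; the discrete semigroup property from Section 1.8.1 is checked at the same time:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
T = 0.1
Ad = expm(A * T)                        # one-step discrete transition matrix

k0, k = 2, 7
PhiD = np.linalg.matrix_power(Ad, k - k0)

# PhiD(k, k0) equals the continuous transition matrix at the sample instants
assert np.allclose(PhiD, expm(A * (k - k0) * T))

# and it satisfies the discrete semigroup property through any k1 in between
k1 = 4
assert np.allclose(PhiD, np.linalg.matrix_power(Ad, k - k1)
                         @ np.linalg.matrix_power(Ad, k1 - k0))
print(np.round(PhiD, 4))
```

Note that Ad = expm(A T) is always invertible, so this sampled-data case avoids the non-invertibility caveats that a general A(k) can raise.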
This note was uploaded on 02/07/2012 for the course ME 8281 taught by Professor Staff during the Fall '08 term at Minnesota.