MAE107 Introduction to Modeling and Analysis of Dynamic Systems
Lecture Notes #2
Prof. M’Closkey

• Solution of first order linear differential equations
• Superposition
• Time-invariance
• Convolution representation
• Transfer functions and frequency response functions

Textbook reading:
1. Sections 6.1 through 6.6. Section 6.1 motivates first order ODEs using examples drawn from thermal, electric, electromechanical, and mechanical systems; Section 6.2 discusses solutions to the homogeneous ODE; Section 6.3 discusses particular solutions and initial value problems; Section 6.4 discusses linearity and superposition.
2. Section 7.2 discusses complex numbers.

Solving First Order Linear ODEs

Given the ODE

  ẋ(t) = a x(t) + b u(t),   (1)

where u is defined on some interval [t0, t1] and a and b are real constants, we label two classes of functions that are associated with this ODE:

1. Homogeneous solution. Recall that the homogeneous differential equation is the one in which u(t) ≡ 0:

  ẋ(t) = a x(t).

A nonzero solution to the homogeneous problem, denoted xh, satisfies ẋh(t) = a xh(t). Note that homogeneous solutions are not unique: a scalar multiple of xh, say αxh for some constant α, is another homogeneous solution since

  d/dt (α xh(t)) = α ẋh(t) = α a xh(t) = a (α xh(t)).

Furthermore, the weighted sum of two homogeneous solutions is another homogeneous solution: let x1 and x2 be two homogeneous solutions, i.e. ẋ1 = a x1 and ẋ2 = a x2. Then α1 x1 + α2 x2, where α1 and α2 are any constants, is also a homogeneous solution since

  d/dt (α1 x1(t) + α2 x2(t)) = α1 ẋ1(t) + α2 ẋ2(t) = α1 a x1(t) + α2 a x2(t) = a (α1 x1(t) + α2 x2(t)).

For first-order differential equations, a fundamental solution is a homogeneous solution in which xh(t0) = 1. In fact, for first-order constant coefficient differential equations, all homogeneous solutions are of the form αe^{at} for some nonzero constant α.
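These closure properties are easy to spot-check numerically. The sketch below (plain Python; the values of a, α1, α2 and the finite-difference step are arbitrary choices for the check, not part of the notes) verifies via a central finite difference that a scalar multiple and a weighted sum of homogeneous solutions still satisfy ẋ = ax.

```python
import math

a = -2.0  # arbitrary system constant for the check

def xh(t, alpha):
    """Candidate homogeneous solution alpha * e^(a t)."""
    return alpha * math.exp(a * t)

def deriv(f, t, h=1e-6):
    """Central finite-difference approximation of df/dt."""
    return (f(t + h) - f(t - h)) / (2 * h)

t = 0.7
# a scalar multiple of a homogeneous solution is again homogeneous
assert abs(deriv(lambda s: xh(s, 3.0), t) - a * xh(t, 3.0)) < 1e-6

# a weighted sum of two homogeneous solutions is again homogeneous
combo = lambda s: 2.0 * xh(s, 1.0) + 5.0 * xh(s, -0.5)
assert abs(deriv(combo, t) - a * combo(t)) < 1e-6
```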
2. Particular solution. For the nonhomogeneous ODE, a particular solution, denoted xp, is any function that satisfies (1) (including the nonhomogeneous term bu(t)). In other words,

  ẋp(t) = a xp(t) + b u(t), for all t ∈ [t0, t1].

Note that particular solutions are not unique: adding a homogeneous solution to a particular solution yields another particular solution.

We can now state the initial value problem for (1):

Initial Value Problem (IVP). Let an initial condition x0 be specified at time t0 (in other words, x(t0) = x0 is specified) and let the forcing function u be known on the interval t ∈ [t0, t1] (note that the beginning of the time interval coincides with the time when the initial condition is specified). Then, find x over the same time interval.

Solution of the IVP. The solution of the IVP is unique and may be found by several different methods.

1. Method 1: Ad hoc approach. This method is called ad hoc because you must first produce any particular solution, then add to it a homogeneous solution with “free” parameter α, i.e.,

  x(t) = xp(t) + α xh(t),

where you adjust the constant α so that x(t0) matches the initial condition, i.e.

  x(t0) = xp(t0) + α xh(t0) = x0.

The difficulty is computing the particular solution. There are certain classes of nonhomogeneous terms, however, for which it is very easy to compute a particular solution. These will be discussed shortly.

2. Method 2: Integrating factor. The homogeneous problem ẋ = ax has as its fundamental solution, denoted xf, the function

  xf(t) = e^{a(t−t0)},  t ≥ t0.

The integrating factor is a function v such that

  v(t) xf(t) = 1,  t ≥ t0.

It is easily seen that the integrating factor is

  v(t) = e^{−a(t−t0)},  t ≥ t0.

Multiplying the ODE by the integrating factor yields

  ẋ(t) e^{−a(t−t0)} − a x(t) e^{−a(t−t0)} = e^{−a(t−t0)} b u(t),

i.e.

  d/dt ( x(t) e^{−a(t−t0)} ) = e^{−a(t−t0)} b u(t).

Integrating both sides:

  ∫_{t0}^{t} d/dτ ( x(τ) e^{−a(τ−t0)} ) dτ = ∫_{t0}^{t} e^{−a(τ−t0)} b u(τ) dτ
  ⟹ x(t) e^{−a(t−t0)} − x(t0) = ∫_{t0}^{t} e^{−a(τ−t0)} b u(τ) dτ
  ⟹ x(t) = e^{a(t−t0)} x0 + ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ.

This way of expressing the solution is convenient since it separately shows the contribution due to the initial condition versus the contribution due to the “input” u.

3. Method 3: Variation of parameters. Variation of parameters uses the fundamental solution as a “basis” for generating a particular solution that satisfies the nonhomogeneous differential equation:

  xp(t) = c(t) e^{a(t−t0)},  t ≥ t0,

where c is to be determined by substitution into the nonhomogeneous ODE:

  ċ(t) e^{a(t−t0)} + a c(t) e^{a(t−t0)} = a c(t) e^{a(t−t0)} + b u(t)
  ⟹ ċ(t) = e^{−a(t−t0)} b u(t)
  ⟹ c(t) − c(t0) = ∫_{t0}^{t} e^{−a(τ−t0)} b u(τ) dτ.

The initial value c(t0) can be taken to be zero without loss of generality: if it is not zero, it simply generates a homogeneous solution, which is compensated by the addition of another homogeneous solution with a free parameter in order to match the initial condition of the IVP. Thus,

  c(t) = ∫_{t0}^{t} e^{−a(τ−t0)} b u(τ) dτ,  t ∈ [t0, t1],

and so

  xp(t) = c(t) e^{a(t−t0)} = ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ,  t ∈ [t0, t1].

Since xp(t0) = 0, the homogeneous term that we need to add to compute the IVP solution is xh(t) = e^{a(t−t0)} x0. Thus, adding these particular and homogeneous solutions yields the solution to the IVP, i.e. the solution satisfies the ODE and the initial condition:

  x(t) = xh(t) + xp(t) = e^{a(t−t0)} x0 + ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ,  t ∈ [t0, t1].   (2)

Note that this is exactly the same expression obtained with the integrating factor method.

Important comment: (2) is the unique solution to the IVP, and it has a particularly useful form because the portion of the solution due to a nonzero initial condition is separate from the part of the response caused by a nonzero input.

Linearity and Superposition

Analyzing the solution (2) of the IVP gives us the correct insight into the concept of “superposition”. Suppose an initial condition at t0 is specified as x0,1 and the input u1 is defined on the interval [t0, t1]. Then the solution to the IVP, denoted x1, is

  x1(t) = e^{a(t−t0)} x0,1 + ∫_{t0}^{t} e^{a(t−τ)} b u1(τ) dτ,  t ∈ [t0, t1].   (3)

If another initial condition x0,2 at t0 is given, along with another input u2 on the interval [t0, t1], then the solution to this initial value problem, denoted x2, is

  x2(t) = e^{a(t−t0)} x0,2 + ∫_{t0}^{t} e^{a(t−τ)} b u2(τ) dτ,  t ∈ [t0, t1].   (4)

A fair question to ask is “to what initial value problem is x1(t) + x2(t), t ∈ [t0, t1], the solution?” By adding (3) and (4) this question is answered:

  x1(t) + x2(t) = e^{a(t−t0)} x0,1 + ∫_{t0}^{t} e^{a(t−τ)} b u1(τ) dτ + e^{a(t−t0)} x0,2 + ∫_{t0}^{t} e^{a(t−τ)} b u2(τ) dτ
             = e^{a(t−t0)} (x0,1 + x0,2) + ∫_{t0}^{t} e^{a(t−τ)} b (u1(τ) + u2(τ)) dτ,

where x0,1 + x0,2 plays the role of the initial condition and u1 + u2 plays the role of the input. Thus, the initial condition x0,1 + x0,2 with input u1 + u2 produces the solution x1 + x2. In other words, for linear differential equations, summing the inputs and summing the initial conditions sums the corresponding solutions.

Example. Consider the following lowpass filter circuit equation when RC = 0.1:

  V̇out = −10 Vout + 10 Vin.

For purposes of standardizing notation we will use the form ẋ = −10x + 10u.
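Superposition can be checked numerically by evaluating (2) with a simple quadrature. The sketch below is plain Python (the trapezoidal rule and the step count are implementation choices); the inputs are the sinusoid and constant used in the worked example that follows.

```python
import math

a, b = -10.0, 10.0  # the lowpass filter example: x' = -10 x + 10 u

def ivp_solution(t, x0, u, n=20000):
    """Evaluate (2) with t0 = 0: x(t) = e^{at} x0 + integral_0^t e^{a(t-tau)} b u(tau) dtau,
    the integral approximated by the trapezoidal rule."""
    s, dt = 0.0, t / n
    for k in range(n + 1):
        tau = k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(a * (t - tau)) * b * u(tau)
    return math.exp(a * t) * x0 + s * dt

u1 = lambda t: math.sin(10 * t)
u2 = lambda t: 1.0

t = 0.8
x1 = ivp_solution(t, 1.0, u1)
x2 = ivp_solution(t, -0.25, u2)
x12 = ivp_solution(t, 0.75, lambda s: u1(s) + u2(s))

# superposition: summed IC and summed input give the summed solution
assert abs(x12 - (x1 + x2)) < 1e-6

# sanity check: for u = 1, x0 = -0.25 the closed form is 1 - 1.25 e^{-10 t}
assert abs(x2 - (1 - 1.25 * math.exp(-10 * t))) < 1e-4
```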
Let t0 = 0 and x0,1 = 1 volt (this initial condition is determined by the charge on the capacitor at t = 0). The input is u1(t) = sin(10t), t ≥ 0. The solution to the IVP, denoted x1, is shown below, partitioned according to (2):

  ẋ1 = −10 x1 + 10 u1,  x1(0) = x0,1 = 1,  u1(t) = sin(10t), t ≥ 0.

[Figure: IVP with u1 and x0,1 — x1 (volts) vs. time (seconds), showing the IVP solution, the solution with IC = x0,1 and u = 0, the solution with IC = 0 and nonzero u, and the input u.]

Now consider a second IVP in which t0 = 0, x0,2 = −0.25 volts and u2(t) = 1, t ≥ 0. This solution is denoted x2:
  ẋ2 = −10 x2 + 10 u2,  x2(0) = x0,2 = −0.25,  u2(t) = 1, t ≥ 0.

[Figure: IVP with u2 and x0,2 — x2 (volts) vs. time (seconds), showing the IVP solution, the solution with IC = x0,2 and u = 0, the solution with IC = 0 and nonzero u, and the input u.]

It’s clear from these figures that the IVP that generates the solution x1 + x2 (the sum of the blue traces in the two preceding figures) has initial condition x0,1 + x0,2 and input u1 + u2, as shown below:
  ẋ = −10 x + 10 u,  x(0) = x0,1 + x0,2 = 0.75,  u(t) = 1 + sin(10t), t ≥ 0.

[Figure: IVP with u1 + u2 and x0,1 + x0,2 — x (volts) vs. time (seconds), showing the IVP solution, the solution with IC = x0,1 + x0,2 and u = 0, the solution with IC = 0 and u = u1 + u2, and the input u.] ♣

Time-invariance property

The consequence of time-invariance (that is, a and b are constant in (1)) is the following: suppose an initial condition is specified at t0 and the input ũ is specified for t ≥ t0, and let x̃ be the subsequent “output”, defined for t ≥ t0, obtained by solving the IVP; if the same initial condition is shifted to time t0 + T, for any T, and if the new input u is ũ shifted by the same amount, then the solution to the IVP is just x̃ shifted by T.

This is shown more rigorously using (2). Let the initial condition be specified at “starting time” t0 as x0, and let the input, denoted ũ, be defined for all t ≥ t0. Then the solution to the IVP, denoted x̃, is

  x̃(t) = e^{a(t−t0)} x0 + ∫_{t0}^{t} e^{a(t−τ)} b ũ(τ) dτ,  t ≥ t0.

Now let’s shift the starting time to t0 + T, where T is any desired value. The input, denoted u, for this new initial value problem is just ũ shifted by T, too:

  u(t) = ũ(t − T),  t ≥ t0 + T.

The solution to this IVP is

  x(t) = e^{a(t−(t0+T))} x0 + ∫_{t0+T}^{t} e^{a(t−τ)} b u(τ) dτ,  t ≥ t0 + T
      = e^{a(t−(t0+T))} x0 + ∫_{t0+T}^{t} e^{a(t−τ)} b ũ(τ − T) dτ,  t ≥ t0 + T
      = e^{a((t−T)−t0)} x0 + ∫_{t0}^{t−T} e^{a((t−T)−s)} b ũ(s) ds,  t ≥ t0 + T   (setting s = τ − T)
      = x̃(t − T),  t ≥ t0 + T,

which demonstrates that shifting the initial condition and input by T just shifts the solution by T. The systems tested in the lab portion of this class are well-modeled as time-invariant systems because applying the same input over the course of the entire quarter yields essentially the same response.
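The time-invariance property can also be verified numerically from (2). In the sketch below (plain Python; the input, shift T, and quadrature settings are arbitrary choices for the check), the IVP restarted at t0 + T with the shifted input reproduces the original solution shifted by T.

```python
import math

a, b = -10.0, 10.0  # the lowpass example, used here only for concreteness

def ivp_solution(t, t0, x0, u, n=20000):
    """x(t) = e^{a(t-t0)} x0 + integral_{t0}^t e^{a(t-tau)} b u(tau) dtau (trapezoidal rule)."""
    s, dt = 0.0, (t - t0) / n
    for k in range(n + 1):
        tau = t0 + k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(a * (t - tau)) * b * u(tau)
    return math.exp(a * (t - t0)) * x0 + s * dt

u_tilde = lambda t: math.sin(10 * t)   # arbitrary input for the check
x0, t0, T = -0.25, 0.0, 0.5

# solve once starting at t0, once starting at t0 + T with the shifted input
x_tilde = ivp_solution(0.7, t0, x0, u_tilde)
x_shift = ivp_solution(0.7 + T, t0 + T, x0, lambda t: u_tilde(t - T))

# time-invariance: the shifted IVP solution is the original solution shifted by T
assert abs(x_shift - x_tilde) < 1e-9
```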
For time-invariant systems we may take t0 = 0 without loss of generality. Thus, t0 = 0 will be the default “starting time” that we assume from now on.

Example. Consider the lowpass filter circuit used in the superposition example, ẋ = −10x + 10u. Suppose the starting time is t0 = 0.5 seconds, x0 = −0.25, and u(t) = 1, t ≥ 0.5. Then the solution to this IVP is:

  ẋ = −10x + 10u,  x(0.5) = −0.25,  u(t) = 1, t ≥ 0.5.

[Figure: IVP solution x (volts) vs. time (seconds), showing the IVP solution, the solution with IC = −0.25 and u = 0, the solution with IC = 0 and u = 1, and the input u.]

Note that this is the solution of a previously studied IVP, just shifted to the new “starting time”. This property does not hold if the coefficients in the ODE, i.e. a and b, are functions of time. ♣

Terminology: “Zero-state response” vs. “Free response”

The solution to the initial value problem is easily seen to be the sum of the response to the input u when the initial condition is zero with the response of the system due to its nonzero initial condition when the input is zero. These “pieces” of the solution have special names, shown below (the IVP solution is written with our standing assumption that t0 = 0):
  x(t) = e^{at} x0 + ∫_{0}^{t} e^{a(t−τ)} b u(τ) dτ,  t ≥ 0,

where the first term, e^{at} x0, is the “free response” and the integral term is the “zero-state response”. The term “zero-state response” comes from the fact that this is the system’s response when u is applied to the system when it is initially in an equilibrium, or rest, state. There are other ways to parse the solution to the IVP, and in doing so we will be able to define the terms “forced motion”, “natural motion”, “steady-state response”, and “transient response.”

Another view of the first order ODE

The integral in the solution we derived for the IVP is called a convolution of the functions e^{at} and u(t):

  x(t) = e^{a(t−t0)} x0 + ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ,  t ≥ t0,   (5)

where the integral term is the convolution. The lower limit of integration corresponds to the time at which the initial condition is specified, i.e. t0. You may wonder how x got to the value x0 at t0 in the first place. The answer is, of course, “with an input” that is acting for t < t0. Thus, the lower limit of integration can be extended to −∞ by assuming that far in the past the system was initially at rest and that the value of x(t) is due to u acting on the interval (−∞, t]. We then no longer need to specify an initial condition at t0 because the input takes care of it, and (5) can be replaced by

  x(t) = ∫_{−∞}^{t} e^{a(t−τ)} b u(τ) dτ.   (6)

It is easy to show that (5) can be recovered from (6):
  x(t) = ∫_{−∞}^{t} e^{a(t−τ)} b u(τ) dτ
      = ∫_{−∞}^{t0} e^{a(t−τ)} b u(τ) dτ + ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ   (assume t > t0)
      = e^{a(t−t0)} ∫_{−∞}^{t0} e^{a(t0−τ)} b u(τ) dτ + ∫_{t0}^{t} e^{a(t−τ)} b u(τ) dτ,

and the integral ∫_{−∞}^{t0} e^{a(t0−τ)} b u(τ) dτ is precisely x(t0). Thus, (6) is just another representation of the solution of the IVP, albeit with the input necessarily defined for all t.

Example 1. Consider the circuit example again (the ODE is ẋ = −10x + 10u) with the following input defined on the interval (−∞, ∞):

  u(t) = −0.325 e^{2t},  t < 0;  u(t) = 1,  t ≥ 0.

This input is shown in the figure below.
[Figure: the input Vin (volts) vs. time (seconds).]

When this input is applied to the system, the following response is observed:

[Figure: the output Vout (volts) vs. time (seconds).]

Note that for t ≥ 0, u and x match one of the previous examples we considered when the IC was specified as x0 = −0.25; in the present example, however, we did not specify an initial condition but rather expanded the domain of definition of the input. ♣
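The infinite-history representation (6) can be exercised numerically by truncating the lower limit where e^{a(t−τ)} has decayed to nothing. The sketch below uses a hypothetical input (an e^{2t} tail followed by a constant, chosen only so the closed form is simple; it is not the −0.325e^{2t} input of Example 1) and compares a quadrature of (6) against the closed-form solution.

```python
import math

a, b = -10.0, 10.0  # the circuit example's coefficients

def u(t):
    """Hypothetical input on (-inf, inf): e^{2t} tail, then constant 1.
    Chosen so that x(0) = 5/6 and x(t) = 1 - (1/6) e^{-10 t} for t >= 0."""
    return math.exp(2 * t) if t < 0 else 1.0

def x_conv(t, lo=-3.0, n=60000):
    """Approximate (6): x(t) = integral_{-inf}^t e^{a(t-tau)} b u(tau) dtau,
    truncating the lower limit where the integrand is negligible."""
    s, dt = 0.0, (t - lo) / n
    for k in range(n + 1):
        tau = lo + k * dt
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(a * (t - tau)) * b * u(tau)
    return s * dt

# the past input leaves x(0) = 5/6; the constant input then takes over
for t in (0.0, 0.1, 0.5):
    exact = 1.0 - (1.0 / 6.0) * math.exp(-10 * t)
    assert abs(x_conv(t) - exact) < 1e-3
```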
To complete this section we consider one final modification of (6). If we define the function h on the interval (−∞, ∞) by

  h(t) = 0 for t < 0,  h(t) = e^{at} b for t ≥ 0,   (7)

then the upper limit of integration in (6) can be replaced with ∞:

  x(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ.   (8)

Note that (6) can be recovered from (8):

  x(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ
      = ∫_{−∞}^{t} h(t − τ) u(τ) dτ + ∫_{t}^{∞} h(t − τ) u(τ) dτ
      = ∫_{−∞}^{t} e^{a(t−τ)} b u(τ) dτ,

since h(t − τ) = e^{a(t−τ)} b for τ ≤ t and h(t − τ) = 0 for τ > t.

This representation of the solution has an appealing simplicity and only requires that we

• extend the domain of definition of u to (−∞, ∞), and
• define h according to (7).

We will see that h plays a fundamental role in the study of linear systems: it is the response of the system to a unit impulse function and, hence, is called the system’s impulse response. In fact, it is obvious from (8) that knowledge of h permits us to compute x in response to any input via convolution of h and u.

Remark. It is simple to show that
  x(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ = ∫_{−∞}^{∞} h(τ) u(t − τ) dτ.   (9)

The calculation is left as an exercise.

Revisiting Linearity

When working with (8), the property of superposition is transparent. For example, consider u1 defined on (−∞, ∞). Let the solution (response) of the first order system to this input be denoted x1:

  x1(t) = ∫_{−∞}^{∞} h(t − τ) u1(τ) dτ.

If another input, denoted u2 and also defined on the time interval (−∞, ∞), is applied to the same system, then its response, denoted x2, is

  x2(t) = ∫_{−∞}^{∞} h(t − τ) u2(τ) dτ.

If the input u1 + u2 is applied to this system, then its response, denoted x, is given by x1 + x2:

  x(t) = ∫_{−∞}^{∞} h(t − τ)(u1(τ) + u2(τ)) dτ
      = ∫_{−∞}^{∞} h(t − τ) u1(τ) dτ + ∫_{−∞}^{∞} h(t − τ) u2(τ) dτ
      = x1(t) + x2(t).

Note that we didn’t worry about initial conditions because the inputs are defined on the interval (−∞, ∞).

Revisiting Time Invariance

Recall that shifting the “starting time” t0 in the IVP (the initial condition remains the same but is now specified at a different starting time, and the input is also shifted in time by the same amount) meant that we just shifted the corresponding solution by the same amount of time. This property can be easily shown with the convolution representation of the solution. Suppose u is defined on the interval (−∞, ∞) and x is the corresponding solution, i.e.
  x(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ.

Now apply the time-shifted input u(t − T), where T is the amount of the time shift (it can be positive or negative). Let the solution with this input be denoted x̃:

  x̃(t) = ∫_{−∞}^{∞} h(t − τ) u(τ − T) dτ
      = ∫_{−∞}^{∞} h(ν) u(t − T − ν) dν   (set ν = t − τ)
      = x(t − T)   (using property (9)).

In other words, the solution is also shifted by T.

Example. Consider the lowpass circuit given by ẋ = −10x + 10u. The impulse response, h, of this system is

  h(t) = 0 for t < 0,  h(t) = 10 e^{−10t} for t ≥ 0.

Note that the ODE and h are uniquely defined by the values of a and b, which are −10 and 10, respectively, in this example. Knowledge of h permits us to compute the response of the system to any input via (8). It is equally valid to graph h instead of exhibiting an explicit formula. In fact, in order to derive a mathematical model of a linear system from test data, the first step is typically to estimate the impulse response from the test data! The graph of the impulse response for this example is shown below.
[Figure: impulse response h of the lowpass circuit, Vout (volts) vs. time (seconds).]

Yet another view of the first order ODE

Suppose the input has “exponential form”: u(t) = e^{st}, where s may be a complex number (this is denoted by s ∈ C). Although it may seem confusing as to why we would allow an input to take on complex values, and one may question its physical significance when the engineering quantities we work with are real-valued, it is very convenient to work with complex algebra when studying linear systems, even when we are interested in solutions for problems in which all of the data are real. A wide variety of functions can be represented by e^{st} where s ∈ C:

• constant: s = 0 ⟹ R(e^{st}) = 1 for all t
• sinusoid with frequency ω, phase φ and amplitude α > 0: s = jω, β ∈ C such that |β| = α and ∠β = φ ⟹ R(β e^{st}) = α cos(ωt + φ) for all t
• damped sinusoid with frequency ω and decay rate σ < 0: s = σ + jω ⟹ R(e^{st}) = e^{σt} cos(ωt) for all t

The notation R(·) means “take the real part of its argument”.

Now let’s find a particular solution, also of exponential form: xp(t) = H e^{st}, where H is a (possibly complex) constant that is determined by substitution into the nonhomogeneous ODE:

  d/dt (H e^{st}) = a H e^{st} + b e^{st}
  ⟹ H s e^{st} = a H e^{st} + b e^{st}
  ⟹ H s = a H + b   (because e^{st} ≠ 0 for all t)
  ⟹ H (s − a) = b.

If we further assume s ≠ a, then H = b/(s − a) and hence a particular solution is

  xp(t) = H e^{st} = (b/(s − a)) e^{st}.

When s = a, the assumed particular solution cannot satisfy the ODE, so you must continue your search (a particular solution of the form t e^{at} works). We can consider H to be a function of s defined by

  H(s) = b/(s − a),

where s ∈ C but s ≠ a. This function is called the transfer function of the system.

Example. Consider the lowpass filter ODE ẋ = −10x + 10u. The transfer function of this system is

  H(s) = 10/(s + 10).

Suppose the input is defined to be u(t) = e^{−2t} cos(5t). This function can be represented as the real part of an exponential with complex constant s = −2 + j5: u(t) = R(e^{(−2+j5)t}). If the function e^{(−2+j5)t} is applied to the system, then the particular solution that is a scaled version of this input is (the scaling constant is the transfer function evaluated at s = −2 + j5):

  xp(t) = H(−2 + j5) e^{(−2+j5)t} = (10/(−2 + j5 + 10)) e^{(−2+j5)t}.

This particular solution satisfies the ODE when the forcing function is given by e^{st}, not the actual input u(t) = R(e^{st}). The particular solution can be split into its real and imaginary parts, i.e. xp(t) = R(xp(t)) + jI(xp(t)), and substituted into the ODE:

  d/dt (R(xp(t)) + jI(xp(t))) = −10 (R(xp(t)) + jI(xp(t))) + 10 e^{(−2+j5)t}
  ⟹ d/dt R(xp(t)) = −10 R(xp(t)) + 10 R(e^{(−2+j5)t})
    d/dt I(xp(t)) = −10 I(xp(t)) + 10 I(e^{(−2+j5)t})

The middle equation shows that R(xp) is a particular solution associated with the actual input u, since R(e^{(−2+j5)t}) = u:

  d/dt R(xp(t)) = −10 R(xp(t)) + 10 e^{−2t} cos(5t).

Note that the imaginary part of xp also satisfies the ODE when the input is I(e^{(−2+j5)t}). A plot of u and R(xp) is shown below.

[Figure: exponential-form input u and the corresponding particular solution R(xp).]
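The scaling-by-H(s) claim is easy to confirm with complex arithmetic. The sketch below (plain Python; the finite-difference step and test time are arbitrary choices) checks that xp(t) = H(s)e^{st} satisfies ẋp = axp + be^{st} for s = −2 + j5, and that its real part satisfies the ODE driven by the real input e^{−2t}cos(5t).

```python
import cmath

# the example system x' = a x + b u with exponential-form input
a, b = -10.0, 10.0
s = complex(-2, 5)          # u(t) = e^{-2t} cos(5t) = Re(e^{st})
H = b / (s - a)             # transfer function evaluated at s

def xp(t):
    """Particular solution H e^{st} (complex-valued)."""
    return H * cmath.exp(s * t)

def dxp(t, h=1e-6):
    """Central finite-difference derivative of xp."""
    return (xp(t + h) - xp(t - h)) / (2 * h)

t = 0.3
# the complex particular solution satisfies x' = a x + b e^{st}
assert abs(dxp(t) - (a * xp(t) + b * cmath.exp(s * t))) < 1e-5

# its real part satisfies the ODE driven by the real input e^{-2t} cos(5t)
u = cmath.exp(s * t).real
assert abs(dxp(t).real - (a * xp(t).real + b * u)) < 1e-5
```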
Thus, the transfer function enables us to quickly produce a particular solution when the input can be expressed as R(e^{st}) for some s ∈ C. The transfer function has a more general role in describing solutions of ODEs, but those details will have to wait until we introduce the Laplace transform. ♣

Special case when u(t) = cos(ωt)

When u is sinusoidal it can be expressed as the real part of a complex exponential function: u(t) = cos(ωt) = R(e^{jωt}). Thus, the sinusoidal particular solution is

  xp(t) = R(H(jω) e^{jωt}) = |H(jω)| cos(ωt + ∠H(jω)).

When setting s = jω in the transfer function, the resulting expression H(jω) is of such fundamental importance in the study of linear systems that it is given its own name: H(jω) is the frequency response function of the system. In general, H(jω) is complex-valued, but it can be expressed in terms of two real quantities: its magnitude, denoted |H(jω)|, and its phase, denoted ∠H(jω). The magnitude and phase can be plotted versus frequency to generate frequency response plots. These plots are an extremely effective way of assessing the behavior of the system to sinusoidal inputs.

Example. The frequency response plots (also called “Bode plots” in the control systems field) of the lowpass circuit ODE ẋ = −10x + 10u are shown below.

Magnitude plot:
[Figure: magnitude of H(jω) (V/V, log scale) vs. frequency (rad/s).]

Phase plot:

[Figure: phase of H(jω) (degrees) vs. frequency (rad/s).]

Three Sides of the Same Coin
Summary slide for first-order linear systems:

First-order, linear, constant coefficient ODE:

  ẋ = ax + bu,  x(0) = x0,  u(t) defined for t ≥ 0
  ⟹ x(t) = e^{at} x0 + ∫_{0}^{t} e^{a(t−τ)} b u(τ) dτ  (t ≥ 0).

This form is often a result of first-principles modeling, and it is a useful form for numerical analysis, including numerical solutions.

Convolution representation (first-order, linear, time-invariant system):

  x(t) = ∫_{−∞}^{∞} h(t − τ) u(τ) dτ,  u defined for t ∈ (−∞, ∞),

with “impulse response function”

  h(t) = 0 for t < 0,  h(t) = e^{at} b for t ≥ 0.

The impulse response function is often obtained as the result of a test on a physical system; the next step is to match an analytical model to the empirical impulse response function. The convolution representation may also be used to define linear systems that cannot be described by ODEs (in other words, all ODEs “generate” an appropriate impulse response function; however, not all impulse response functions are associated with an ODE!).

Transfer function representation:

  “transfer function” H(s) = b/(s − a);  u(t) = e^{st} ⟹ xp(t) = H(s) e^{st} for any s ≠ a.

This is useful for computing particular solutions when the input is of exponential form; it has a more general interpretation once Laplace transforms are studied; and it is useful for manipulating block diagrams. Setting s = jω gives the “frequency response function” H(jω) = b/(jω − a). Frequency response functions give graphical insight into system behavior; they have a more general interpretation once Fourier transforms are studied; and frequency response testing is a reliable method for identifying models of physical systems.
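As a closing numerical sketch tying these views together (plain Python; the truncation window and step count are implementation choices): for the lowpass example driven by u(t) = cos(ωt), the convolution representation evaluated after the transient has died out matches the steady-state sinusoid predicted by the frequency response function.

```python
import cmath, math

a, b = -10.0, 10.0   # the lecture's lowpass example
w = 10.0             # input frequency, u(t) = cos(w t)

# 1) transfer-function route: steady-state amplitude and phase from H(jw)
H = b / (1j * w - a)
amp, phase = abs(H), cmath.phase(H)

# 2) convolution route: x(t) = integral_{-inf}^t e^{a(t-tau)} b cos(w tau) dtau,
#    truncated where the impulse response has decayed to nothing
def x_conv(t, window=3.0, n=60000):
    s, dt = 0.0, window / n
    for k in range(n + 1):
        tau = t - window + k * dt
        wgt = 0.5 if k in (0, n) else 1.0
        s += wgt * math.exp(a * (t - tau)) * b * math.cos(w * tau)
    return s * dt

# with the input acting since t = -inf there is no transient, so the
# convolution equals the frequency-response prediction |H| cos(wt + angle(H))
t = 2.0
assert abs(x_conv(t) - amp * math.cos(w * t + phase)) < 1e-3
```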