Analytical Topics, Solutions to Homework #8, Fall 2009

[Tail of the Problem 1 solution, a partial-fraction expansion of (sI − A)^{-1} with poles at s = 1 and s = −2; the matrix entries were garbled in extraction.]

6. Another formula for the matrix exponential.
You might remember that for any complex number a ∈ C, e^a = lim_{n→∞} (1 + a/n)^n.
You will establish the matrix analog: for any A ∈ R^{n×n},
e^A = lim_{n→∞} (I + A/n)^n.
To simplify things, you can assume A is diagonalizable.
Hint: diagonalize.
Solution:
Assuming A ∈ R^{n×n} is diagonalizable, there exists an invertible matrix T ∈ R^{n×n} such that A = T diag(λ1, . . . , λn) T^{-1}, where λ1, . . . , λn are the eigenvalues of A. Therefore
(I + A/n)^n = (T T^{-1} + T diag(λ1/n, . . . , λn/n) T^{-1})^n
= (T (I + diag(λ1/n, . . . , λn/n)) T^{-1})^n = T (I + diag(λ1/n, . . . , λn/n))^n T^{-1}.
But I + diag(λ1/n, . . . , λn/n) is diagonal, so its nth power is simply the diagonal matrix whose entries are the nth powers of its diagonal entries. Thus
(I + A/n)^n = T diag((1 + λ1/n)^n, . . . , (1 + λn/n)^n) T^{-1}
and taking the limit as n → ∞ gives
lim_{n→∞} (I + A/n)^n = lim_{n→∞} T diag((1 + λ1/n)^n, . . . , (1 + λn/n)^n) T^{-1}
= T diag(lim_{n→∞} (1 + λ1/n)^n, . . . , lim_{n→∞} (1 + λn/n)^n) T^{-1}
= T diag(e^{λ1}, . . . , e^{λn}) T^{-1} = e^A,
and we are done.
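As a quick numerical sanity check (not part of the original assignment), one can compare (I + A/n)^n with the eigendecomposition value of e^A for an arbitrary diagonalizable example matrix; the error shrinks roughly like 1/n:

```python
import numpy as np

# Numerical illustration of e^A = lim (I + A/n)^n, assuming A diagonalizable.
# The matrix below is an arbitrary example with eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Reference value via eigendecomposition: e^A = T diag(e^{lambda_i}) T^{-1}.
lam, T = np.linalg.eig(A)
expA = ((T * np.exp(lam)) @ np.linalg.inv(T)).real   # T @ diag(e^lam) @ T^{-1}

for n in [10, 1000, 100000]:
    approx = np.linalg.matrix_power(np.eye(2) + A / n, n)
    print(n, np.max(np.abs(approx - expA)))   # error decreases as n grows
```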
7. Affine dynamical systems.
A function f : R^n → R^m is called affine if it is a linear function plus a constant, i.e., of the form f(x) = Ax + b. Affine functions are more general than linear functions, which result when b = 0. We can generalize linear dynamical systems to affine dynamical systems, which have the form
ẋ = Ax + Bu + f,    y = Cx + Du + g.
Fortunately we don't need a whole new theory for (or course on) affine dynamical systems; a simple shift of coordinates converts it to a linear dynamical system. Assuming A is invertible, define x̃ = x + A^{-1} f and ỹ = y − g + CA^{-1} f. Show that x̃, u, and ỹ are the state, input, and output of a linear dynamical system.
Solution:
All we have to do is show that x̃, u, and ỹ satisfy a linear dynamical system. First note that
dx̃/dt = d/dt (x + A^{-1} f) = dx/dt    (since A^{-1} f ∈ R^n is constant)
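The claimed linear relations ẋ̃ = Ax̃ + Bu and ỹ = Cx̃ + Du can also be checked numerically. The sketch below uses arbitrary random matrices (none of them from the homework) at a single random point:

```python
import numpy as np

# Numerical sanity check of the coordinate shift x~ = x + A^{-1} f,
# y~ = y - g + C A^{-1} f, with arbitrary random data.
rng = np.random.default_rng(0)
n, m, p = 3, 2, 2
A = rng.standard_normal((n, n)) + 3 * np.eye(n)   # shift keeps A invertible
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))
D = rng.standard_normal((p, m))
f = rng.standard_normal(n)
g = rng.standard_normal(p)

x = rng.standard_normal(n)
u = rng.standard_normal(m)

xdot = A @ x + B @ u + f          # affine dynamics
y = C @ x + D @ u + g

xt = x + np.linalg.solve(A, f)    # shifted state  x~ = x + A^{-1} f
yt = y - g + C @ np.linalg.solve(A, f)

# x~dot = xdot (the shift is constant), and it should equal A x~ + B u
assert np.allclose(xdot, A @ xt + B @ u)
assert np.allclose(yt, C @ xt + D @ u)
print("linear form verified")
```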
© J.C. Cockburn, page 1 of 7
and therefore plot(t,[S(i+1,2);S(8,2);S(8,2)])
hold on;
Analytical Topics plot(t,[S(i+1,2);S(8,2);S(8,2)],’o’) Solutions to Homework #8  Fall 2009.1
plot(t_comp,[S_comp(i_comp+1,2);S_comp(14,2);S_comp(14,2)],’’)
plot(t_comp,[S_comp(i_comp+1,2);S_comp(14,2);S_comp(14,2)],’o’)
Problem 2
axis([1,8,3,3]); grid on; xlabel(’t’); ylabel(’s22’)
2. Properties of the matrix exponential.
(a) Show that e^{A+B} = e^A e^B if A and B commute, i.e., AB = BA. The converse is also true, i.e., if e^{A+B} = e^A e^B then A and B commute. (But it is hard to show.)
(b) Carefully show that (d/dt) e^{At} = A e^{At} = e^{At} A.
Solution:
(a) We will show that if A and B commute then e^A e^B = e^{A+B}. We begin by writing the expressions for e^A and e^B:
e^A = I + A + A^2/2! + A^3/3! + · · ·
e^B = I + B + B^2/2! + B^3/3! + · · ·
Now we multiply both expressions and get
e^A e^B = I + A + B + AB + A^2/2! + B^2/2! + A^3/3! + A^2 B/2! + A B^2/2! + B^3/3! + · · ·
= I + A + B + (A^2 + 2AB + B^2)/2! + (A^3 + 3A^2 B + 3A B^2 + B^3)/3! + · · ·
Now we note that, if A and B commute, we are able to write things such as (A + B)^2 = A^2 + 2AB + B^2. So, if A and B commute we can finally write
e^A e^B = I + (A + B) + (A + B)^2/2! + (A + B)^3/3! + · · · = e^{A+B}.
(b) It suffices to note that A commutes with itself. Then one can write
d e^{At}/dt = A + A^2 t + A^3 t^2/2! + · · ·
= A (I + At + (At)^2/2! + · · ·)
= (I + At + (At)^2/2! + · · ·) A
= A e^{At} = e^{At} A
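Part (a) is easy to spot-check numerically. The sketch below (with arbitrary example matrices, not from the homework) uses scipy.linalg.expm and builds a commuting pair from powers of the same matrix:

```python
import numpy as np
from scipy.linalg import expm

# e^{A+B} = e^A e^B when AB = BA.  Any two polynomials in the same matrix
# commute, so B = A @ A is a convenient commuting partner (arbitrary example).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = A @ A                      # commutes with A
assert np.allclose(A @ B, B @ A)
assert np.allclose(expm(A + B), expm(A) @ expm(B))

# A generic pair does NOT commute, and the identity generally fails:
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
D = np.array([[0.0, 0.0],
              [1.0, 0.0]])
print(np.allclose(expm(C + D), expm(C) @ expm(D)))   # False
```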
3. Determinant of matrix exponential.
(a) Suppose the eigenvalues of A ∈ R^{n×n} are λ1, . . . , λn. Show that the eigenvalues of e^A are e^{λ1}, . . . , e^{λn}. You can assume that A is diagonalizable, although the result is true in the general case.
(b) Show that det e^A = e^{Tr A}.
Hint: det X is the product of the eigenvalues of X , and Tr Y is the sum of the
eigenvalues of Y .
Solution:
(a) Suppose that A is diagonalizable with eigenvalues λ1, . . . , λn. Then an invertible matrix T exists such that
A = T diag(λ1, . . . , λn) T^{-1}
and we get
e^A = T e^{diag(λ1 ,...,λn)} T^{-1} = T diag(e^{λ1}, . . . , e^{λn}) T^{-1}.
As a result
e^A T = T diag(e^{λ1}, . . . , e^{λn}),
which shows that the eigenvalues of e^A are e^{λ1}, . . . , e^{λn}. Note that this also shows that the eigenvectors of A (the columns of T) and of e^A are the same.
(b) The determinant of a matrix is equal to the product of its eigenvalues, and therefore
det e^A = e^{λ1} e^{λ2} · · · e^{λn} = e^{λ1 + λ2 + ··· + λn}.
But λ1 + λ2 + · · · + λn is the sum of the eigenvalues of A, which is equal to Tr A. Thus
det e^A = e^{Tr A}.
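Both identities of this problem can be spot-checked with scipy.linalg.expm; the matrix below is an arbitrary example (triangular, so its eigenvalues 2 and −1 are visible on the diagonal):

```python
import numpy as np
from scipy.linalg import expm

# Eigenvalues of e^A are e^{lambda_i}, and det(e^A) = e^{Tr A}
# (arbitrary example matrix with eigenvalues 2 and -1).
A = np.array([[2.0, 1.0],
              [0.0, -1.0]])
lam = np.linalg.eigvals(A)
mu = np.linalg.eigvals(expm(A))
assert np.allclose(np.sort(np.exp(lam)), np.sort(mu))
assert np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A)))
print("eigenvalue and determinant identities verified")
```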
4. Characteristic polynomial.
Consider the characteristic polynomial X(s) = det(sI − A) of the matrix A ∈ R^{n×n}.
(a) Show that X is monic, which means that its leading coefficient is one: X(s) = s^n + · · ·.
(b) Show that the s^{n−1} coefficient of X is given by −Tr A. (Tr X is the trace of a matrix: Tr X = Σ_{i=1}^n X_ii.)
(c) Show that the constant coefficient of X is given by det(−A).
(d) Let λ1, . . . , λn denote the eigenvalues of A, so that
X(s) = s^n + a_{n−1} s^{n−1} + · · · + a_1 s + a_0 = (s − λ1)(s − λ2) · · · (s − λn).
By equating coefficients show that a_{n−1} = −Σ_{i=1}^n λi and a_0 = Π_{i=1}^n (−λi).
Solution:
(a) Expand the determinant to get
det(sI − A) = (s − a11) det Ã + other terms,
where Ã is the matrix sI − A with its first row and first column removed. The other terms are similar, except that each determinant is multiplied by a scalar rather than by a degree-one factor. Expanding det Ã gives a similar expression, and after expanding all terms we reach
det(sI − A) = Π_{i=1}^n (s − aii) + other terms.
The other terms contribute polynomials of degree less than n, and since the first term is a monic polynomial of degree n, it follows that det(sI − A) is also monic.
(b) Let's take a closer look at the relation
det(sI − A) = Π_{i=1}^n (s − aii) + other terms.
A little reasoning shows that the other terms are in fact polynomials of degree less than n − 1 (provided that n > 1; for n = 1 we have the trivial case). This is so because, in the first expansion of item (a), Ã is the only submatrix with n − 1 entries containing s, and the same applies to the subsequent expansions. It then follows that the s^{n−1} coefficient of X is the s^{n−1} coefficient of Π_{i=1}^n (s − aii), which is −Σ_{i=1}^n aii = −Tr A.
(c) The constant coefficient is given by X(0). But X(s) is simply det(sI − A). Taking s = 0 gives X(0) = det(−A).
(d) First we note that, if n = 1, the relations are valid for the polynomial s − λ1.
Now suppose the relations are valid for a monic polynomial P(s) of degree n. Multiply P(s) by s − λi and expand as
P(s)(s − λi) = s P(s) − λi P(s).
The polynomial s P(s) is monic with degree n + 1 and its constant coefficient is zero. The polynomial −λi P(s) has degree n; its s^n coefficient is −λi and its constant coefficient is −λi Π_j (−λj), the product taken over the roots λj of P(s). Since the constant coefficient of s P(s) is zero, we conclude by induction that a0 = Π_{i=1}^n (−λi). Since P(s) satisfies the properties, its s^{n−1} coefficient is −Σ_j λj; adding the −λi contributed by the second term, we conclude, again by induction, that a_{n−1} = −Σ_{i=1}^n λi.
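These coefficient facts can be spot-checked with NumPy, whose np.poly returns the characteristic-polynomial coefficients of a matrix (an arbitrary random example, not from the homework):

```python
import numpy as np

# Coefficients of det(sI - A): leading 1, s^{n-1} coefficient -Tr A,
# constant term det(-A)  (arbitrary random example matrix).
rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

coeffs = np.poly(A)            # [1, a_{n-1}, ..., a_1, a_0]
assert np.isclose(coeffs[0], 1.0)                 # monic
assert np.isclose(coeffs[1], -np.trace(A))        # a_{n-1} = -sum of eigenvalues
assert np.isclose(coeffs[-1], np.linalg.det(-A))  # a_0 = prod of (-lambda_i)
print("characteristic polynomial coefficients verified")
```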
5. Spectral resolution of the identity.
Suppose A ∈ R^{n×n} has n linearly independent eigenvectors p1, . . . , pn, with p_i^T p_i = 1, i = 1, . . . , n, and associated eigenvalues λi. Let P = [p1 · · · pn] and Q = P^{-1}, and let q_i^T be the ith row of Q.
(a) Let R_k = p_k q_k^T. What is the range of R_k? What is the rank of R_k? Can you describe the null space of R_k?
(b) Show that R_i R_j = 0 for i ≠ j. What is R_i^2?
(c) Show that
(sI − A)^{-1} = Σ_{k=1}^n R_k / (s − λ_k).
Note that this is a partial fraction expansion of (sI − A)^{-1}. For this reason the R_i's are called the residue matrices of A.
(d) Show that R1 + · · · + Rn = I. For this reason the residue matrices are said to constitute a resolution of the identity.
(e) Find the residue matrices for the given matrix A [entries lost in extraction] both ways described above (i.e., find P and Q, then calculate the R's, and then do a partial fraction expansion of (sI − A)^{-1} to find the R's).
Solution:
(a) Note that because R_k may be complex, the linear spaces we work with allow complex scalar multiplication. Using R_k = p_k q_k^T,
R(R_k) = { R_k x | x ∈ C^n } = { p_k q_k^T x | x ∈ C^n } = { α p_k | α ∈ C } = span{p_k}.
[The rest of this solution was overlaid in extraction by a second problem set, which follows.]

Problem 5: Matrix Functions
Let A = [ 1  2 ; −2  −3 ].
a) Use the techniques developed in class for evaluating functions of matrices to find sin(A) and cos(A) in closed form.
b) Prove that sin^2(A) + cos^2(A) = I.
c) Are the following formulas both correct?
tan(A) = (cos(A))^{-1} sin(A),    tan(A) = sin(A) (cos(A))^{-1}.
If so, show it; numerically, give an estimate of the accuracy of your results.
Solutions
Part a) Since det(sI − A) = s^2 + 2s + 1, A has one eigenvalue λ = −1 of multiplicity two. Let f(s) = sin(s); since A is a 2 × 2 matrix, define g(s) = α0 + α1 s. Then
f(s)|_{s=−1} = sin(−1) = α0 + α1(−1)
df/ds|_{s=−1} = cos(−1) = α1
Therefore
α0 = cos(1) − sin(1) = −0.3012
α1 = cos(1) = 0.5403
Similarly, for f(s) = cos(s) and g(s) = β0 + β1 s,
f(s)|_{s=−1} = cos(−1) = β0 + β1(−1)
df/ds|_{s=−1} = −sin(−1) = β1
Therefore
β0 = cos(1) + sin(1) = 1.3818
β1 = sin(1) = 0.8415
Therefore,
sin(A) = α0 I + α1 A = [ 2 cos(1) − sin(1)   2 cos(1) ; −2 cos(1)   −2 cos(1) − sin(1) ] = [ 0.2391  1.0806 ; −1.0806  −1.9221 ],
cos(A) = β0 I + β1 A = [ cos(1) + 2 sin(1)   2 sin(1) ; −2 sin(1)   cos(1) − 2 sin(1) ] = [ 2.2232  1.6829 ; −1.6829  −1.1426 ].
Part b)
From the results of part a) we obtain
sin^2(A) = [ sin^2(1) − 4 cos(1) sin(1)   −4 cos(1) sin(1) ; 4 cos(1) sin(1)   sin^2(1) + 4 cos(1) sin(1) ],
cos^2(A) = [ cos^2(1) + 4 cos(1) sin(1)   4 cos(1) sin(1) ; −4 cos(1) sin(1)   cos^2(1) − 4 cos(1) sin(1) ].
Therefore
sin^2(A) + cos^2(A) = [ sin^2(1) + cos^2(1)   0 ; 0   sin^2(1) + cos^2(1) ] = I.
Part c)
Let f(s) = tan(s) and g(s) = γ0 + γ1 s. Then
f(s)|_{s=−1} = tan(−1) = γ0 + γ1(−1)
df/ds|_{s=−1} = sec^2(−1) = γ1
Therefore
γ0 = −tan(1) + sec^2(1) = 1.8681
γ1 = sec^2(1) = 3.4255
and
tan(A) = γ0 I + γ1 A = [ −tan(1) + 2 sec^2(1)   2 sec^2(1) ; −2 sec^2(1)   −tan(1) − 2 sec^2(1) ] = [ 5.2936  6.8510 ; −6.8510  −8.4084 ].
From part a),
(cos(A))^{-1} = [ 2.2232  1.6829 ; −1.6829  −1.1426 ]^{-1} = [ −3.9140  −5.7649 ; 5.7649  7.6157 ].
It can be verified that
sin(A) (cos(A))^{-1} = (cos(A))^{-1} sin(A) = [ 5.2937  6.8511 ; −6.8511  −8.4086 ].
The error between the numerical values of tan(A) and the above expressions is
[ 0.0573  0.1253 ; −0.1253  −0.1934 ] × 10^{−3}.
This suggests that the two expressions agree to about 10^{−4}. Taken at face value this would be a poor, generally unacceptable approximation; for better results, more significant digits must be carried in the evaluation of each of the functions. Note, however, that since only four significant digits were used, this is not an unexpected result.
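As a full-double-precision cross-check (not part of the original solution), SciPy's matrix sine and cosine reproduce both identities for the matrix of this problem:

```python
import numpy as np
from scipy.linalg import sinm, cosm

# Cross-check of parts b) and c) for A = [1 2; -2 -3].
A = np.array([[1.0, 2.0],
              [-2.0, -3.0]])
S, C = sinm(A), cosm(A)

# Part b): sin^2(A) + cos^2(A) = I
assert np.allclose(S @ S + C @ C, np.eye(2))

# Part c): both orderings of tan(A) agree, since sin(A) and cos(A) are
# functions of the same matrix and therefore commute.
Cinv = np.linalg.inv(C)
assert np.allclose(Cinv @ S, S @ Cinv)
print(np.round(Cinv @ S, 4))   # agrees with tan(A) computed above
```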
Note: Actually, parts b) and c) should not be solved as above, since that approach does not prove that sin^2(A) + cos^2(A) = I or tan(A) = (cos(A))^{-1} sin(A) for an arbitrary A ∈ R^{n×n}. It only shows the validity of the above formulas for the particular matrix in question.
A more general approach uses power series expansions and their convergence properties. For example,
sin^2(x) = x^2 − (1/3) x^4 + (2/45) x^6 − (1/315) x^8 + · · ·
cos^2(x) = 1 − x^2 + (1/3) x^4 − (2/45) x^6 + (1/315) x^8 − · · ·
(cos(x))^{-1} = 1 + (1/2) x^2 + (5/24) x^4 + (61/720) x^6 + (277/8064) x^8 + · · ·
tan(x) = x + (1/3) x^3 + (2/15) x^5 + (17/315) x^7 + (62/2835) x^9 + · · ·
(cos(x))^{-1} sin(x) = x + (1/3) x^3 + (2/15) x^5 + (17/315) x^7 + (62/2835) x^9 + · · ·
Therefore, provided that the spectrum of A is included in the domain of analyticity of each of the above functions (which implies that the above series converge), it is clear that
sin^2(A) + cos^2(A) = I
tan(A) = (cos(A))^{-1} sin(A) = sin(A) (cos(A))^{-1}.
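The quoted series can be verified symbolically; the sketch below uses SymPy (assumed available) to confirm the tan(x) coefficients and the sin^2 + cos^2 identity at the series level:

```python
import sympy as sp

# Symbolic check of the displayed series coefficients (not tied to any matrix).
x = sp.symbols('x')
tan_series = sp.series(sp.tan(x), x, 0, 10).removeO()
ratio_series = sp.series(sp.sin(x) / sp.cos(x), x, 0, 10).removeO()
assert sp.simplify(tan_series - ratio_series) == 0   # tan = (cos)^{-1} sin

sum_series = sp.series(sp.sin(x)**2 + sp.cos(x)**2, x, 0, 10).removeO()
assert sp.simplify(sum_series - 1) == 0              # sin^2 + cos^2 = 1

print(tan_series)   # coefficients 1, 1/3, 2/15, 17/315, 62/2835 as above
```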
This note was uploaded on 07/28/2011 for the course EE 263, taught by Professor Boyd, S., during the Summer '08 term at Stanford.