EE 205A Matrix Analysis
Fall 2014
Class meets MW 10 am–12 noon in 1260 Franz
Instructor: Lara Dolecek
[email protected]
OH Wed 2-3 pm, Eng IV, 56-147B
Special Reader: Chu-Hsiang (Sean) Huang
seanhuan
Introduction to Switched-Capacitor Circuits
Our study of amplifiers in previous chapters has dealt with only cases where the input signal is continuously available and applied to the circuit and the output signal is continuously observed. Called continuous-time
Prof. Alan J. Laub December 10, 2007
EE 205A FINAL EXAMINATION
Fall 2007
Instructions:
(a) The exam is closed book (except for one page of notes) and will last 2 hours.
(b) Notation will conform as closely as possible to the standard notation used in the text
Chapter 10
Canonical Forms
1. Show that if a triangular matrix is normal, then it must be diagonal.
Answer 10.1 We prove this by induction on n. Let T ∈ ℂⁿˣⁿ be normal and, without loss of generality, assume it is upper triangular.
For n = 1, the matrix T is
Chapter 7
Projections, Inner Product
Spaces, and Norms
1. If P is an orthogonal projection, prove that P⁺ = P.
Answer 7.1 Straightforward verification of the four Penrose conditions.
2. Suppose P and Q are orthogonal projections and P + Q = I. Prove that
Chapter 9
Eigenvalues and Eigenvectors
1. Let A ∈ ℂⁿˣⁿ have distinct eigenvalues λ₁, …, λₙ with corresponding right eigenvectors x₁, …, xₙ and left eigenvectors y₁, …, yₙ, respectively. Let v ∈ ℂⁿ be an arbitrary vector. Show that v can be expressed
Chapter 11
Linear Differential and
Difference Equations
1. Let P ∈ ℝⁿˣⁿ be a projection. Show that e^P ≈ I + 1.718P.
Answer 11.1 Since P is a projection, P² = P and hence P^k = P for all k ≥ 1. Therefore
e^P = Σ_{k=0}^∞ (1/k!) P^k = I + (Σ_{k=1}^∞ 1/k!) P = I + (e − 1)P ≈ I + 1.718P.
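Since the argument only uses P² = P, the closed form e^P = I + (e − 1)P is easy to sanity-check numerically. A minimal sketch in numpy (the projection below is an arbitrary illustrative example, not one from the notes):

```python
import numpy as np

# Orthogonal projection P = A (A^T A)^{-1} A^T onto the column space of a
# full-column-rank matrix A (an arbitrary example), so that P @ P == P.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

def expm_series(M, terms=30):
    # Matrix exponential from its power series sum_k M^k / k!.
    out = np.zeros_like(M)
    term = np.eye(M.shape[0])
    for k in range(terms):
        out += term
        term = term @ M / (k + 1)
    return out

lhs = expm_series(P)
rhs = np.eye(3) + (np.e - 1.0) * P     # the closed form e^P = I + (e-1)P
print(np.allclose(lhs, rhs))           # True for any projection P
```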
Chapter 13
Kronecker Products
1. For any two matrices A and B for which the indicated matrix product is defined, show that (vec(A))ᵀ vec(B) = Tr(AᵀB). In particular, if B ∈ ℝⁿˣⁿ then Tr(B) = vec(Iₙ)ᵀ vec(B).
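Both identities are easy to check numerically; the sketch below uses column-stacking for vec, which is the convention the identity assumes (the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

def vec(M):
    # Column-stacking vectorization (Fortran order), the usual vec operator.
    return M.reshape(-1, order="F")

# vec(A)^T vec(B) = Tr(A^T B)
print(np.isclose(vec(A) @ vec(B), np.trace(A.T @ B)))          # True

# Special case: Tr(B) = vec(I_n)^T vec(B) for square B.
Bsq = rng.standard_normal((4, 4))
print(np.isclose(vec(np.eye(4)) @ vec(Bsq), np.trace(Bsq)))    # True
```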
5.3 Orthonormal set of vectors
Proof of independence-dimension inequality. The proof is by induction on the dimension n. First consider a linearly independent set {a₁, …, aₖ} of 1-vectors. We must have a₁ ≠ 0. This means that every element aᵢ
1 Discussion session week 8
1.1 Chapter 7
Given the following system of equations:
x + 2y = 3
3x + 2y = 5
x + y = 2.09
Find the least squares solution for x and y. Define the matrix A and the vectors b and z as follows:
A = [1 2; 3 2; 1 1],  b = [3; 5; 2.09],  z = [x; y].
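The least squares solution works out to x = 1, y = 1.01, which can be confirmed with numpy.linalg.lstsq:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0],
              [1.0, 1.0]])
b = np.array([3.0, 5.0, 2.09])

# Least squares: minimize ||A z - b||_2 over z = (x, y).
z, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(z)                                  # x = 1, y = 1.01

# The same solution from the normal equations (A^T A) z = A^T b.
z_normal = np.linalg.solve(A.T @ A, A.T @ b)
```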
Solutions to the Exercises
Choose any α, β ∈ F and u₁, u₂ ∈ R ∩ S. Then αu₁ + βu₂ ∈ R and αu₁ + βu₂ ∈ S, so (αu₁ + βu₂) ∈ R ∩ S. Therefore R ∩ S is a vector space, and hence a subspace of V.
Let Pₙ denote the vector space of polynomials of degree less than or equal to n
Matrix Analysis for Scientists and Engineers
(V2) Choose any α, β ∈ ℝ and p ∈ P₂. Then (α·β)·p = (αβ)(p₀ + p₁x + p₂x²) = αβp₀ + αβp₁x + αβp₂x² = α(βp₀ + βp₁x + βp₂x²) = α·(β·p).
(V3) Choose any α, β ∈ ℝ and
Solutions to the Exercises
3. Let x, y ∈ ℝⁿ and suppose further that xᵀy ≠ 1. Show that
(I − xyᵀ)⁻¹ = I − (1/(xᵀy − 1)) xyᵀ.
By the Sherman–Morrison–Woodbury formula,
(I − xyᵀ)⁻¹ = (I + x(−1)yᵀ)⁻¹
= I − x(−1 + yᵀx)⁻¹yᵀ
= I − (1/(xᵀy − 1)) xyᵀ.
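A quick numerical check of this rank-one inverse formula (the vectors below are arbitrary choices satisfying the hypothesis xᵀy ≠ 1):

```python
import numpy as np

# Arbitrary illustrative vectors with x^T y != 1 (here x^T y = 11.5).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.5, -1.0, 0.0, 2.0, 1.0])
assert not np.isclose(x @ y, 1.0)            # hypothesis of the exercise

M = np.eye(5) - np.outer(x, y)               # I - x y^T
M_inv = np.eye(5) - np.outer(x, y) / (x @ y - 1.0)

print(np.allclose(M @ M_inv, np.eye(5)))     # True
print(np.allclose(M_inv, np.linalg.inv(M)))  # True
```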
4. Let x,
General Proof Techniques
EE 205A (Laub)
Let P, Q be mathematical statements.
Proof by forward logic: To prove P ⇒ Q, you often have to go through a number of steps and show that P ⇒ P₁ ⇒ P₂ ⇒ ⋯ ⇒ Q.
Proof of if and only if statements: To prove P ⇔ Q, you generally need
Chapter 3
Linear Transformations
1. Let A = [2 3 4; 8 5 1] and consider A as a linear transformation mapping ℝ³ to ℝ². Find the matrix representation of A with respect to the bases
Determining a JCF
EE 205A (Laub)
Suppose we have a matrix A ∈ ℝ⁴ˣ⁴ with characteristic polynomial π(λ) = λ⁴ = 0 and we wish to determine its Jordan canonical form J. There are four cases to consider based on the value of the geometric multiplicity g = dim N(A − λI) =
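In practice the geometric multiplicity can be computed as g = n − rank(A − λI). A small numpy sketch (the nilpotent test matrices below are illustrative stand-ins, not a specific A from the notes):

```python
import numpy as np

def geometric_multiplicity(A, lam):
    # g = dim N(A - lam I) = n - rank(A - lam I)
    n = A.shape[0]
    return n - np.linalg.matrix_rank(A - lam * np.eye(n))

# One 4x4 Jordan block with eigenvalue 0: a single eigenvector, g = 1.
J4 = np.diag(np.ones(3), k=1)
print(geometric_multiplicity(J4, 0.0))    # 1

# Two 2x2 Jordan blocks with eigenvalue 0: g = 2.
J22 = np.zeros((4, 4))
J22[0, 1] = J22[2, 3] = 1.0
print(geometric_multiplicity(J22, 0.0))   # 2
```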
Chapter 8
Linear Least Squares
Problems
1. For A ∈ ℝᵐˣⁿ, b ∈ ℝᵐ, and any y ∈ ℝⁿ, check directly that (I − A⁺A)y and A⁺b are orthogonal vectors.
Answer 8.1
((I − A⁺A)y)ᵀ A⁺b = yᵀ(I − A⁺A)ᵀ A⁺b
= yᵀ(I − A⁺A) A⁺b
= yᵀ(A⁺b − A⁺AA⁺b)
= yᵀ(A⁺b − A⁺b) = 0,
using that A⁺A is symmetric and that A⁺AA⁺ = A⁺ (two of the Penrose conditions).
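The orthogonality can also be confirmed numerically with numpy.linalg.pinv. A rank-deficient A makes the check non-trivial, since for full column rank A⁺A = I and the first vector is simply zero (all data below is randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# A deliberately rank-deficient A (rank 2 with three columns), so A+A != I.
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))
b = rng.standard_normal(5)
y = rng.standard_normal(3)

Ap = np.linalg.pinv(A)            # Moore-Penrose pseudoinverse A+

u = (np.eye(3) - Ap @ A) @ y      # projection of y onto N(A)
v = Ap @ b                        # lies in R(A^T), the row space of A

print(np.isclose(u @ v, 0.0))     # True: the two vectors are orthogonal
```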
Computation of the Jordan Canonical Form: An Example
(Hand Computation)
EE 205A (Laub)
Compute the JCF of the matrix A = [5 2 1; 1 2 1; 1 2 5] and the nonsingular matrix X of eigenvectors and/or principal vectors that effects the similarity transformation.
The
Chapter 4
An Introduction to the
Moore-Penrose
Pseudoinverse
1. Use Theorem 4.4 to compute the pseudoinverse of
A = [1 1; 2 2].
Answer 4.1
A⁺ = lim_{δ→0} (AᵀA + δ²I)⁻¹ Aᵀ
= lim_{δ→0} [5+δ² 5; 5 5+δ²]⁻¹ [1 2; 1 2]
= lim_{δ→0} (1/(δ⁴+10δ²)) [5+δ² −5; −5 5+δ²] [1 2; 1 2]
= lim_{δ→0} (1/(δ²+10)) [1 2; 1 2]
= (1/10) [1 2; 1 2].
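The limit formula of Theorem 4.4 can be checked against numpy.linalg.pinv by taking a small but nonzero δ:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])       # rank 1: no ordinary inverse exists

def pinv_limit(A, delta=1e-4):
    # Theorem 4.4: A+ = lim_{delta -> 0} (A^T A + delta^2 I)^{-1} A^T
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + delta**2 * np.eye(n), A.T)

print(pinv_limit(A))             # approaches (1/10) [[1, 2], [1, 2]]
print(np.allclose(pinv_limit(A), np.linalg.pinv(A), atol=1e-6))   # True
```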
Chapter 5
An Introduction to Singular
Value Decomposition
1. Let X ∈ ℝᵐˣⁿ. If XᵀX = 0, show that X = 0.
Answer 5.1 This is easily seen directly. Let the n columns of X be denoted by xᵢ. The
Chapter 1
Introduction and Review
1. If A ∈ ℝⁿˣⁿ and α is a scalar, what is det(αA)? What is det(−A)?
Answer 1.1 Suppose A ∈ ℝⁿˣⁿ and α ∈ ℝ. Denote the identity matrix of ℝⁿˣⁿ by Iₙ. Then det(αA) = det((αIₙ)A) = det(αIₙ) det(A) = αⁿ det(A). Similarly, det(−A) = (−1)ⁿ det(A).
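A quick numerical confirmation of both determinant identities (the 3×3 matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 1.0]])   # an arbitrary 3x3 example
alpha, n = 2.5, 3

lhs = np.linalg.det(alpha * A)
rhs = alpha**n * np.linalg.det(A)
print(np.isclose(lhs, rhs))       # True: det(alpha A) = alpha^n det(A)
print(np.isclose(np.linalg.det(-A), (-1)**n * np.linalg.det(A)))  # True
```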
The Moore-Penrose Pseudoinverse
(Math 33A: Laub)
In these notes we give a brief introduction to the Moore-Penrose pseudoinverse, a generalization of the inverse of a matrix. The Moore-Penrose pseudoinverse is defined for any
matrix and is unique. Moreover
Chapter 2
An Introduction to Vector
Spaces
1. Suppose {v₁, …, vₖ} is a linearly dependent set. Then show that one of the vectors must be a linear combination of the others.
Answer 2.1 Sinc
Dynamical Systems Examples
EE 205A (Laub)
Example I. Solve dx/dt = Ax(t); x(0) = [1; 2; 3] when A = [5 2 1; 1 2 1; 1 2 5]. This is called an initial value problem (IVP).
Recall from a previous example that λ₁ = λ₂ = λ₃ = 4 and that
X⁻¹ =
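The solution of any such IVP is x(t) = e^(tA) x(0). A sketch of this recipe in numpy, using a small stand-in matrix rather than the A above (the series-based matrix exponential is an illustrative implementation):

```python
import numpy as np

# Solve the IVP dx/dt = A x(t), x(0) = x0, via x(t) = e^(tA) x0.
# A here is a simple stand-in (a rotation generator), not the A above.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
x0 = np.array([1.0, 0.0])

def expm_series(M, terms=40):
    # Matrix exponential from its power series sum_k M^k / k!.
    out = np.zeros_like(M)
    term = np.eye(M.shape[0])
    for k in range(terms):
        out += term
        term = term @ M / (k + 1)
    return out

t = np.pi / 2
x_t = expm_series(t * A) @ x0
print(x_t)      # [cos t, -sin t] = approximately [0, -1]
```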
Matrix Analysis for Scientists and Engineers
Then it is easy to verify that Q = UVᵀ is orthogonal and P = VΣVᵀ is symmetric and positive definite.
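These polar-decomposition factors come directly from the SVD A = UΣVᵀ, and the claims are easy to verify numerically (the matrix below is an arbitrary nonsingular example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])         # an arbitrary nonsingular example

U, s, Vt = np.linalg.svd(A)        # A = U diag(s) V^T
Q = U @ Vt                         # orthogonal factor
P = Vt.T @ np.diag(s) @ Vt         # symmetric positive definite factor

print(np.allclose(Q @ P, A))               # True: A = Q P
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: Q is orthogonal
print(np.all(np.linalg.eigvalsh(P) > 0))   # True: P is positive definite
```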
Copyright 2003 by Alan J. Laub.
Solutions to the Exercises
Multiplying now by S₁⁻¹ on the left and right gives the equation
S₁⁻¹U₁ᵀAAᵀU₁S₁⁻¹ = I. (i)
Turning now to the eigenequation corresponding to the eigenvalues σᵣ₊₁, …, σₘ (all zero), we have that AAᵀU₂ = U₂·0 = 0, whence U₂ᵀAAᵀU₂ = 0. Thus U₂ᵀA = 0.
Now define the matrix V
Matrix Analysis for Scientists and Engineers
If we show that N(B) ≠ 0, then this will mean that B is singular, since it will then not have full rank. Now Bcᵢ = Acᵢ − ⋯ = eᵢ − eᵢ = 0.
Hence cᵢ ∈ N(B). But cᵢ ≠ 0 since A is nonsingul
Theorem. Let R, S ⊆ ℝⁿ with R ⊆ S. Then S⊥ ⊆ R⊥.
Proof: Let v ∈ S⊥ be any vector. Then v ⊥ u for every u ∈ S, as in the previous set. As long as we know that R ⊆ S, every u ∈ R also lies in S, so v ⊥ u for every u ∈ R; hence v ∈ R⊥.
Thus S⊥ ⊆ R⊥ is proved.
Let A
Solutions to the Exercises
Define V = (v₁, …, vₙ) ∈ ℝⁿˣⁿ. The matrix V is square with orthonormal columns, so the rank of V is n. Thus V has an inverse, namely V⁻¹ = Vᵀ, since VᵀV = I. Hence VVᵀ = I = VᵀV, i.e., V is orthogonal.
We must show that (Avᵢ)ᵀ(Avⱼ)