Elimination as factorization
The rank of a matrix and of its (conjugate) transpose
In this section, let A^c denote either the transpose or the conjugate transpose of the matrix A. Then, either way, A = VW^c iff A^c = WV^c. This trivial observation implies all …
Every complex (square) matrix is similar to an upper triangular matrix
10.20 T/F
(a) The only diagonalizable matrix A having just one factorization A = VMV^{-1} with M diagonal is the empty matrix.
(b) If A is the linear map of multiplication by a scalar …
Three interesting properties of the power sequence of a linear map
showing that our definition of what it means for A^k to converge to B is independent of the particular matrix norm we use. We might even have chosen the matrix norm

    ‖A‖ := max_{i,j} |A(i, j)| = …
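The max-entry norm just mentioned can be tried out concretely. Below is a small pure-Python sketch (the matrix A is an assumed example, not one from the text): it has spectral radius 1/2 < 1, so its powers must converge to the zero matrix in this, and every other, matrix norm.

```python
# Sketch (assumed example): convergence of the power sequence A^k measured
# in the max-entry norm  ||A|| := max_{i,j} |A(i,j)|.

def matmul(A, B):
    """Plain triple-loop matrix product."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def maxnorm(A):
    """The norm ||A|| := max_{i,j} |A(i,j)| discussed above."""
    return max(abs(x) for row in A for x in row)

A = [[0.5, 0.25],
     [0.0, 0.5]]

P = A
for k in range(1, 60):          # P becomes A^60
    P = matmul(P, A)
print(maxnorm(P) < 1e-9)        # the powers die off entrywise
```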
Splitting off the nondefective eigenvalues
Recall that the scalar λ is called a defective eigenvalue for A ∈ L(X) in case

    null(A − λ id) ∩ ran(A − λ id) ≠ {0}.

(11.5) Proposition: If M is a set of nondefective eigenvalues of …
Three interesting properties of the power sequence of a linear map: The sequel
Since A is of rank 1, dim null A = n − 1. Let V be a basis for null A, i.e., V ∈ L(R^{n−1}, null A) invertible. Then U := [V, x] is 1-1 (hence a basis for R^n) if and only if x ∉ ran V …
The power method
The simple background for the success of the power method is the following corollary to (11.10) Theorem(ii).
(11.12) Proposition: If A has just one eigenvalue λ of absolute value ρ(A), and λ is nondefective, then, for almost …
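A minimal power-method sketch (the matrix and starting vector are assumed examples, not from the text): for a matrix with a single dominant, nondefective eigenvalue, the scaled iterates x, Ax, A²x, … line up with a dominant eigenvector for almost every start.

```python
# Power method sketch: iterate x -> Ax / ||Ax||, then estimate the
# dominant eigenvalue from the final iterate.

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def power_method(A, x, steps=100):
    """Return an approximate dominant eigenvalue and a scaled eigenvector."""
    for _ in range(steps):
        y = matvec(A, x)
        norm = max(abs(c) for c in y)   # scale by the max-entry norm
        x = [c / norm for c in y]
    y = matvec(A, x)
    lam = sum(a * b for a, b in zip(y, x)) / sum(c * c for c in x)
    return lam, x

# A = [[2, 1], [1, 2]] has eigenvalues 3 and 1; the method finds 3.
lam, v = power_method([[2, 1], [1, 2]], [1.0, 0.0])
print(round(lam, 6))   # → 3.0
```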
The Schur form
hence having A^cA = AA^c is a necessary condition for A to be unitarily similar to a diagonal matrix. Remarkably, this condition is sufficient as well. Note that this condition can be directly tested by computing the two products and comparing …
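The direct test just described, forming both products and comparing entries, can be sketched as follows (a hypothetical illustration with real matrices, so A^c is just the transpose):

```python
# Normality test A^c A = A A^c by forming both products and comparing.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_normal(A, tol=1e-12):
    """True iff A^c A and A A^c agree entrywise up to tol."""
    Ac = transpose(A)
    P, Q = matmul(Ac, A), matmul(A, Ac)
    return all(abs(P[i][j] - Q[i][j]) <= tol
               for i in range(len(P)) for j in range(len(P)))

print(is_normal([[0, -1], [1, 0]]))   # a rotation is normal: True
print(is_normal([[1, 1], [0, 1]]))    # a Jordan block is not: False
```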
The primary decomposition
(12.8) Lemma: Let p be a product of elements of Q_A,

    p =: ∏_{q∈Q} q^{d_q},

say, with d_q ∈ N and Q a finite subset of Q_A. Then,

(12.9)    X_p := null p(A) = ⊕_{q∈Q} null q(A)^{d_q},

i.e., X_p = null p(A) is the direct sum of the spaces Y_q …
The Jordan form
The Jordan form is the result of the search for the simplest matrix representation for A ∈ L(X) for some n-dimensional vector space X. It starts off from the following observation.
Suppose X is the direct sum

(12.14)    X = …
(12.17) Proposition: Let Â =: diag(J(λ_Y, dim Y) : Y ∈ Y) be a Jordan canonical form for A ∈ L(X). Then
(i) spec(A) = {Â(j, j) : j = 1:n} = ∪_{Y∈Y} spec(Â_Y).
(ii) For each λ ∈ spec(A) and each q,

(12.18)    n_{λ,q} := dim null(A − λ id)^q = Σ_{Y∈Y: λ_Y=λ} min(q, dim Y).
The trace of a linear map
(13.4) Proposition: Any diagonally dominant matrix is invertible.
In particular, the first of the three matrices in (13.3) we now know to be invertible. As it turns out, the other two are also invertible; thus, diagonal dominance …
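The dominance test behind (13.4) is easy to state in code. A sketch, assuming strict row-wise dominance (each diagonal entry exceeding the absolute row sum of the off-diagonal entries); the example matrices are assumed, not the ones in (13.3):

```python
# Strict (row-wise) diagonal dominance test: any such matrix is invertible.

def is_diagonally_dominant(A):
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(len(A)) if j != i)
               for i in range(len(A)))

print(is_diagonally_dominant([[4, 1, 2],
                              [1, 5, 3],
                              [0, 2, 3]]))   # True: 4 > 3, 5 > 4, 3 > 2
print(is_diagonally_dominant([[1, 2],
                              [0, 1]]))      # False: 1 < 2 in the first row
```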
Every complex (square) matrix is similar to an upper triangular matrix
A more fundamental reason is that, once we have an upper triangular matrix similar to A, then we know the entire spectrum of A since, by (10.7) Proposition, the spectrum of a triangular …
It is enough to understand the eigenstructure of matrices
Here, for the record, is a formal account of what we have proved.
(10.22) Proposition: For every A ∈ L(X) with dim X < ∞ and every x ∈ X\0, there is a unique monic polynomial p of smallest degree for which p(A)x = 0.
SVD
Let A = VW^c be a minimal factorization for the m×n-matrix A of rank r. Then A^c = WV^c is a minimal factorization for A^c. By (8.2), this implies that V is a basis for ran A and W is a basis for ran A^c. Can we choose both these bases to be o.n.?
The effective rank of a noisy matrix
hence to

(8.13)    WW^c? = WΣ^{-1}V^c b.

Since W is also o.n., WW^c = P_W is an o.n. projector, hence, by (6.15) Proposition, strictly reduces norms unless it is applied to something in its range. Since the right-hand side …
Equivalence and similarity
and for it, MATLAB correctly returns id3 as its rref. However, the singular values of Ac, as returned by svd, are

    3.2340, 0.5645, 0.000054,

indicating that there is a rank-2 matrix B with ‖Ac − B‖₂ < 0.000055. Since entries …
Complementary mathematical concepts
9. Duality
This short chapter can be skipped without loss of continuity. Much of it can serve as a review of what
has been covered so far. It owes much to the intriguing book [citGL].
The dual of a vector space
is 1-1: Indeed, if Σ_i a(i)λ_i is the zero functional, then, in particular, Σ_i a(i)λ_i v_j = 0 for all columns v_j of V. This implies that 0 = (Σ_i a(i)λ_i v_j : j = 1:n) = a^t(Λ^t V) = a^t id_n = a^t, hence a = 0. It follows …
The dual of a linear map
If now n := dim Y < ∞, then, by (9.1) Proposition, dim Y′ = dim Y = n, hence, by the Dimension Formula, y ↦ y^c must also be onto. This proves
(9.6) Proposition: If Y is a finite-dimensional inner product space, then every λ ∈ Y′ can be written …
Eigenvalues and eigenvectors
vertex i to vertex j is the sum of the probabilities that we would have gone from i to some k in the first step and thence to j in the second step, i.e., the number

    Σ_k M(i, k)M(k, j) = M²(i, j).

More generally, the probability that …
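The two-step probabilities above are just the entries of M². A quick check with an assumed 2-state transition matrix:

```python
# Squaring a transition matrix: M^2(i,j) = sum_k M(i,k) M(k,j) is the
# probability of going from i to j in two steps.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

M = [[0.9, 0.1],
     [0.5, 0.5]]

M2 = matmul(M, M)
# entry (0,1): go 0->0->1 or 0->1->1, i.e., .9*.1 + .1*.5 = 0.14
print(M2[0][1])
# each row of M^2 still sums to 1, as it must for a transition matrix
print(all(abs(sum(row) - 1) < 1e-12 for row in M2))
```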
Diagona(liza)bility
To be sure, if A is not 1-1, then at least one of the λ_j must be zero, but this doesn't change the fact that M is a diagonal matrix.
The matrix A := [2 1; 1 2] maps the 2-vector x := (1, 1) to 3x and the 2-vector y := (1, −1) to itself. Hence, A[x, y] = …
Does every square matrix have an eigenvalue?
hence, by (10.11) Proposition, such A is not diagonable.
This has motivated the following
Definition: The scalar λ is a defective eigenvalue of A if

    null(A − λ id) ∩ ran(A − λ id) ≠ {0}.

Any such λ certainly is an eigenvalue …
Polynomials in a linear map
(10.18) Example: Let's try this out on our earlier example, the rotation matrix

    A := [e2, −e1].

Choosing x = e1, we have

    [x, Ax, A²x] = [e1, e2, −e1],

hence the first free column is A²x = −e1, and, by inspection, x + A²x = 0 …
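The computation in this example (with the minus signs restored) can be checked mechanically: for the rotation matrix A = [e2, −e1], the Krylov sequence x, Ax, A²x with x = e1 gives A²x = −x, so p(A)x = 0 for p(t) = 1 + t².

```python
# Krylov-sequence check for the 90-degree rotation A = [e2, -e1].

def matvec(A_cols, x):
    """A is stored by columns, matching the notation A = [e2, -e1]."""
    n = len(x)
    return [sum(A_cols[j][i] * x[j] for j in range(n)) for i in range(n)]

e1, e2 = [1, 0], [0, 1]
A = [e2, [-1, 0]]          # columns: A e1 = e2, A e2 = -e1

x = e1
Ax = matvec(A, x)
AAx = matvec(A, Ax)
print(Ax, AAx)                               # [0, 1] [-1, 0]
print([x[i] + AAx[i] for i in range(2)])     # x + A^2 x = [0, 0]
```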
Determinants
i.e., addition of a scalar multiple of one argument to a different argument does not change the determinant.
In particular, if A = [a1, a2, ..., an] is not invertible, then det A = 0, since then there must be some column aj of A writable …
The multiplicities of an eigenvalue
and this, we claim, is necessarily the zero map, for the following reason: The factor (A − λ_j id) is upper triangular, with the jth diagonal entry equal to zero. This implies that, for each i, (A − λ_j id) maps T_i := ran[ …
Rayleigh quotient
The MMM theorem has various useful (and immediate) corollaries.
(15.4) Interlacing Theorem: If the matrix B is obtained from the hermitian matrix A by crossing out the kth row and column (i.e., B = A(I, I) with I := (1:k−1, k+1:n) …
Definition and basic properties
16. More on determinants
In this chapter only, n-vectors will be denoted by lower-case boldface roman letters; for example, a = (a1, ..., an) ∈ IF^n.
Determinants are often brought into courses such as this quite unnecessarily …
(b) count the number of pairs that are out of order; its parity is the parity of σ.
Here is a simple example: σ = (3, 1, 4, 2) has the pairs (3, 1), (3, 2), and (4, 2) out of order, hence (−1)^σ = −1. Equivalently, the following …
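The inversion-counting recipe in (b) is a one-liner in code (a sketch; the permutation is given as a tuple of values):

```python
# Sign of a permutation via inversion counting: the parity of the number of
# out-of-order pairs is the parity of the permutation.

def sign(perm):
    """(-1)**(number of inversions) of a permutation given as a tuple."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

# the example from the text: (3,1,4,2) has 3 pairs out of order, so it is odd
print(sign((3, 1, 4, 2)))   # → -1
print(sign((1, 2, 3, 4)))   # the identity is even → 1
```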
Sylvester
For n > 3, this is a definition, while, for n ≤ 3, one works it out (see below). This is a very useful geometric way of thinking about determinants. Also, it has made determinants indispensable in the definition of multivariate integration and the …
Sylvester's determinant identity. If

    S(i, j) := det A(k, i | k, j) / det A(k),

then, for all sequences i, j,

    det S(i|j) = det A(k, i | k, j) / det A(k).
Cauchy-Binet
Cauchy-Binet formula.

    det (BA)(i|j) = Σ_{h: #h = #i} det B(i|h) det A(h|j).

Even the special case #i = …
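The Cauchy-Binet formula can be verified numerically for a small assumed example: with B a 2×3 and A a 3×2 integer matrix and i = j = (0, 1), the sum runs over the three 2-element column/row selections h.

```python
# Numerical check of Cauchy-Binet: det((BA)(i|j)) equals the sum over h with
# #h = #i of det(B(i|h)) * det(A(h|j)). Integer arithmetic, so the check is exact.
from itertools import combinations

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

def submatrix(M, rows, cols):
    return [[M[i][j] for j in cols] for i in rows]

B = [[1, 2, 3],
     [4, 5, 6]]
A = [[7, 8],
     [9, 1],
     [2, 3]]

lhs = det2(matmul(B, A))
rhs = sum(det2(submatrix(B, (0, 1), h)) * det2(submatrix(A, h, (0, 1)))
          for h in combinations(range(3), 2))
print(lhs == rhs)   # → True
```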