LINEAR ALGEBRA
W W L CHEN

© W W L Chen, 1997, 2005. This chapter is available free to all individuals, on the understanding that it is not to be used for financial gain,
and may be downloaded and/or photocopied, with or without permission from the author.
However, this document may not be kept on any information storage and retrieval system without permission
from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 10
ORTHOGONAL MATRICES

10.1. Introduction
Definition. A square matrix A with real entries and satisfying the condition A^{-1} = A^t is called an
orthogonal matrix.
Example 10.1.1. Consider the euclidean space R^2 with the euclidean inner product. The vectors
u1 = (1, 0) and u2 = (0, 1) form an orthonormal basis B = {u1, u2}. Let us now rotate u1 and u2
anticlockwise by an angle θ to obtain v1 = (cos θ, sin θ) and v2 = (− sin θ, cos θ). Then C = {v1, v2} is
also an orthonormal basis.

[Figure: the orthonormal basis vectors u1 and u2, and their images v1 and v2 under an anticlockwise rotation by the angle θ.]

The transition matrix from the basis C to the basis B is given by
    P = ( [v1]_B  [v2]_B ) = ( cos θ  − sin θ )
                             ( sin θ    cos θ ).

Clearly

    P^{-1} = P^t = (  cos θ   sin θ )
                   ( − sin θ  cos θ ).

In fact, our example is a special case of the following general result.
PROPOSITION 10A. Suppose that B = {u1 , . . . , un } and C = {v1 , . . . , vn } are two orthonormal
bases of a real inner product space V . Then the transition matrix P from the basis C to the basis B is
an orthogonal matrix.
Example 10.1.2. The matrix

        ( 1/3  −2/3   2/3 )
    A = ( 2/3  −1/3  −2/3 )
        ( 2/3   2/3   1/3 )

is orthogonal, since

            (  1/3   2/3  2/3 ) ( 1/3  −2/3   2/3 )   ( 1  0  0 )
    A^t A = ( −2/3  −1/3  2/3 ) ( 2/3  −1/3  −2/3 ) = ( 0  1  0 ).
            (  2/3  −2/3  1/3 ) ( 2/3   2/3   1/3 )   ( 0  0  1 )

Note also that the row vectors of A, namely (1/3, −2/3, 2/3), (2/3, −1/3, −2/3) and (2/3, 2/3, 1/3), are
orthonormal. So are the column vectors of A.
In fact, our last observation is not a coincidence.
PROPOSITION 10B. Suppose that A is an n × n matrix with real entries. Then
(a) A is orthogonal if and only if the row vectors of A form an orthonormal basis of R^n under the
euclidean inner product; and
(b) A is orthogonal if and only if the column vectors of A form an orthonormal basis of R^n under the
euclidean inner product.
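Proposition 10B is easy to test numerically. The sketch below (assuming NumPy is available; it is not part of the text) checks the matrix of Example 10.1.2 both ways:

```python
import numpy as np

# The matrix of Example 10.1.2.
A = np.array([[1/3, -2/3,  2/3],
              [2/3, -1/3, -2/3],
              [2/3,  2/3,  1/3]])

I = np.eye(3)

# Rows orthonormal  <=>  A A^t = I;  columns orthonormal  <=>  A^t A = I.
rows_orthonormal = np.allclose(A @ A.T, I)
cols_orthonormal = np.allclose(A.T @ A, I)

print(rows_orthonormal, cols_orthonormal)  # True True
```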
Proof. We shall only prove (a), since the proof of (b) is almost identical. Let r1, . . . , rn denote the row
vectors of A. Then

            ( r1 · r1  . . .  r1 · rn )
    A A^t = (    .      .        .    )
            ( rn · r1  . . .  rn · rn )

It follows that A A^t = I if and only if for every i, j = 1, . . . , n, we have

    ri · rj = 1 if i = j,   and   ri · rj = 0 if i ≠ j,

if and only if r1, . . . , rn are orthonormal.
PROPOSITION 10C. Suppose that A is an n × n matrix with real entries. Suppose further that the
inner product in R^n is the euclidean inner product. Then the following are equivalent:
(a) A is orthogonal.
(b) For every x ∈ R^n, we have ‖Ax‖ = ‖x‖.
(c) For every u, v ∈ R^n, we have Au · Av = u · v.
Proof. ((a)⇒(b)) Suppose that A is orthogonal, so that A^t A = I. It follows that for every x ∈ R^n, we
have

    ‖Ax‖^2 = Ax · Ax = x^t A^t A x = x^t I x = x^t x = x · x = ‖x‖^2.

((b)⇒(c)) Suppose that ‖Ax‖ = ‖x‖ for every x ∈ R^n. Then for every u, v ∈ R^n, we have

    Au · Av = (1/4)‖Au + Av‖^2 − (1/4)‖Au − Av‖^2 = (1/4)‖A(u + v)‖^2 − (1/4)‖A(u − v)‖^2
            = (1/4)‖u + v‖^2 − (1/4)‖u − v‖^2 = u · v.
((c)⇒(a)) Suppose that Au · Av = u · v for every u, v ∈ R^n. Then

    Iu · v = u · v = Au · Av = v^t A^t A u = A^t A u · v,

so that

    (A^t A − I)u · v = 0.

In particular, this holds when v = (A^t A − I)u, so that

    (A^t A − I)u · (A^t A − I)u = 0,

whence

    (A^t A − I)u = 0,                                                    (1)

in view of Proposition 9A(d). But then (1) is a system of n homogeneous linear equations in n unknowns
satisfied by every u ∈ R^n. Hence the coefficient matrix A^t A − I must be the zero matrix, and so A^t A = I.

Proof of Proposition 10A. For every u ∈ V, we can write
    u = β1 u1 + . . . + βn un = γ1 v1 + . . . + γn vn,

where β1, . . . , βn, γ1, . . . , γn ∈ R, and where B = {u1, . . . , un} and C = {v1, . . . , vn} are two orthonormal bases of V. Then

    ‖u‖^2 = ⟨u, u⟩ = ⟨β1 u1 + . . . + βn un, β1 u1 + . . . + βn un⟩
          = Σ_{i=1}^{n} Σ_{j=1}^{n} βi βj ⟨ui, uj⟩ = Σ_{i=1}^{n} βi^2 = (β1, . . . , βn) · (β1, . . . , βn).

Similarly,

    ‖u‖^2 = ⟨u, u⟩ = ⟨γ1 v1 + . . . + γn vn, γ1 v1 + . . . + γn vn⟩
          = Σ_{i=1}^{n} Σ_{j=1}^{n} γi γj ⟨vi, vj⟩ = Σ_{i=1}^{n} γi^2 = (γ1, . . . , γn) · (γ1, . . . , γn).

It follows that in R^n with the euclidean norm, we have ‖[u]_B‖ = ‖[u]_C‖, and so ‖P[u]_C‖ = ‖[u]_C‖ for
every u ∈ V. Hence ‖Px‖ = ‖x‖ holds for every x ∈ R^n. It now follows from Proposition 10C that P
is orthogonal.
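Proposition 10A can be illustrated numerically with the two bases of Example 10.1.1. In the sketch below (assuming NumPy; the angle 0.7 is an arbitrary choice), the transition matrix from C to B has the coordinate vectors [vj]_B as its columns, and it comes out orthogonal:

```python
import numpy as np

theta = 0.7  # an arbitrary angle, chosen for illustration

# Orthonormal bases of R^2: B is the standard basis, C is B rotated by theta
# (Example 10.1.1).  The transition matrix from C to B has columns [vj]_B,
# which here are just v1 and v2 themselves.
v1 = np.array([np.cos(theta), np.sin(theta)])
v2 = np.array([-np.sin(theta), np.cos(theta)])
P = np.column_stack([v1, v2])

# Proposition 10A: P is orthogonal, i.e. P^{-1} = P^t.
print(np.allclose(P.T @ P, np.eye(2)))        # True
print(np.allclose(np.linalg.inv(P), P.T))     # True
```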
10.2. Eigenvalues and Eigenvectors

In this section, we give a brief review of eigenvalues and eigenvectors, first discussed in Chapter 7.
Suppose that

        ( a11  . . .  a1n )
    A = (  .           .  )
        ( an1  . . .  ann )

is an n × n matrix with real entries. Suppose further that there exist a number λ ∈ R and a nonzero
vector v ∈ R^n such that Av = λv. Then we say that λ is an eigenvalue of the matrix A, and that v is
an eigenvector corresponding to the eigenvalue λ. In this case, we have Av = λv = λIv, where I is the
n × n identity matrix, so that (A − λI)v = 0. Since v ∈ R^n is nonzero, it follows that we must have

    det(A − λI) = 0.                                                     (2)
In other words, we must have

        ( a11 − λ    a12     . . .    a1n   )
    det (   a21    a22 − λ   . . .    a2n   ) = 0.
        (    .        .       . .      .    )
        (   an1      an2     . . .  ann − λ )

Note that (2) is a polynomial equation. The polynomial det(A − λI) is called the characteristic polynomial
of the matrix A. Solving this equation (2) gives the eigenvalues of the matrix A.
On the other hand, for any eigenvalue λ of the matrix A, the set

    {v ∈ R^n : (A − λI)v = 0}                                            (3)

is the nullspace of the matrix A − λI, and forms a subspace of R^n. This space (3) is called the eigenspace
corresponding to the eigenvalue λ.
Suppose now that A has eigenvalues λ1, . . . , λn ∈ R, not necessarily distinct, with corresponding
eigenvectors v1, . . . , vn ∈ R^n, and that v1, . . . , vn are linearly independent. Then it can be shown that

    P^{-1} A P = D,

where

                                      ( λ1        )
    P = ( v1  . . .  vn )   and   D = (    . . .  ).
                                      (        λn )

In fact, we say that A is diagonalizable if there exists an invertible matrix P with real entries such
that P^{-1} A P is a diagonal matrix with real entries. It follows that A is diagonalizable if its eigenvectors
form a basis of R^n. In the opposite direction, one can show that if A is diagonalizable, then it has n
linearly independent eigenvectors in R^n. It therefore follows that the question of diagonalizing a matrix
A with real entries is reduced to one of linear independence of its eigenvectors.
We now summarize our discussion so far.
DIAGONALIZATION PROCESS. Suppose that A is an n × n matrix with real entries.
(1) Determine whether the n roots of the characteristic polynomial det(A − λI) are real.
(2) If not, then A is not diagonalizable. If so, then find the eigenvectors corresponding to these eigenvalues. Determine whether we can find n linearly independent eigenvectors.
(3) If not, then A is not diagonalizable. If so, then write

                                      ( λ1        )
    P = ( v1  . . .  vn )   and   D = (    . . .  ),
                                      (        λn )

where λ1, . . . , λn ∈ R are the eigenvalues of A and where v1, . . . , vn ∈ R^n are respectively their
corresponding eigenvectors. Then P^{-1} A P = D.
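The three steps above can be mirrored numerically. The following sketch (assuming NumPy; the 2 × 2 matrix is an illustrative choice, not taken from the text) lets np.linalg.eig supply the eigenvalues and eigenvectors:

```python
import numpy as np

# An illustrative 2 x 2 matrix with real, distinct eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Step (1): the roots of the characteristic polynomial det(A - lambda I).
eigenvalues, P = np.linalg.eig(A)      # columns of P are eigenvectors
assert np.all(np.isreal(eigenvalues))  # all roots are real here

# Step (2): the eigenvectors are linearly independent iff P is invertible.
assert abs(np.linalg.det(P)) > 1e-12

# Step (3): with P = (v1 ... vn), we get P^{-1} A P = D.
D = np.linalg.inv(P) @ A @ P
print(np.allclose(D, np.diag(eigenvalues)))  # True
```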
In particular, it can be shown that if A has distinct eigenvalues λ1, . . . , λn ∈ R, with corresponding
eigenvectors v1, . . . , vn ∈ R^n, then v1, . . . , vn are linearly independent. It follows that all such matrices
A are diagonalizable.

10.3. Orthonormal Diagonalization

We now consider the euclidean space R^n as an inner product space with the euclidean inner product.
Given any n × n matrix A with real entries, we wish to find out whether there exists an orthonormal
basis of R^n consisting of eigenvectors of A.
Recall that in the Diagonalization process discussed in the last section, the columns of the matrix P
are eigenvectors of A, and these vectors form a basis of R^n. It follows from Proposition 10B that this
basis is orthonormal if and only if the matrix P is orthogonal.

Definition. An n × n matrix A with real entries is said to be orthogonally diagonalizable if there exists
an orthogonal matrix P with real entries such that P^{-1} A P = P^t A P is a diagonal matrix with real
entries.
First of all, we would like to determine which matrices are orthogonally diagonalizable. For those that
are, we then need to discuss how we may find an orthogonal matrix P to carry out the diagonalization.
To study the first question, we have the following result which gives a restriction on those matrices
that are orthogonally diagonalizable.
PROPOSITION 10D. Suppose that A is an orthogonally diagonalizable matrix with real entries. Then
A is symmetric.
Proof. Suppose that A is orthogonally diagonalizable. Then there exists an orthogonal matrix P and
a diagonal matrix D, both with real entries and such that P^t A P = D. Since P P^t = P^t P = I and
D^t = D, we have

    A = P D P^t = P D^t P^t,

so that

    A^t = (P D^t P^t)^t = (P^t)^t (D^t)^t P^t = P D P^t = A,

whence A is symmetric.
Our first question is in fact answered by the following result which we state without proof.
PROPOSITION 10E. Suppose that A is an n × n matrix with real entries. Then it is orthogonally
diagonalizable if and only if it is symmetric.
The remainder of this section is devoted to finding a way to orthogonally diagonalize a symmetric
matrix with real entries. We begin by stating without proof the following result. The proof requires
results from the theory of complex vector spaces.
PROPOSITION 10F. Suppose that A is a symmetric matrix with real entries. Then all the eigenvalues
of A are real.
Our idea here is to follow the Diagonalization process discussed in the last section, knowing that since
A is diagonalizable, we shall find a basis of R^n consisting of eigenvectors of A. We may then wish to
orthogonalize this basis by the Gram-Schmidt process. This last step is considerably simplified in view
of the following result.
PROPOSITION 10G. Suppose that u1 and u2 are eigenvectors of a symmetric matrix A with real
entries, corresponding to distinct eigenvalues λ1 and λ2 respectively. Then u1 · u2 = 0. In other words,
eigenvectors of a symmetric real matrix corresponding to distinct eigenvalues are orthogonal.
Proof. Note that if we write u1 and u2 as column matrices, then since A is symmetric, we have

    Au1 · u2 = u2^t A u1 = u2^t A^t u1 = (Au2)^t u1 = u1 · Au2.

It follows that

    λ1 u1 · u2 = Au1 · u2 = u1 · Au2 = u1 · λ2 u2,

so that (λ1 − λ2)(u1 · u2) = 0. Since λ1 ≠ λ2, we must have u1 · u2 = 0.
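Proposition 10G can be checked by hand on a small symmetric matrix. The sketch below (the 2 × 2 matrix is an illustrative choice, not from the text) uses plain Python, verifying the eigenvector equations entrywise before taking the dot product:

```python
# A quick check of Proposition 10G on the symmetric matrix A = [[2, 1], [1, 2]],
# which has eigenvalues 3 and 1 with eigenvectors (1, 1) and (1, -1).
A = [[2, 1], [1, 2]]
u1, lam1 = (1, 1), 3    # A u1 = (3, 3) = 3 u1
u2, lam2 = (1, -1), 1   # A u2 = (1, -1) = 1 u2

# Verify the eigenvector equations entrywise.
for u, lam in ((u1, lam1), (u2, lam2)):
    Au = (A[0][0]*u[0] + A[0][1]*u[1], A[1][0]*u[0] + A[1][1]*u[1])
    assert Au == (lam*u[0], lam*u[1])

# Distinct eigenvalues, so the eigenvectors must be orthogonal.
print(u1[0]*u2[0] + u1[1]*u2[1])  # 0
```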
We can now follow the procedure below.
ORTHOGONAL DIAGONALIZATION PROCESS. Suppose that A is a symmetric n × n matrix
with real entries.
(1) Determine the n real roots λ1, . . . , λn of the characteristic polynomial det(A − λI), and find n linearly
independent eigenvectors u1, . . . , un of A corresponding to these eigenvalues as in the Diagonalization process.
(2) Apply the Gram-Schmidt orthogonalization process to the eigenvectors u1, . . . , un to obtain orthogonal eigenvectors v1, . . . , vn of A, noting that eigenvectors corresponding to distinct eigenvalues are
already orthogonal.
(3) Normalize the orthogonal eigenvectors v1, . . . , vn to obtain orthonormal eigenvectors w1, . . . , wn of
A. These form an orthonormal basis of R^n. Furthermore, write

                                      ( λ1        )
    P = ( w1  . . .  wn )   and   D = (    . . .  ),
                                      (        λn )

where λ1, . . . , λn ∈ R are the eigenvalues of A and where w1, . . . , wn ∈ R^n are respectively their
orthogonalized and normalized eigenvectors. Then P^t A P = D.
Remark. Note that if we apply the Gram-Schmidt orthogonalization process to eigenvectors corresponding to the same eigenvalue, then the new vectors that result from this process are also eigenvectors
corresponding to this eigenvalue. Why?
Example 10.3.1. Consider the matrix

        ( 2  2  1 )
    A = ( 2  5  2 ).
        ( 1  2  2 )

To find the eigenvalues of A, we need to find the roots of

        ( 2 − λ    2      1   )
    det (   2    5 − λ    2   ) = 0;
        (   1      2    2 − λ )

in other words, (λ − 7)(λ − 1)^2 = 0. The eigenvalues are therefore λ1 = 7 and (double root) λ2 = λ3 = 1.
An eigenvector corresponding to λ1 = 7 is a solution of the system

                ( −5   2   1 )                          ( 1 )
    (A − 7I)u = (  2  −2   2 ) u = 0,   with root  u1 = ( 2 ).
                (  1   2  −5 )                          ( 1 )

Eigenvectors corresponding to λ2 = λ3 = 1 are solutions of the system

               ( 1  2  1 )
    (A − I)u = ( 2  4  2 ) u = 0,
               ( 1  2  1 )

with roots

         (  1 )              (  2 )
    u2 = (  0 )   and   u3 = ( −1 )
         ( −1 )              (  0 )

which are linearly independent. Next, we apply the Gram-Schmidt orthogonalization process to u2 and
u3, and obtain

         (  1 )              (  1 )
    v2 = (  0 )   and   v3 = ( −1 )
         ( −1 )              (  1 )

which are now orthogonal to each other. Note that we do not have to do anything to u1 at this stage,
in view of Proposition 10G. We now conclude that

         ( 1 )        (  1 )        (  1 )
    v1 = ( 2 ),  v2 = (  0 ),  v3 = ( −1 )
         ( 1 )        ( −1 )        (  1 )

form an orthogonal basis of R^3. Normalizing each of these, we obtain respectively

         ( 1/√6 )        (  1/√2 )        (  1/√3 )
    w1 = ( 2/√6 ),  w2 = (   0   ),  w3 = ( −1/√3 ).
         ( 1/√6 )        ( −1/√2 )        (  1/√3 )

We now take

                         ( 1/√6   1/√2   1/√3 )
    P = ( w1  w2  w3 ) = ( 2/√6    0    −1/√3 ).
                         ( 1/√6  −1/√2   1/√3 )

Then

                   ( 1/√6   2/√6   1/√6 )                      ( 7  0  0 )
    P^{-1} = P^t = ( 1/√2    0    −1/√2 )   and   P^t A P =    ( 0  1  0 ).
                   ( 1/√3  −1/√3   1/√3 )                      ( 0  0  1 )
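The computation in Example 10.3.1 can be confirmed numerically. The sketch below (assuming NumPy) rebuilds P from the normalized eigenvectors and checks both its orthogonality and the diagonalization:

```python
import numpy as np

# The matrix and the orthogonal P found in Example 10.3.1.
A = np.array([[2.0, 2.0, 1.0],
              [2.0, 5.0, 2.0],
              [1.0, 2.0, 2.0]])
P = np.column_stack([
    np.array([1,  2, 1]) / np.sqrt(6),   # w1, eigenvalue 7
    np.array([1,  0, -1]) / np.sqrt(2),  # w2, eigenvalue 1
    np.array([1, -1, 1]) / np.sqrt(3),   # w3, eigenvalue 1
])

# P is orthogonal, and P^t A P is the expected diagonal matrix.
assert np.allclose(P.T @ P, np.eye(3))
print(np.allclose(P.T @ A @ P, np.diag([7.0, 1.0, 1.0])))  # True
```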
Example 10.3.2. Consider the matrix

        ( −1    6   −12 )
    A = (  0  −13    30 ).
        (  0   −9    20 )

To find the eigenvalues of A, we need to find the roots of

        ( −1 − λ     6      −12   )
    det (    0    −13 − λ    30   ) = 0;
        (    0      −9     20 − λ )

in other words, (λ + 1)(λ − 2)(λ − 5) = 0. The eigenvalues are therefore λ1 = −1, λ2 = 2 and λ3 = 5.
An eigenvector corresponding to λ1 = −1 is a solution of the system

                ( 0    6   −12 )                          ( 1 )
    (A + I)u =  ( 0  −12    30 ) u = 0,   with root  u1 = ( 0 ).
                ( 0   −9    21 )                          ( 0 )

An eigenvector corresponding to λ2 = 2 is a solution of the system

                 ( −3    6   −12 )                          ( 0 )
    (A − 2I)u =  (  0  −15    30 ) u = 0,   with root  u2 = ( 2 ).
                 (  0   −9    18 )                          ( 1 )

An eigenvector corresponding to λ3 = 5 is a solution of the system

                 ( −6    6   −12 )                          (  1 )
    (A − 5I)u =  (  0  −18    30 ) u = 0,   with root  u3 = ( −5 ).
                 (  0   −9    15 )                          ( −3 )

Note that while u1, u2, u3 correspond to distinct eigenvalues of A, they are not orthogonal. The matrix
A is not symmetric, and so Proposition 10G does not apply in this case.
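The failure of orthogonality here is easy to exhibit numerically. The sketch below (assuming NumPy) confirms that u1, u2, u3 really are eigenvectors and then prints their pairwise dot products:

```python
import numpy as np

# The matrix and eigenvectors found in Example 10.3.2.
A = np.array([[-1,   6, -12],
              [ 0, -13,  30],
              [ 0,  -9,  20]])
u1 = np.array([1,  0,  0])   # eigenvalue -1
u2 = np.array([0,  2,  1])   # eigenvalue  2
u3 = np.array([1, -5, -3])   # eigenvalue  5

# They really are eigenvectors...
assert (A @ u1 == -1 * u1).all()
assert (A @ u2 ==  2 * u2).all()
assert (A @ u3 ==  5 * u3).all()

# ...but the set is not orthogonal: u1 . u3 and u2 . u3 are nonzero
# (u1 . u2 happens to vanish, but nothing guarantees that here).
print(u1 @ u2, u1 @ u3, u2 @ u3)  # 0 1 -13
```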
Example 10.3.3. Consider the matrix

        (  5  −2  0 )
    A = ( −2   6  2 ).
        (  0   2  7 )

To find the eigenvalues of A, we need to find the roots of

        ( 5 − λ   −2      0   )
    det (  −2    6 − λ    2   ) = 0;
        (   0      2    7 − λ )

in other words, (λ − 3)(λ − 6)(λ − 9) = 0. The eigenvalues are therefore λ1 = 3, λ2 = 6 and λ3 = 9. An
eigenvector corresponding to λ1 = 3 is a solution of the system

                 (  2  −2  0 )                          (  2 )
    (A − 3I)u =  ( −2   3  2 ) u = 0,   with root  u1 = (  2 ).
                 (  0   2  4 )                          ( −1 )

An eigenvector corresponding to λ2 = 6 is a solution of the system

                 ( −1  −2  0 )                          (  2 )
    (A − 6I)u =  ( −2   0  2 ) u = 0,   with root  u2 = ( −1 ).
                 (  0   2  1 )                          (  2 )

An eigenvector corresponding to λ3 = 9 is a solution of the system

                 ( −4  −2   0 )                          ( −1 )
    (A − 9I)u =  ( −2  −3   2 ) u = 0,   with root  u3 = (  2 ).
                 (  0   2  −2 )                          (  2 )

Note now that the eigenvalues are distinct, so it follows from Proposition 10G that u1, u2, u3 are orthogonal, so we do not have to apply Step (2) of the Orthogonal diagonalization process. Normalizing
each of these vectors, we obtain respectively

         (  2/3 )        (  2/3 )        ( −1/3 )
    w1 = (  2/3 ),  w2 = ( −1/3 ),  w3 = (  2/3 ).
         ( −1/3 )        (  2/3 )        (  2/3 )

We now take

                         (  2/3   2/3  −1/3 )
    P = ( w1  w2  w3 ) = (  2/3  −1/3   2/3 ).
                         ( −1/3   2/3   2/3 )

Then

                   (  2/3   2/3  −1/3 )                     ( 3  0  0 )
    P^{-1} = P^t = (  2/3  −1/3   2/3 )   and   P^t A P =   ( 0  6  0 ).
                   ( −1/3   2/3   2/3 )                     ( 0  0  9 )

Problems for Chapter 10
1. Prove Proposition 10B(b).
2. Let

        ( a+b  b−a )
    A = ( a−b  b+a ),

where a, b ∈ R. Determine when A is orthogonal.

3. Suppose that A is an orthogonal matrix with real entries. Prove that
a) A^{-1} is an orthogonal matrix; and
b) det A = ±1.
4. Suppose that A and B are orthogonal matrices with real entries. Prove that AB is orthogonal.
5. Verify that for every a ∈ R, the matrix

                          (   1       2a     2a^2 )
    A = 1/(1 + 2a^2)  ·   (  −2a   1 − 2a^2   2a  )
                          ( 2a^2     −2a       1  )

is orthogonal.
6. Suppose that λ is an eigenvalue of an orthogonal matrix A with real entries. Prove that 1/λ is also
an eigenvalue of A.
7. Suppose that

        ( a  b )
    A = ( c  d )

is an orthogonal matrix with real entries. Explain why a^2 + b^2 = c^2 + d^2 = 1 and ac + bd = 0, and
quote clearly any result that you use. Deduce that A has one of the two possible forms

        ( cos θ  − sin θ )              (   cos θ   − sin θ )
    A = ( sin θ    cos θ )    or    A = ( − sin θ   − cos θ ),

where θ ∈ [0, 2π).
8. Consider the matrix

        (   1   −√6  √3 )
    A = ( −√6    2   √2 ).
        (  √3   √2    3 )

a) Find the characteristic polynomial of A and show that A has eigenvalues 4 (twice) and −2.
b) Find an eigenvector of A corresponding to the eigenvalue −2.
c) Find two orthogonal eigenvectors of A corresponding to the eigenvalue 4.
d) Find an orthonormal basis of R^3 consisting of eigenvectors of A.
e) Using the orthonormal basis in part (d), find a matrix P such that P^t A P is a diagonal matrix.

9. Apply the Orthogonal diagonalization process to each of the following matrices:

           ( 5   0   6 )            ( 0  2  0 )
    a) A = ( 0  11   6 )     b) A = ( 2  0  1 )
           ( 6   6  −2 )            ( 0  1  0 )

           (  1  −4   2 )            (  2  0  36 )
    c) A = ( −4   1  −2 )     d) A = (  0  3   0 )
           (  2  −2  −2 )            ( 36  0  23 )

           ( 1  1  0  0 )            ( −7  24   0   0 )
    e) A = ( 1  1  0  0 )     f) A = ( 24   7   0   0 )
           ( 0  0  0  0 )            (  0   0  −7  24 )
           ( 0  0  0  0 )            (  0   0  24   7 )

10. Suppose that B is an m × n matrix with real entries. Prove that the matrix A = B^t B has an
orthonormal set of n eigenvectors.