LINEAR ALGEBRA
W W L CHEN
© W W L Chen, 1982, 2005. This chapter originates from material used by the author at Imperial College, University of London, between 1981 and 1990.
It is available free to all individuals, on the understanding that it is not to be used for ﬁnancial gain,
and may be downloaded and/or photocopied, with or without permission from the author.
However, this document may not be kept on any information storage and retrieval system without permission
from the author, unless such system is not accessible to any individuals other than its owners.

Chapter 2
MATRICES

2.1. Introduction

A rectangular array of numbers of the form
\[ \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \tag{1} \]
is called an m × n matrix, with m rows and n columns. We count rows from the top and columns from the left. Hence
\[ ( a_{i1} \ \dots \ a_{in} ) \quad\text{and}\quad \begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix} \]
represent respectively the ith row and the jth column of the matrix (1), and a_{ij} represents the entry in the matrix (1) on the ith row and jth column.

Example 2.1.1. Consider the 3 × 4 matrix
\[ \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix}. \]
Here
\[ ( 3 \ 1 \ 5 \ 2 ) \quad\text{and}\quad \begin{pmatrix} 3 \\ 5 \\ 7 \end{pmatrix} \]
represent respectively the 2nd row and the 3rd column of the matrix, and 5 represents the entry in the matrix on the 2nd row and 3rd column.
We now consider the question of arithmetic involving matrices. First of all, let us study the problem
of addition. A reasonable theory can be derived from the following deﬁnition.
Definition. Suppose that the two matrices
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{11} & \dots & b_{1n} \\ \vdots & & \vdots \\ b_{m1} & \dots & b_{mn} \end{pmatrix} \]
both have m rows and n columns. Then we write
\[ A + B = \begin{pmatrix} a_{11}+b_{11} & \dots & a_{1n}+b_{1n} \\ \vdots & & \vdots \\ a_{m1}+b_{m1} & \dots & a_{mn}+b_{mn} \end{pmatrix} \]
and call this the sum of the two matrices A and B.
Example 2.1.2. Suppose that
\[ A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 2 & -2 & 7 \\ 0 & 2 & 4 & -1 \\ -2 & 1 & 3 & 3 \end{pmatrix}. \]
Then
\[ A + B = \begin{pmatrix} 2+1 & 4+2 & 3-2 & -1+7 \\ 3+0 & 1+2 & 5+4 & 2-1 \\ -1-2 & 0+1 & 7+3 & 6+3 \end{pmatrix} = \begin{pmatrix} 3 & 6 & 1 & 6 \\ 3 & 3 & 9 & 1 \\ -3 & 1 & 10 & 9 \end{pmatrix}. \]

Example 2.1.3. We do not have a definition for “adding” the matrices
\[ \begin{pmatrix} 2 & 4 & 3 & -1 \\ -1 & 0 & 7 & 6 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 2 & 4 & 3 \\ 3 & 1 & 5 \\ -1 & 0 & 7 \end{pmatrix}. \]

PROPOSITION 2A. (MATRIX ADDITION) Suppose that A, B, C are m × n matrices. Suppose further that O represents the m × n matrix with all entries zero. Then
(a) A + B = B + A;
(b) A + (B + C) = (A + B) + C;
(c) A + O = A; and
(d) there is an m × n matrix −A such that A + (−A) = O.

Proof. Parts (a)–(c) are easy consequences of ordinary addition, since matrix addition is simply entrywise addition. For part (d), we can take −A to be the matrix obtained from A by multiplying each entry of A by −1.
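As an illustration (not part of the original text), the entrywise definition of the sum, and properties (a)–(d) of Proposition 2A, can be checked numerically. A minimal NumPy sketch, using the matrices of Example 2.1.2:

```python
import numpy as np

# Matrices from Example 2.1.2.
A = np.array([[2, 4, 3, -1],
              [3, 1, 5, 2],
              [-1, 0, 7, 6]])
B = np.array([[1, 2, -2, 7],
              [0, 2, 4, -1],
              [-2, 1, 3, 3]])

S = A + B                 # entrywise sum
O = np.zeros_like(A)      # the m x n zero matrix
print(S)

# Properties of Proposition 2A:
assert (A + B == B + A).all()       # (a) commutativity
assert (A + O == A).all()           # (c) O is the additive identity
assert (A + (-A) == O).all()        # (d) entrywise negation gives the inverse
```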
The theory of multiplication is rather more complicated, and includes multiplication of a matrix by a
scalar as well as multiplication of two matrices.
We ﬁrst study the simpler case of multiplication by scalars.
Definition. Suppose that the matrix
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \]
has m rows and n columns, and that c ∈ R. Then we write
\[ cA = \begin{pmatrix} ca_{11} & \dots & ca_{1n} \\ \vdots & & \vdots \\ ca_{m1} & \dots & ca_{mn} \end{pmatrix} \]
and call this the product of the matrix A by the scalar c.
Example 2.1.4. Suppose that
\[ A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix}. \]
Then
\[ 2A = \begin{pmatrix} 4 & 8 & 6 & -2 \\ 6 & 2 & 10 & 4 \\ -2 & 0 & 14 & 12 \end{pmatrix}. \]

PROPOSITION 2B. (MULTIPLICATION BY SCALAR) Suppose that A, B are m × n matrices, and that c, d ∈ R. Suppose further that O represents the m × n matrix with all entries zero. Then
(a) c(A + B) = cA + cB;
(b) (c + d)A = cA + dA;
(c) 0A = O; and
(d) c(dA) = (cd)A.

Proof. These are all easy consequences of ordinary multiplication, since multiplication by a scalar c is simply entrywise multiplication by the number c.
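A similar numerical sketch for Proposition 2B, using the matrix of Example 2.1.4; the second matrix and the scalars c, d are arbitrary choices for this illustration:

```python
import numpy as np

A = np.array([[2, 4, 3, -1],
              [3, 1, 5, 2],
              [-1, 0, 7, 6]])
B = np.array([[1, 2, -2, 7],
              [0, 2, 4, -1],
              [-2, 1, 3, 3]])
c, d = 2, -3  # arbitrary scalars

# Properties (a)-(d) of Proposition 2B, checked entrywise:
assert (c * (A + B) == c * A + c * B).all()   # (a)
assert ((c + d) * A == c * A + d * A).all()   # (b)
assert (0 * A == np.zeros_like(A)).all()      # (c)
assert (c * (d * A) == (c * d) * A).all()     # (d)

print(2 * A)  # the matrix 2A of Example 2.1.4
```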
The question of multiplication of two matrices is rather more complicated. To motivate this, let us consider the representation of a system of linear equations
\[ \begin{aligned} a_{11}x_1 + \dots + a_{1n}x_n &= b_1, \\ &\;\vdots \\ a_{m1}x_1 + \dots + a_{mn}x_n &= b_m, \end{aligned} \tag{2} \]
in the form Ax = b, where
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix} \tag{3} \]
represent the coefficients and
\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \tag{4} \]
represents the variables. This can be written in full matrix notation as
\[ \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}. \]
Can you work out the meaning of this representation?
Now let us deﬁne matrix multiplication more formally.
Definition. Suppose that
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{11} & \dots & b_{1p} \\ \vdots & & \vdots \\ b_{n1} & \dots & b_{np} \end{pmatrix} \]
are respectively an m × n matrix and an n × p matrix. Then the matrix product AB is given by the m × p matrix
\[ AB = \begin{pmatrix} q_{11} & \dots & q_{1p} \\ \vdots & & \vdots \\ q_{m1} & \dots & q_{mp} \end{pmatrix}, \]
where for every i = 1, . . . , m and j = 1, . . . , p, we have
\[ q_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj} = a_{i1}b_{1j} + \dots + a_{in}b_{nj}. \]

Remark. Note first of all that the number of columns of the first matrix must be equal to the number of rows of the second matrix. On the other hand, for a simple way to work out q_{ij}, the entry in the ith row and jth column of AB, we observe that the ith row of A and the jth column of B are respectively
\[ ( a_{i1} \ \dots \ a_{in} ) \quad\text{and}\quad \begin{pmatrix} b_{1j} \\ \vdots \\ b_{nj} \end{pmatrix}. \]
We now multiply the corresponding entries – from a_{i1} with b_{1j}, and so on, until a_{in} with b_{nj} – and then add these products to obtain q_{ij}.
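The definition of q_{ij} translates directly into code. A plain-Python sketch (the helper name `matmul` is ours, not the text's), applied to the matrices used in Example 2.1.5 below:

```python
def matmul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B entry by entry,
    following the definition q_ij = a_i1*b_1j + ... + a_in*b_nj."""
    m, n = len(A), len(A[0])
    n2, p = len(B), len(B[0])
    assert n == n2, "columns of A must equal rows of B"
    Q = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            Q[i][j] = sum(A[i][k] * B[k][j] for k in range(n))
    return Q

A = [[2, 4, 3, -1], [3, 1, 5, 2], [-1, 0, 7, 6]]
B = [[1, 4], [2, 3], [0, -2], [3, 1]]
print(matmul(A, B))  # [[7, 13], [11, 7], [17, -12]]
```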
Example 2.1.5. Consider the matrices
\[ A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix}. \]
Note that A is a 3 × 4 matrix and B is a 4 × 2 matrix, so that the product AB is a 3 × 2 matrix. Let us calculate the product
\[ AB = \begin{pmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \\ q_{31} & q_{32} \end{pmatrix}. \]
Consider first of all q_{11}. To calculate this, we need the 1st row of A and the 1st column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} 2 & 4 & 3 & -1 \\ \times & \times & \times & \times \\ \times & \times & \times & \times \end{pmatrix} \begin{pmatrix} 1 & \times \\ 2 & \times \\ 0 & \times \\ 3 & \times \end{pmatrix} = \begin{pmatrix} q_{11} & \times \\ \times & \times \\ \times & \times \end{pmatrix}. \]
From the definition, we have
\[ q_{11} = 2 \cdot 1 + 4 \cdot 2 + 3 \cdot 0 + (-1) \cdot 3 = 2 + 8 + 0 - 3 = 7. \]
Consider next q_{12}. To calculate this, we need the 1st row of A and the 2nd column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} 2 & 4 & 3 & -1 \\ \times & \times & \times & \times \\ \times & \times & \times & \times \end{pmatrix} \begin{pmatrix} \times & 4 \\ \times & 3 \\ \times & -2 \\ \times & 1 \end{pmatrix} = \begin{pmatrix} \times & q_{12} \\ \times & \times \\ \times & \times \end{pmatrix}. \]
From the definition, we have
\[ q_{12} = 2 \cdot 4 + 4 \cdot 3 + 3 \cdot (-2) + (-1) \cdot 1 = 8 + 12 - 6 - 1 = 13. \]
Consider next q_{21}. To calculate this, we need the 2nd row of A and the 1st column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} \times & \times & \times & \times \\ 3 & 1 & 5 & 2 \\ \times & \times & \times & \times \end{pmatrix} \begin{pmatrix} 1 & \times \\ 2 & \times \\ 0 & \times \\ 3 & \times \end{pmatrix} = \begin{pmatrix} \times & \times \\ q_{21} & \times \\ \times & \times \end{pmatrix}. \]
From the definition, we have
\[ q_{21} = 3 \cdot 1 + 1 \cdot 2 + 5 \cdot 0 + 2 \cdot 3 = 3 + 2 + 0 + 6 = 11. \]
Consider next q_{22}. To calculate this, we need the 2nd row of A and the 2nd column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} \times & \times & \times & \times \\ 3 & 1 & 5 & 2 \\ \times & \times & \times & \times \end{pmatrix} \begin{pmatrix} \times & 4 \\ \times & 3 \\ \times & -2 \\ \times & 1 \end{pmatrix} = \begin{pmatrix} \times & \times \\ \times & q_{22} \\ \times & \times \end{pmatrix}. \]
From the definition, we have
\[ q_{22} = 3 \cdot 4 + 1 \cdot 3 + 5 \cdot (-2) + 2 \cdot 1 = 12 + 3 - 10 + 2 = 7. \]
Consider next q_{31}. To calculate this, we need the 3rd row of A and the 1st column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} \times & \times & \times & \times \\ \times & \times & \times & \times \\ -1 & 0 & 7 & 6 \end{pmatrix} \begin{pmatrix} 1 & \times \\ 2 & \times \\ 0 & \times \\ 3 & \times \end{pmatrix} = \begin{pmatrix} \times & \times \\ \times & \times \\ q_{31} & \times \end{pmatrix}. \]
From the definition, we have
\[ q_{31} = (-1) \cdot 1 + 0 \cdot 2 + 7 \cdot 0 + 6 \cdot 3 = -1 + 0 + 0 + 18 = 17. \]
Consider finally q_{32}. To calculate this, we need the 3rd row of A and the 2nd column of B, so let us cover up all unnecessary information, so that
\[ \begin{pmatrix} \times & \times & \times & \times \\ \times & \times & \times & \times \\ -1 & 0 & 7 & 6 \end{pmatrix} \begin{pmatrix} \times & 4 \\ \times & 3 \\ \times & -2 \\ \times & 1 \end{pmatrix} = \begin{pmatrix} \times & \times \\ \times & \times \\ \times & q_{32} \end{pmatrix}. \]
From the definition, we have
\[ q_{32} = (-1) \cdot 4 + 0 \cdot 3 + 7 \cdot (-2) + 6 \cdot 1 = -4 + 0 - 14 + 6 = -12. \]
We therefore conclude that
\[ AB = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix} = \begin{pmatrix} 7 & 13 \\ 11 & 7 \\ 17 & -12 \end{pmatrix}. \]

Example 2.1.6. Consider again the matrices
\[ A = \begin{pmatrix} 2 & 4 & 3 & -1 \\ 3 & 1 & 5 & 2 \\ -1 & 0 & 7 & 6 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} 1 & 4 \\ 2 & 3 \\ 0 & -2 \\ 3 & 1 \end{pmatrix}. \]
Note that B is a 4 × 2 matrix and A is a 3 × 4 matrix, so that we do not have a definition for the “product” BA.
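The same dimension rule is enforced by NumPy: the product AB of Example 2.1.5 is defined, while attempting BA raises an error. A sketch, not part of the original text:

```python
import numpy as np

A = np.array([[2, 4, 3, -1],
              [3, 1, 5, 2],
              [-1, 0, 7, 6]])   # 3 x 4
B = np.array([[1, 4],
              [2, 3],
              [0, -2],
              [3, 1]])          # 4 x 2

print(A @ B)  # the 3 x 2 product of Example 2.1.5
try:
    B @ A     # 4 x 2 times 3 x 4: inner dimensions 2 and 3 do not match
except ValueError as e:
    print("BA is undefined:", e)
```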
We leave the proofs of the following results as exercises for the interested reader.

PROPOSITION 2C. (ASSOCIATIVE LAW) Suppose that A is an m × n matrix, B is an n × p matrix and C is a p × r matrix. Then A(BC) = (AB)C.

PROPOSITION 2D. (DISTRIBUTIVE LAWS)
(a) Suppose that A is an m × n matrix and B and C are n × p matrices. Then A(B + C) = AB + AC.
(b) Suppose that A and B are m × n matrices and C is an n × p matrix. Then (A + B)C = AC + BC.

PROPOSITION 2E. Suppose that A is an m × n matrix, B is an n × p matrix, and that c ∈ R. Then c(AB) = (cA)B = A(cB).

2.2. Systems of Linear Equations
Note that the system (2) of linear equations can be written in matrix form as
Ax = b,
where the matrices A, x and b are given by (3) and (4). In this section, we shall establish the following
important result.
PROPOSITION 2F. Every system of linear equations of the form (2) has either no solution, one
solution or inﬁnitely many solutions.
Proof. Clearly the system (2) has either no solution, exactly one solution, or more than one solution.
It remains to show that if the system (2) has two distinct solutions, then it must have inﬁnitely many
solutions. Suppose that x = u and x = v represent two distinct solutions. Then
Au = b and Av = b, so that
A(u − v) = Au − Av = b − b = 0,
where 0 is the zero m × 1 matrix. It now follows that for every c ∈ R, we have
A(u + c(u − v)) = Au + A(c(u − v)) = Au + c(A(u − v)) = b + c0 = b,
so that x = u + c(u − v) is a solution for every c ∈ R. Clearly we have infinitely many solutions.

2.3. Inversion of Matrices
For the remainder of this chapter, we shall deal with square matrices, those where the number of rows
equals the number of columns.
Definition. The n × n matrix
\[ I_n = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \]
where
\[ a_{ij} = \begin{cases} 1 & \text{if } i = j, \\ 0 & \text{if } i \neq j, \end{cases} \]
is called the identity matrix of order n.

Remark. Note that I₁ = ( 1 ) and
\[ I_4 = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}. \]

The following result is relatively easy to check. It shows that the identity matrix Iₙ acts as the identity
for multiplication of n × n matrices.
PROPOSITION 2G. For every n × n matrix A, we have AIₙ = IₙA = A.

This raises the following question: Given an n × n matrix A, is it possible to find another n × n matrix B such that AB = BA = Iₙ?

We shall postpone the full answer to this question until the next chapter. In Section 2.5, however, we shall be content with finding such a matrix B if it exists. In Section 2.6, we shall relate the existence of such a matrix B to some properties of the matrix A.
Definition. An n × n matrix A is said to be invertible if there exists an n × n matrix B such that AB = BA = Iₙ. In this case, we say that B is the inverse of A and write B = A⁻¹.

PROPOSITION 2H. Suppose that A is an invertible n × n matrix. Then its inverse A⁻¹ is unique.

Proof. Suppose that B satisfies the requirements for being the inverse of A. Then AB = BA = Iₙ. It follows that
\[ A^{-1} = A^{-1}I_n = A^{-1}(AB) = (A^{-1}A)B = I_nB = B. \]
Hence the inverse A⁻¹ is unique.
PROPOSITION 2J. Suppose that A and B are invertible n × n matrices. Then (AB)⁻¹ = B⁻¹A⁻¹.

Proof. In view of the uniqueness of inverse, it is sufficient to show that B⁻¹A⁻¹ satisfies the requirements for being the inverse of AB. Note that
\[ (AB)(B^{-1}A^{-1}) = A(B(B^{-1}A^{-1})) = A((BB^{-1})A^{-1}) = A(I_nA^{-1}) = AA^{-1} = I_n \]
and
\[ (B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}(AB)) = B^{-1}((A^{-1}A)B) = B^{-1}(I_nB) = B^{-1}B = I_n \]
as required.
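Propositions 2J and 2K can be checked numerically on any invertible pair; the two 2 × 2 matrices below are arbitrary choices for this illustration, not taken from the text:

```python
import numpy as np

# Hypothetical invertible 2 x 2 matrices, chosen only for illustration.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 2.0], [0.0, 1.0]])

lhs = np.linalg.inv(A @ B)                 # (AB)^(-1)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)  # B^(-1) A^(-1)
assert np.allclose(lhs, rhs)               # Proposition 2J

# (A^(-1))^(-1) recovers A (Proposition 2K):
assert np.allclose(np.linalg.inv(np.linalg.inv(A)), A)
```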
PROPOSITION 2K. Suppose that A is an invertible n × n matrix. Then (A⁻¹)⁻¹ = A.

Proof. Note that both (A⁻¹)⁻¹ and A satisfy the requirements for being the inverse of A⁻¹. Equality follows from the uniqueness of inverse.

2.4. An Application
In this section, we shall discuss an application of invertible matrices. Detailed discussion of the technique
involved will be covered in Chapter 7.
Definition. An n × n matrix
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \]
where a_{ij} = 0 whenever i ≠ j, is called a diagonal matrix of order n.

Example 2.4.1. The 3 × 3 matrices
\[ \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \]
are both diagonal.

Given an n × n matrix A, it is usually rather complicated to calculate
\[ A^k = \underbrace{A \cdots A}_{k}. \]
However, the calculation is rather simple when A is a diagonal matrix, as we shall see in the following example.
Example 2.4.2. Consider the 3 × 3 matrix
\[ A = \begin{pmatrix} 17 & -10 & -5 \\ 45 & -28 & -15 \\ -30 & 20 & 12 \end{pmatrix}. \]
Suppose that we wish to calculate A⁹⁸. It can be checked that if we take
\[ P = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 0 & 3 \\ -2 & 3 & 0 \end{pmatrix}, \quad\text{then}\quad P^{-1} = \begin{pmatrix} -3 & 2 & 1 \\ -2 & 4/3 & 1 \\ 3 & -5/3 & -1 \end{pmatrix}. \]
Furthermore, if we write
\[ D = \begin{pmatrix} -3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \]
then it can be checked that A = PDP⁻¹, so that
\[ A^{98} = (PDP^{-1}) \cdots (PDP^{-1}) = PD^{98}P^{-1} = P \begin{pmatrix} 3^{98} & 0 & 0 \\ 0 & 2^{98} & 0 \\ 0 & 0 & 2^{98} \end{pmatrix} P^{-1}. \]
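The identity A = PDP⁻¹ and the resulting shortcut for powers can be checked numerically. A sketch (not part of the original text), using the 9th power rather than the 98th merely to keep the numbers small:

```python
import numpy as np

P = np.array([[1, 1, 2], [3, 0, 3], [-2, 3, 0]], dtype=float)
D = np.diag([-3.0, 2.0, 2.0])
Pinv = np.linalg.inv(P)

A = P @ D @ Pinv
assert np.allclose(A, [[17, -10, -5], [45, -28, -15], [-30, 20, 12]])

# A^9 via the diagonalization: only the diagonal entries are raised to the power.
A9 = P @ np.diag(np.diag(D) ** 9) @ Pinv
assert np.allclose(A9, np.linalg.matrix_power(A, 9))
```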
This is much simpler than calculating A⁹⁸ directly. Note that this example is only an illustration. We have not discussed here how the matrices P and D are found.

2.5. Finding Inverses by Elementary Row Operations
In this section, we shall discuss a technique by which we can ﬁnd the inverse of a square matrix, if the
inverse exists. Before we discuss this technique, let us recall the three elementary row operations we
discussed in the previous chapter. These are: (1) interchanging two rows; (2) adding a multiple of one
row to another row; and (3) multiplying one row by a nonzero constant.
Let us now consider the following example.
Example 2.5.1. Consider the matrices
\[ A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad\text{and}\quad I_3 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]

• Let us interchange rows 1 and 2 of A and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} a_{21} & a_{22} & a_{23} \\ a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

• Let us interchange rows 2 and 3 of A and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{31} & a_{32} & a_{33} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

• Let us add 3 times row 1 to row 2 of A and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 3a_{11}+a_{21} & 3a_{12}+a_{22} & 3a_{13}+a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 3a_{11}+a_{21} & 3a_{12}+a_{22} & 3a_{13}+a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

• Let us add −2 times row 3 to row 1 of A and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} -2a_{31}+a_{11} & -2a_{32}+a_{12} & -2a_{33}+a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} -2a_{31}+a_{11} & -2a_{32}+a_{12} & -2a_{33}+a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & -2 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

• Let us multiply row 2 of A by 5 and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 5a_{21} & 5a_{22} & 5a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ 5a_{21} & 5a_{22} & 5a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

• Let us multiply row 3 of A by −1 and do likewise for I₃. We obtain respectively
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ -a_{31} & -a_{32} & -a_{33} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix}. \]
Note that
\[ \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ -a_{31} & -a_{32} & -a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}. \]

Let us now consider the problem in general.
Definition. By an elementary n × n matrix, we mean an n × n matrix obtained from Iₙ by an elementary row operation.

We state without proof the following important result. The interested reader may wish to construct a proof, taking into account the different types of elementary row operations.

PROPOSITION 2L. Suppose that A is an n × n matrix, and suppose that B is obtained from A by an elementary row operation. Suppose further that E is an elementary matrix obtained from Iₙ by the same elementary row operation. Then B = EA.
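Proposition 2L can be verified numerically for a particular operation. The sketch below (not part of the original text) uses the matrix of Example 2.5.2 and the operation "add −3 times row 1 to row 2", an arbitrary choice:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [3.0, 0.0, 3.0],
              [-2.0, 3.0, 0.0]])  # the matrix of Example 2.5.2

# Row operation applied directly to A ...
B = A.copy()
B[1] += -3 * B[0]

# ... and the elementary matrix E obtained by the same operation on I_3.
E = np.eye(3)
E[1] += -3 * E[0]

assert np.allclose(B, E @ A)  # Proposition 2L: B = EA
```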
We now adopt the following strategy. Consider an n × n matrix A. Suppose that it is possible to reduce the matrix A by a sequence α₁, α₂, . . . , αₖ of elementary row operations to the identity matrix Iₙ. If E₁, E₂, . . . , Eₖ are respectively the elementary n × n matrices obtained from Iₙ by the same elementary row operations α₁, α₂, . . . , αₖ, then
\[ I_n = E_k \cdots E_2E_1A. \]
We therefore must have
\[ A^{-1} = E_k \cdots E_2E_1 = E_k \cdots E_2E_1I_n. \]
It follows that the inverse A⁻¹ can be obtained from Iₙ by performing the same elementary row operations α₁, α₂, . . . , αₖ. Since we are performing the same elementary row operations on A and Iₙ, it makes sense to put them side by side. The process can then be described pictorially by
\[ (A \mid I_n) \xrightarrow{\alpha_1} (E_1A \mid E_1I_n) \xrightarrow{\alpha_2} (E_2E_1A \mid E_2E_1I_n) \xrightarrow{\alpha_3} \dots \xrightarrow{\alpha_k} (E_k \cdots E_2E_1A \mid E_k \cdots E_2E_1I_n) = (I_n \mid A^{-1}). \]
In other words, we consider an array with the matrix A on the left and the matrix Iₙ on the right. We now perform elementary row operations on the array and try to reduce the left-hand half to the matrix Iₙ. If we succeed in doing so, then the right-hand half of the array gives the inverse A⁻¹.
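The whole strategy is a Gauss–Jordan elimination on the array (A | Iₙ). A self-contained sketch follows; the function name and the partial-pivoting detail are our additions for numerical safety, not part of the text, and the example matrix is the one from Example 2.5.2 below:

```python
import numpy as np

def inverse_by_row_operations(A):
    """Reduce the array (A | I_n) by elementary row operations; if the
    left-hand half becomes I_n, the right-hand half is the inverse."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])  # the array (A | I_n)
    for j in range(n):
        # interchange rows so that the pivot entry is nonzero
        p = j + np.argmax(np.abs(M[j:, j]))
        if np.isclose(M[p, j], 0.0):
            raise ValueError("matrix is not invertible")
        M[[j, p]] = M[[p, j]]
        M[j] /= M[j, j]                 # multiply a row by a nonzero constant
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]  # add a multiple of one row to another
    return M[:, n:]                     # right-hand half is the inverse

A = [[1, 1, 2], [3, 0, 3], [-2, 3, 0]]  # the matrix of Example 2.5.2
print(inverse_by_row_operations(A))
```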
Example 2.5.2. Consider the matrix
\[ A = \begin{pmatrix} 1 & 1 & 2 \\ 3 & 0 & 3 \\ -2 & 3 & 0 \end{pmatrix}. \]
To find A⁻¹, we consider the array
\[ (A \mid I_3) = \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 3 & 0 & 3 & 0 & 1 & 0 \\ -2 & 3 & 0 & 0 & 0 & 1 \end{array}\right). \]
We now perform elementary row operations on this array and try to reduce the left-hand half to the matrix I₃. Note that if we succeed, then the final array is clearly in reduced row echelon form. We therefore follow the same procedure as reducing an array to reduced row echelon form. Adding −3 times row 1 to row 2, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ -2 & 3 & 0 & 0 & 0 & 1 \end{array}\right). \]
Adding 2 times row 1 to row 3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 5 & 4 & 2 & 0 & 1 \end{array}\right). \]
Multiplying row 3 by 3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 15 & 12 & 6 & 0 & 3 \end{array}\right). \]
Adding 5 times row 2 to row 3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 1 & 2 & 1 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Multiplying row 1 by 3, we obtain
\[ \left(\begin{array}{ccc|ccc} 3 & 3 & 6 & 3 & 0 & 0 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Adding 2 times row 3 to row 1, we obtain
\[ \left(\begin{array}{ccc|ccc} 3 & 3 & 0 & -15 & 10 & 6 \\ 0 & -3 & -3 & -3 & 1 & 0 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Adding −1 times row 3 to row 2, we obtain
\[ \left(\begin{array}{ccc|ccc} 3 & 3 & 0 & -15 & 10 & 6 \\ 0 & -3 & 0 & 6 & -4 & -3 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Adding 1 times row 2 to row 1, we obtain
\[ \left(\begin{array}{ccc|ccc} 3 & 0 & 0 & -9 & 6 & 3 \\ 0 & -3 & 0 & 6 & -4 & -3 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Multiplying row 1 by 1/3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -3 & 2 & 1 \\ 0 & -3 & 0 & 6 & -4 & -3 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Multiplying row 2 by −1/3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -3 & 2 & 1 \\ 0 & 1 & 0 & -2 & 4/3 & 1 \\ 0 & 0 & -3 & -9 & 5 & 3 \end{array}\right). \]
Multiplying row 3 by −1/3, we obtain
\[ \left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -3 & 2 & 1 \\ 0 & 1 & 0 & -2 & 4/3 & 1 \\ 0 & 0 & 1 & 3 & -5/3 & -1 \end{array}\right). \]
Note now that the array is in reduced row echelon form, and that the left-hand half is the identity matrix I₃. It follows that the right-hand half of the array represents the inverse A⁻¹. Hence
\[ A^{-1} = \begin{pmatrix} -3 & 2 & 1 \\ -2 & 4/3 & 1 \\ 3 & -5/3 & -1 \end{pmatrix}. \]
2
A=
0
0 1
2
3
0 3
5
.
0
1 2
4
0
0 To ﬁnd A−1 , we consider the array 1
2
(AI4 ) = 0
0 1
2
3
0 2
4
0
0 3
5
0
1 1
0
0
0 0
1
0
0 0
0
1
0 0
0
.
0
1 We now perform elementary row operations on this array and try to reduce the left hand half to the
matrix I4 . Adding −2 times row 1 to row 2, we obtain 1
0 0
0 1
0
3
0 2
0
0
0 3
−1
0
1 1
−2
0
0 0
1
0
0 0
0
1
0 0
0
.
0
1 2
0
0
0 3
−1
0
0 1
−2
0
−2 0
1
0
1 0
0
1
0 0
0
.
0
1 2
0
0
0 3
0
−1
0 1
0
−2
−2 0
0
1
1 0
1
0
0 0
0
.
0
1 Adding 1 times row 2 to row 4, we obtain 11
0 0 03
00
Interchanging rows 2 and 3, we obtain 1
0 0
0
Chapter 2 : Matrices 1
3
0
0 page 13 of 19 c Linear Algebra W W L Chen, 1982, 2005 At this point, we observe that it is impossible to reduce the left hand half of the array to I4 . For those
who remain unconvinced, let us continue. Adding 3 times row 3 to row 1, we obtain 1
0 0
0 1
3
0
0 2
0
0
0 0
0
−1
0 −5
0
−2
−2 3
0
1
1 0
1
0
0 0
0
.
0
1 Adding −1 times row 4 to row 3, we obtain 1
0 0
0 1
3
0
0 2
0
0
0 0
0
−1
0 −5
0
0
−2 30 0
0 1 0
.
0 0 −1
10 1 Multiplying row 1 by 6 (here we want to avoid fractions in the next two steps), we obtain 66
0 3 00
00 12
0
0
0 0
0
−1
0 −30
0
0
−2 18
0
0
1 0
0
−1
0 0
0
0
−2 3
0
0
1 00
1 0
.
0 −1
01 Adding −15 times row 4 to row 1, we obtain 6
0 0
0 6
3
0
0 12
0
0
0 0 −15
1
0
.
0 −1
0
1 Adding −2 times row 2 to row 1, we obtain 60
0 3 00
00 12
0
0
0 0
0
−1
0 0
0
0
−2 3
0
0
1 −2
1
0
0 −15
0
.
−1
1 Multiplying row 1 by 1/6, multiplying row 2 by 1/3, multiplying row 3 by −1 and multiplying row 4 by
−1/2, we obtain 10200
0 1 0 0 0 00010
00001 1/2
0
0
−1/2 −1/3 −5/2
1/3
0
.
0
1
0
−1/2 Note now that the array is in reduced row echelon form, and that the left hand half is not the identity
matrix I4 . Our technique has failed. In fact, the matrix A is not invertible. 2.6. Criteria for Invertibility
Examples 2.5.2–2.5.3 raise the question of when a given matrix is invertible. In this section, we shall
obtain some partial answers to this question. Our ﬁrst step here is the following simple observation.
PROPOSITION 2M. Every elementary matrix is invertible.
Proof. Let us consider elementary row operations. Recall that these are: (1) interchanging two rows;
(2) adding a multiple of one row to another row; and (3) multiplying one row by a nonzero constant.
These elementary row operations can clearly be reversed by elementary row operations. For (1), we
interchange the two rows again. For (2), if we have originally added c times row i to row j , then we can
reverse this by adding −c times row i to row j . For (3), if we have multiplied any row by a nonzero
constant c, we can reverse this by multiplying the same row by the constant 1/c. Note now that each
elementary matrix is obtained from In by an elementary row operation. The inverse of this elementary
matrix is clearly the elementary matrix obtained from In by the elementary row operation that reverses
the original elementary row operation.
Suppose that an n × n matrix B can be obtained from an n × n matrix A by a ﬁnite sequence of
elementary row operations. Then since these elementary row operations can be reversed, the matrix A
can be obtained from the matrix B by a ﬁnite sequence of elementary row operations.
Definition. An n × n matrix A is said to be row equivalent to an n × n matrix B if there exists a finite collection of elementary n × n matrices E₁, . . . , Eₖ such that B = Eₖ . . . E₁A.
Remark. Note that B = Eₖ . . . E₁A implies that A = E₁⁻¹ . . . Eₖ⁻¹B. It follows that if A is row equivalent to B, then B is row equivalent to A. We usually say that A and B are row equivalent.

The following result gives conditions equivalent to the invertibility of an n × n matrix A.
PROPOSITION 2N. Suppose that
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \]
and that
\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad 0 = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} \]
are n × 1 matrices, where x₁, . . . , xₙ are variables.
(a) Suppose that the matrix A is invertible. Then the system Ax = 0 of linear equations has only the trivial solution.
(b) Suppose that the system Ax = 0 of linear equations has only the trivial solution. Then the matrices A and Iₙ are row equivalent.
(c) Suppose that the matrices A and Iₙ are row equivalent. Then A is invertible.

Proof. (a) Suppose that x₀ is a solution of the system Ax = 0. Then since A is invertible, we have
\[ x_0 = I_nx_0 = (A^{-1}A)x_0 = A^{-1}(Ax_0) = A^{-1}0 = 0. \]
It follows that the trivial solution is the only solution.
(b) Note that if the system Ax = 0 of linear equations has only the trivial solution, then it can be reduced by elementary row operations to the system
\[ x_1 = 0, \quad \dots, \quad x_n = 0. \]
This is equivalent to saying that the array
\[ \left(\begin{array}{ccc|c} a_{11} & \dots & a_{1n} & 0 \\ \vdots & & \vdots & \vdots \\ a_{n1} & \dots & a_{nn} & 0 \end{array}\right) \]
can be reduced by elementary row operations to the reduced row echelon form
\[ \left(\begin{array}{ccc|c} 1 & \dots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \dots & 1 & 0 \end{array}\right). \]
Hence the matrices A and Iₙ are row equivalent.
(c) Suppose that the matrices A and Iₙ are row equivalent. Then there exist elementary n × n matrices E₁, . . . , Eₖ such that Iₙ = Eₖ . . . E₁A. By Proposition 2M, the matrices E₁, . . . , Eₖ are all invertible, so that
\[ A = E_1^{-1} \cdots E_k^{-1}I_n = E_1^{-1} \cdots E_k^{-1} \]
is a product of invertible matrices, and is therefore itself invertible.

2.7. Consequences of Invertibility

Suppose that the matrix
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix} \]
is invertible. Consider the system Ax = b, where
\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} \]
are n × 1 matrices, where x₁, . . . , xₙ are variables and b₁, . . . , bₙ ∈ R are arbitrary. Since A is invertible, let us consider x = A⁻¹b. Clearly
\[ Ax = A(A^{-1}b) = (AA^{-1})b = I_nb = b, \]
so that x = A⁻¹b is a solution of the system. On the other hand, let x₀ be any solution of the system. Then Ax₀ = b, so that
\[ x_0 = I_nx_0 = (A^{-1}A)x_0 = A^{-1}(Ax_0) = A^{-1}b. \]
It follows that the system has a unique solution. We have proved the following important result.

PROPOSITION 2P. Suppose that
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \]
and that
\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} \]
are n × 1 matrices, where x₁, . . . , xₙ are variables and b₁, . . . , bₙ ∈ R are arbitrary. Suppose further that the matrix A is invertible. Then the system Ax = b of linear equations has the unique solution x = A⁻¹b.
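Proposition 2P in code: for an invertible A, x = A⁻¹b solves the system and agrees with a library solver. A sketch (not part of the original text); the matrix is the one from Example 2.5.2, and the right-hand side b is an arbitrary choice:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [3.0, 0.0, 3.0],
              [-2.0, 3.0, 0.0]])  # invertible matrix of Example 2.5.2
b = np.array([1.0, 2.0, 3.0])     # an arbitrary right-hand side

x = np.linalg.inv(A) @ b          # the unique solution x = A^(-1) b
assert np.allclose(A @ x, b)

# In practice one calls a solver rather than forming the inverse explicitly:
assert np.allclose(x, np.linalg.solve(A, b))
```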
We next attempt to study the question in the opposite direction.

PROPOSITION 2Q. Suppose that
\[ A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{pmatrix}, \]
and that
\[ x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix} \]
are n × 1 matrices, where x₁, . . . , xₙ are variables. Suppose further that for every b₁, . . . , bₙ ∈ R, the system Ax = b of linear equations is soluble. Then the matrix A is invertible.

Proof. Suppose that
\[ b_1 = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad \dots, \quad b_n = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}. \]
In other words, for every j = 1, . . . , n, bⱼ is an n × 1 matrix with entry 1 on row j and entry 0 elsewhere. Now let
\[ x_1 = \begin{pmatrix} x_{11} \\ \vdots \\ x_{n1} \end{pmatrix}, \quad \dots, \quad x_n = \begin{pmatrix} x_{1n} \\ \vdots \\ x_{nn} \end{pmatrix} \]
denote respectively solutions of the systems of linear equations
\[ Ax = b_1, \quad \dots, \quad Ax = b_n. \]
It is easy to check that
\[ A \,( x_1 \ \dots \ x_n ) = ( b_1 \ \dots \ b_n ); \]
in other words,
\[ A \begin{pmatrix} x_{11} & \dots & x_{1n} \\ \vdots & & \vdots \\ x_{n1} & \dots & x_{nn} \end{pmatrix} = I_n, \]
so that A is invertible.
PROPOSITION 2R. In the notation of Proposition 2N, the following four statements are equivalent:
(a) The matrix A is invertible.
(b) The system Ax = 0 of linear equations has only the trivial solution.
(c) The matrices A and In are row equivalent.
(d) The system Ax = b of linear equations is soluble for every n × 1 matrix b.
Chapter 2 : Matrices page 17 of 19 c Linear Algebra W W L Chen, 1982, 2005 Problems for Chapter 2
1. Consider the four matrices 5
4,
1 2
A = 1
2 1
9 B= 7
2 2
7 9
1 , 1
2
C=
1
3 0
1
1
2 4
3
,
5
1 1
D = 2
1 0
1
3 7
2.
0 Calculate all possible products.
2. In each of the following cases, determine whether the products AB and BA are both deﬁned; if
so, determine also whether AB and BA have the same number of rows and the same number of
columns; if so, determine also whether AB = BA:
0
4 3
5 b) A = 1
3 −1 5
04 c) A = 2 −1
32 a) A = 2 −1
32 2
and B = 3
1 and B = and B = 1
12 1
6
5 −4
1 1 −4
2
0
5 and B = 0
−2 3
0 3
d) A = −2
1 0
5
0 0
0
−1 2 −5
, and ﬁnd α, β, γ ∈ R, not all zero, such that the matrix
31
2
αI + βA + γA is the zero matrix. 3. Evaluate A2 , where A = −4
. Show that A2 is the zero matrix.
−6
αβ
b) Find all 2 × 2 matrices B =
such that B 2 is the zero matrix.
γδ 4. a) Let A = 6
9 5. Prove that if A and B are matrices such that I − AB is invertible, then the inverse of I − BA is
given by the formula (I − BA)−1 = I + B (I − AB )−1 A.
[Hint: Write C = (I − AB )−1 . Then show that (I − BA)(I + BCA) = I .]
6. For each of the matrices below, use elementary row operations to ﬁnd its inverse, if the inverse
exists: 111
1 2 −2
a) 1 −1 1 b) 1 5 3 001
2 6 −1 152
234
c) 1 1 7 d) 3 4 2 0 −3 4
233 1 a b+c
e) 1 b a + c 1 c a+b
Chapter 2 : Matrices page 18 of 19 c Linear Algebra W W L Chen, 1982, 2005 7. a) Using elementary row operations, show that the inverse of 2
1 2
1 5
2
4
3 8
3
7
5 5
1 2
3 3 −2 −2 5 0 −2
1 −1 is 1 −5
−2 3 .
1
0
0 −1 b) Without performing any further elementary row operations, use part (a) to solve the system of
linear equations
2x1 + 5x2 + 8x3 + 5x4 = 0,
x1 + 2x2 + 3x3 + x4 = 1,
2x1 + 4x2 + 7x3 + 2x4 = 0,
x1 + 3x2 + 5x3 + 3x4 = 1.
8. Consider the matrix 1
1
A=
2
2 0
1
1
0 3
5
9
6 1
5
.
8
3 a) Use elementary row operations to ﬁnd the inverse of A.
b) Without performing any further elementary row operations, use your solution in part (a) to solve
the system of linear equations
x1
+ 3x3 + x4 = 1,
x1 + x2 + 5x3 + 5x4 = 0,
2x1 + x2 + 9x3 + 8x4 = 0,
2x1
+ 6x3 + 3x4 = 0. Chapter 2 : Matrices page 19 of 19 ...
This note was uploaded on 06/13/2009 for the course TAM 455 taught by Professor Petrina during the Fall '08 term at Cornell University (Engineering School).
