MAT223 Final
December 2004
Note: I’ll often write column vectors horizontally and with commas to save space.
Part I
1. (2). Solving for X, X = (Aᵀ)⁻¹A⁻¹ = (AAᵀ)⁻¹. Now, det A = 1, so det(AAᵀ) = 1, which means that the inverse of AAᵀ = [a, b; c, d] is just [d, −b; −c, a], and we only need to compute a = [1, −1] · [1, −1] = 2.
2. (4). In A we have the solutions of a homogeneous system of linear equations, so we do have a subspace. In B the system is not linear, and though in C it is, it is not homogeneous.
3. (1). In the basis {1, x, x²}, the vectors in S are the rows of [0, −1, 1; −1, 1, 0; 1, k, 1], whose determinant is −(k + 2). Thus S is a basis unless k = −2. (Alternatively, use row reduction.)
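The determinant claim can be verified numerically; this sketch just evaluates the 3×3 determinant for several values of k:

```python
def det3(m):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# coordinate rows of the vectors in S relative to the basis {1, x, x^2}
for k in range(-5, 6):
    M = [[0, -1, 1], [-1, 1, 0], [1, k, 1]]
    assert det3(M) == -(k + 2)  # zero exactly when k = -2
```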
4. (1). The given basis for W is orthogonal, so we can compute projW X = (X·V1/‖V1‖²)V1 + (X·V2/‖V2‖²)V2 directly; with the given vectors the two terms cancel entrywise, giving projW X = 0.
5. (5). X = V3 − (V3·V1/‖V1‖²)V1 − (V3·V2/‖V2‖²)V2 = [1, 1, 0, 1] − (1/3)[1, −1, 0, 1] − (2/2)[1, 1, 0, 0] = (1/3)[−1, 1, 0, 2].
6. (2). You get the elementary matrix by applying the same row operation to the identity matrix.
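The Gram–Schmidt step in problem 5 can be sanity-checked with exact arithmetic via Python's fractions module:

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

V1 = [1, -1, 0, 1]
V2 = [1, 1, 0, 0]
V3 = [1, 1, 0, 1]

c1 = Fraction(dot(V3, V1), dot(V1, V1))  # 1/3
c2 = Fraction(dot(V3, V2), dot(V2, V2))  # 2/2 = 1
X = [Fraction(x) - c1 * a - c2 * b for x, a, b in zip(V3, V1, V2)]

# X = (1/3)[-1, 1, 0, 2], orthogonal to both V1 and V2
assert X == [Fraction(-1, 3), Fraction(1, 3), 0, Fraction(2, 3)]
assert dot(X, V1) == 0 and dot(X, V2) == 0
```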
7. (6). By Cramer’s Rule, x2 = det[a, −2b, c; d, −2e, f; g, −2h, k] / det[a, b, c; d, e, f; g, h, k] = −2 det[a, b, c; d, e, f; g, h, k] / det[a, b, c; d, e, f; g, h, k] = −2.
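The step pulling −2 out of the numerator is just multilinearity of the determinant in the second column; a quick randomized check:

```python
import random

def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

random.seed(0)
for _ in range(100):
    a, b, c, d, e, f, g, h, k = [random.randint(-9, 9) for _ in range(9)]
    M = [[a, b, c], [d, e, f], [g, h, k]]
    # replace the second column by -2 times itself (the constants column)
    N = [[a, -2 * b, c], [d, -2 * e, f], [g, -2 * h, k]]
    assert det3(N) == -2 * det3(M)  # multilinearity in a single column
```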
8. (5). A is false, since the eigenvalues of A⁻¹ are λ⁻¹ where λ is an eigenvalue of A, so they are usually different from those of A. B is true: if Av = λv, then A⁻¹v = λ⁻¹v, so (A⁻¹ + I)v = λ⁻¹v + v = ((1 + λ)/λ)v. C is false: not every invertible matrix is diagonalizable.
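A small numerical illustration of statement B, using a hypothetical diagonalizable matrix (not from the exam) with eigenpairs (2, [1, 0]) and (3, [1, 1]):

```python
from fractions import Fraction

A = [[Fraction(2), Fraction(1)], [Fraction(0), Fraction(3)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

Ainv = inv2(A)

for lam, v in [(Fraction(2), [Fraction(1), Fraction(0)]),
               (Fraction(3), [Fraction(1), Fraction(1)])]:
    assert matvec(A, v) == [lam * x for x in v]       # A v = lam v
    assert matvec(Ainv, v) == [x / lam for x in v]    # A^{-1} v = v / lam
    lhs = [a + b for a, b in zip(matvec(Ainv, v), v)]  # (A^{-1} + I) v
    assert lhs == [((1 + lam) / lam) * x for x in v]   # eigenvalue (1+lam)/lam
```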
9. (5). Expanding det A by the second row reduces it to 2×2 determinants; simplifying and factoring gives det A = 2k(k − 1)(k + 2).
10. (4). A is true: if AB is invertible, B⁻¹ = (AB)⁻¹A exists. B is also true: if AB = −(AB)ᵀ, taking determinants on both sides gives det A det B = (−1)³ det A det B, and since det A ≠ 0, it follows that det B = 0. C is false: if AB = BA, B could be the zero matrix, for example.
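Statement B's determinant argument can be illustrated with any 3×3 skew-symmetric matrix (a hypothetical example, since AB itself is not given):

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def scale(M, s):
    return [[s * x for x in row] for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# any 3x3 matrix M with M = -M^T satisfies det M = (-1)^3 det M, hence det M = 0
M = [[0, 2, -5], [-2, 0, 7], [5, -7, 0]]
assert M == scale(transpose(M), -1)        # M is skew-symmetric
assert det3(scale(M, -1)) == (-1) ** 3 * det3(M)
assert det3(M) == 0
```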
11. (6). Since dim(im T) + dim(ker T) = dim(domain) = 7, option A is correct. B is not, because the rank of A is dim(im T), not 8 minus that. Finally, C is also correct because rank A = dim(row A) = 7 − dim((row A)⊥).
12. (1). The third column of the standard matrix representation is just T([0, 0, 1]) = 2[0, 0, 1] − N = [−1, 0, 1].
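A one-line arithmetic check, assuming N = [1, 0, 1] (inferred from the stated answer, not visible in this preview):

```python
# T(e3) = 2*e3 - N, computed entrywise
e3 = [0, 0, 1]
N = [1, 0, 1]  # assumed normal vector, consistent with the answer [-1, 0, 1]
third_column = [2 * x - n for x, n in zip(e3, N)]
assert third_column == [-1, 0, 1]
```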
Part II
13.
a) If B = [a, b; c, d] is not invertible, det B = 0, that is, ad − bc = 0, so bc = ad. Therefore,
B² = [a² + bc, ab + bd; ac + cd, cb + d²] = [a² + ad, ab + bd; ac + cd, ad + d²] = (a + d)[a, b; c, d] = (a + d)B.
b) Assume that t1 X1 + t2 X2 + t3 X3 = 0; we must prove t1 = t2 = t3 = 0. Well, multiplying by the matrix A we get t1 A X1 + t2 A X2 + t3 A X3 = 0. Since {A X1, A X2, A X3} is assumed to be independent, we get t1 = t2 = t3 = 0 as desired.
The converse is false in general: for example, if A is the zero matrix, clearly {A X1, A X2, A X3} will be linearly dependent no matter what.
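The identity in part a) can be sanity-checked with a concrete hypothetical singular B (det B = 0):

```python
# hypothetical entries with det B = 1*6 - 2*3 = 0
a, b, c, d = 1, 2, 3, 6
assert a * d - b * c == 0

B = [[a, b], [c, d]]
B2 = [[a * a + b * c, a * b + b * d],
      [c * a + d * c, c * b + d * d]]
expected = [[(a + d) * a, (a + d) * b],
            [(a + d) * c, (a + d) * d]]
assert B2 == expected  # B^2 = (a + d) B, using bc = ad
```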
14.
a) Note that W⊥ is given as the solution set of a system of homogeneous linear equations, thus we can write it as the null space of the matrix of coefficients of said system: W⊥ = null A, where
A =
[1 0 −1 0 0]
[1 1 0 0 −1]
[0 1 1 0 −1]
[0 0 0 1 −1]
So we want a basis for W = (W⊥)⊥ = (null A)⊥. I claim that (null A)⊥ = row A. Consider an arbitrary X ∈ null A. To calculate the product AX, we take the dot product of each row of A with the column vector X, and since X ∈ null A, all of these products must be zero. This means that each row of A belongs to (null A)⊥, and so row A ⊆ (null A)⊥. To prove those subspaces are equal it is enough to show they have the same dimension, but dim(null A)⊥ = 5 − dim null A = rank A = dim row A.
So, we have W = row A. To find a basis for W, we simply use row reduction: R2 − R1 turns the second row into [0, 1, 1, 0, −1], and then R3 − R2 turns the third row into the zero row. The remaining nonzero rows show that {[1, 0, −1, 0, 0], [0, 1, 1, 0, −1], [0, 0, 0, 1, −1]} is a basis for W.
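The two row operations can be replayed mechanically; the coefficient matrix below is reconstructed from the garbled reduction in the original, so it is an assumption:

```python
def sub(r1, r2):
    return [x - y for x, y in zip(r1, r2)]

# coefficient matrix of the system defining W-perp (reconstructed, an assumption)
A = [[1, 0, -1, 0, 0],
     [1, 1, 0, 0, -1],
     [0, 1, 1, 0, -1],
     [0, 0, 0, 1, -1]]

A[1] = sub(A[1], A[0])      # R2 - R1
assert A[1] == [0, 1, 1, 0, -1]
A[2] = sub(A[2], A[1])      # R3 - R2
assert A[2] == [0, 0, 0, 0, 0]

# the surviving nonzero rows are the claimed basis for W
basis = [row for row in A if any(row)]
assert basis == [[1, 0, -1, 0, 0], [0, 1, 1, 0, -1], [0, 0, 0, 1, -1]]
```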
b) Let X ∈ W⊥; we must show X ∈ U⊥. For this we have to show that X · Y = 0 for any Y ∈ U. But any Y ∈ U is also in W (since U ⊆ W), and since X ∈ W⊥, we do have X · Y = 0.
15.
a) We need eigenvectors corresponding to the eigenvalues 2 and −1. First, let’s do the eigenvalue −1: we need to solve (A − (−1)I)X = 0, that is, (A + I)X = 0. The three equations in the system are the same, namely, x + y + z = 0. The general solution, of course, is given by x = t1, y = t2, z = −t1 − t2. We find a basis of the solution space by setting t1 = 1, t2 = 0 to get [1, 0, −1], and then t1 = 0, t2 = 1 to get [0, 1, −1].
Now for the eigenvalue 2: we solve the system (A − 2I)X = 0 by row reduction. Applying R1 ↔ R2, then R2 + 2R1 and R3 − R1, then R3 + R2, and finally −(1/3)R2, reduces [−2, 1, 1; 1, −2, 1; 1, 1, −2] to [1, −2, 1; 0, 1, −1; 0, 0, 0], which means that the solution space is given by z = t, y = t, x = t, so [1, 1, 1] is a basis for it.
Therefore, writing our eigenvectors as columns, P = [1, 0, 1; 0, 1, 1; −1, −1, 1], and we’ll have P⁻¹AP = diag(−1, −1, 2). A is diagonalizable because we found a basis of eigenvectors: the columns of P.
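The eigenvector claims can be verified directly; the matrix A below is reconstructed from the systems solved above ((A + I)X = 0 being x + y + z = 0 three times over), so it is an assumption:

```python
# reconstructed A: A + I is the all-ones matrix, eigenvalues -1, -1, 2
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
P = [[1, 0, 1], [0, 1, 1], [-1, -1, 1]]
eigvals = [-1, -1, 2]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

cols = [[P[i][j] for i in range(3)] for j in range(3)]  # columns of P
for lam, v in zip(eigvals, cols):
    assert matvec(A, v) == [lam * x for x in v]  # A v = lam v for each column
```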
b) Let P be such that P⁻¹AP is a diagonal matrix, say diag(a, b, c). Then we have
diag(a³, b³, c³) = diag(a, b, c)³ = (P⁻¹AP)³ = P⁻¹A³P = 0,
so a³ = b³ = c³ = 0. But this implies that a = b = c = 0, and so P⁻¹AP = 0. Solving for A, A = P·0·P⁻¹ = 0.
16.
a) Clearly both A and B have two-dimensional column spaces. By either row reduction or a little trial and error, we find that the columns of B are linear combinations of the columns of A: B1 = 2A1 − A2 and B2 = A1 + A2. This tells us that col B ⊆ col A. Since both spaces have dimension two, they must be equal.
b) Let X ∈ null A ∩ col A. Since X ∈ col A = im A, we know that X = AY for some vector Y. We also have X ∈ null A, so that AX = 0, i.e., A²Y = 0, so that Y ∈ null A² = null A. Therefore, X = AY = 0, as desired.
This note was uploaded on 01/19/2010 for the course MAT MAT223 taught by Professor Uppal during the Spring '09 term at University of Toronto Toronto.