Math 416 Abstract Linear Algebra
Fall 2011, section E1
Practice midterm 2
Name: Solutions
• This is a (long) practice exam. The real exam will consist of 4 problems.
• In the real exam, no calculators, electronic devices, books, or notes may be used.
• Show your work. No credit for answers without justification.
• Good luck!
1. /15  2. /10  3. /10  4. /10  5. /5  6. /10  7. /15  8. /10  9. /10  10. /10  11. /10  12. /5  Total: /120

Section 2.5
Problem 1. Let A be an m × n matrix.
a. (5 pts) Show that A has linearly independent columns if and only if A : Rn → Rm preserves
linear independence, in the following sense: For any collection of vectors v1 , . . . , vk ∈ Rn we
have
{v1 , . . . , vk } is linearly independent ⇒ {Av1 , . . . , Avk } is linearly independent.
(⇒) Assume A has linearly independent columns; we want to show A preserves linear independence.
Let {v1 , . . . , vk } be a linearly independent collection in Rn . We want to show {Av1 , . . . , Avk }
is linearly independent. Consider the equation

c1 Av1 + . . . + ck Avk = 0 ∈ Rm .

Rewriting the left-hand side,

A(c1 v1 + . . . + ck vk ) = 0 ∈ Rm
⇒ c1 v1 + . . . + ck vk = 0 ∈ Rn      since A has linearly independent columns
⇒ c1 = . . . = ck = 0                 since {v1 , . . . , vk } is linearly independent.

Hence the equation has only the trivial solution c1 = . . . = ck = 0, i.e. {Av1 , . . . , Avk } is linearly independent.
(⇐) Assume A preserves linear independence. In particular, A preserves the linear independence of the standard basis vectors e1 , . . . , en of Rn , so that the columns of A, that is
a1 = Ae1 , . . . , an = Aen are linearly independent.

b. (5 pts) Show that A : Rn → Rm preserves linear independence if and only if for every
subspace S ⊆ Rn we have dim AS = dim S .
A preserves linear independence means
{v1 , . . . , vk } is linearly independent ⇒ {Av1 , . . . , Avk } is linearly independent
which is equivalent to
dim Span{v1 , . . . , vk } = k ⇒ dim Span{Av1 , . . . , Avk } = k.
Noting Span{Av1 , . . . , Avk } = A Span{v1 , . . . , vk } and the fact that any subspace S ⊆ Rn is
the span of some vectors, the above condition is equivalent to
S ⊆ Rn is a k-dimensional subspace ⇒ dim AS = k = dim S ,
that is, for every subspace S ⊆ Rn we have dim AS = dim S .

Figure 1: The composite AB , with B : Rp → Rn and A : Rn → Rm .

c. (5 pts) Assume A has linearly independent columns. Let B be an n × p matrix. Show
rank AB = rank B . (See figure 1.)
rank AB = dim im(AB )
= dim AB (Rp )
= dim A(B Rp )
= dim B Rp      since A has linearly independent columns (by parts a and b)
= dim im B
= rank B.
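The rank identity can also be checked numerically. The following is a small sketch (not part of the original solutions) using exact arithmetic; the matrices A and B below are hypothetical examples chosen so that A has linearly independent columns.

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (given as a list of rows) via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# A (3x2) has linearly independent columns; B (2x3) has rank 1
A = [[1, 0], [0, 1], [2, 3]]
B = [[1, 2, 3], [2, 4, 6]]
```

Since A has independent columns, rank AB should equal rank B here.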
Section 2.6

Problem 2. (10 pts) Show that the general solution of the system with augmented matrix

        [  3 −1 −1   9 | 1 ]
[A b] = [  2  1  6   1 | 9 ]
        [ −3  2  5 −12 | 4 ]

is

{ (−1, 4, 1, 1) + s (−8, 1, 2, 3) + t (−11, 0, 3, 4) : s, t ∈ R }.

It suffices to check that (1) the vector (−1, 4, 1, 1) is a solution of Ax = b and (2) Null A = Span{ (−8, 1, 2, 3), (−11, 0, 3, 4) }.
(1) We compute

A (−1, 4, 1, 1) = ( 3(−1) − 4 − 1 + 9, 2(−1) + 4 + 6 + 1, −3(−1) + 8 + 5 − 12 ) = (1, 9, 4) = b,

so that (−1, 4, 1, 1) is indeed a solution of Ax = b.
(2) We compute

A (−8, 1, 2, 3) = (−24 − 1 − 2 + 27, −16 + 1 + 12 + 3, 24 + 2 + 10 − 36) = (0, 0, 0),
A (−11, 0, 3, 4) = (−33 − 3 + 36, −22 + 18 + 4, 33 + 15 − 48) = (0, 0, 0),

which gives the inclusion Span{ (−8, 1, 2, 3), (−11, 0, 3, 4) } ⊆ Null A.

However, A has rank at least 2 (the first two columns are linearly independent) so A has nullity at most 4 − 2 = 2. Therefore A has nullity exactly 2 and the inclusion is an equality:

Span{ (−8, 1, 2, 3), (−11, 0, 3, 4) } = Null A.
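Both checks can be reproduced mechanically; a quick sketch (not part of the exam) using the matrix and vectors above:

```python
A = [[3, -1, -1, 9],
     [2, 1, 6, 1],
     [-3, 2, 5, -12]]
b = [1, 9, 4]

def matvec(M, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

p = [-1, 4, 1, 1]     # particular solution
n1 = [-8, 1, 2, 3]    # spanning vectors of Null A
n2 = [-11, 0, 3, 4]
```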
Section 2.7
Problem 3. (10 pts) Let

A = [ 1 −3 2 ]
    [ 3 −9 6 ] .

Find a basis of each of the four fundamental subspaces of A, that is Col A, Null A, Row A, Null(A^T).

Row reducing,

A = [ 1 −3 2 ] ∼ [ 1 −3 2 ]
    [ 3 −9 6 ]   [ 0  0 0 ] .

Because the only pivot is in column 1, Col A has a basis {a1 } = { (1, 3) }.

The free variables are x2 , x3 . The corresponding basis of Null A is { (3, 1, 0), (−2, 0, 1) }.

A basis of Row A is given by the only pivot row in the echelon form: { (1, −3, 2) }.

Transposing,

A^T = [  1  3 ]   [ 1 3 ]
      [ −3 −9 ] ∼ [ 0 0 ]
      [  2  6 ]   [ 0 0 ] .

The only free variable is x2 . The corresponding basis of Null A^T is { (−3, 1) }.
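The claimed null space bases can be confirmed directly; a small sketch (not part of the original solutions):

```python
def matvec(M, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1, -3, 2], [3, -9, 6]]
At = [list(col) for col in zip(*A)]   # transpose of A

null_basis = [[3, 1, 0], [-2, 0, 1]]  # claimed basis of Null A
left_null = [-3, 1]                   # claimed basis vector of Null A^T
```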
Problem 4. Previously, we have shown that row operations preserve the null space and do
not preserve the column space (in general).
a. (5 pts) Do row operations preserve the row space? Prove your answer.
Yes they do. If we have row equivalent matrices A ∼ B , then the rows of B are linear combinations of the rows of A, which implies Row B ⊆ Row A.
Moreover, row operations are invertible (by other row operations), which means the rows of A
are linear combinations of the rows of B , giving the inclusion Row A ⊆ Row B and therefore
equality Row A = Row B .

b. (5 pts) Do row operations preserve the left null space? Prove your answer.
No they don't. Take for example the row equivalent matrices

A = [ 1 ] ∼ [ 1 ] = B.
    [ 1 ]   [ 2 ]

We have

Null A^T = Span{ (−1, 1) } ≠ Span{ (−2, 1) } = Null B^T.
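The counterexample can be checked in one line each way; a sketch (not part of the exam):

```python
def matvec(M, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

At = [[1, 1]]   # transpose of A = [1; 1]
Bt = [[1, 2]]   # transpose of B = [1; 2]

w = [-1, 1]     # spans Null A^T
```

The vector w lies in the left null space of A but not of B, so row operations changed the left null space.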
Problem 5. (5 pts) Let A, B be m × n matrices satisfying Null A = Null B and Col A =
Col B . Can we conclude A = B ? Prove your answer.
No we can't. Consider for example the 1 × 1 matrices A = [1] and B = [2]. They satisfy
Null A = Null B = {0} and Col A = Col B = R, but A ≠ B.
More generally, take any n-dimensional subspace S ⊆ Rm , assuming n ≤ m. For any basis
{a1 , . . . , an } of S , the m × n matrix A = [a1 . . . an ] satisfies Null A = {0} and Col A = S .
However, there are many different bases of S , yielding many different matrices with the same
null space {0} and same column space S .

Section 2.8
Problem 6. Consider the bases {u1 = (1, 1), u2 = (3, 2)} and {v1 = (2, 1), v2 = (−3, 1)} of R2 .

a. (8 pts) Find the transition matrix from the basis {u1 , u2 } to the basis {v1 , v2 }.

The transition matrix is

V^(−1) U = [ 2 −3 ]^(−1) [ 1 3 ] = (1/5) [  1 3 ] [ 1 3 ] = (1/5) [ 4 9 ]
           [ 1  1 ]      [ 1 2 ]         [ −1 2 ] [ 1 2 ]         [ 1 1 ] .

b. (2 pts) Find the coordinates of 5u1 − 2u2 in the basis {v1 , v2 }.

Using the transition matrix, the coordinates with respect to {v1 , v2 } are

(1/5) [ 4 9 ] [  5 ] = (1/5) [ 2 ]
      [ 1 1 ] [ −2 ]         [ 3 ] .
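As a sanity check, 5u1 − 2u2 should equal the same combination of v1 , v2 given by the computed coordinates; a sketch with exact fractions (not part of the original solutions):

```python
from fractions import Fraction as F

u1, u2 = [1, 1], [3, 2]
v1, v2 = [2, 1], [-3, 1]

def comb(c1, x, c2, y):
    """The linear combination c1*x + c2*y of two vectors."""
    return [c1 * a + c2 * b for a, b in zip(x, y)]

# transition matrix (1/5)[4 9; 1 1] applied to the u-coordinates (5, -2)
coords = [F(4, 5) * 5 + F(9, 5) * (-2),
          F(1, 5) * 5 + F(1, 5) * (-2)]
```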
Problem 7. Consider the "integration" map

T : P1 → P2 ,   p(x) → (T p)(x) := ∫_0^x p(t) dt.

For example, the polynomial 6x is sent to ∫_0^x 6t dt = [3t^2]_0^x = 3x^2 . Note that T is linear.

a. (5 pts) Find the matrix representing T in the monomial bases {1, x} of P1 and {1, x, x^2 } of P2 .

T (1) = x and T (x) = x^2 / 2. The matrix representing T is

[T ]{1,x,x^2}{1,x} = [ 0  0  ]
                     [ 1  0  ]
                     [ 0 1/2 ] .

b. (8 pts) Find the matrix representing T in the bases {2x + 1, x − 4} of P1 and
{1, x − 1, (x − 1)2 } of P2 .
We compute this matrix representation using that of part (a) along with transition matrices:

[T ]{1,x−1,(x−1)^2}{2x+1,x−4} = [id]{1,x−1,(x−1)^2}{1,x,x^2} [T ]{1,x,x^2}{1,x} [id]{1,x}{2x+1,x−4}

                              = [ 1 −1  1 ]^(−1) [ 0  0  ] [ 1 −4 ]
                                [ 0  1 −2 ]      [ 1  0  ] [ 2  1 ] .
                                [ 0  0  1 ]      [ 0 1/2 ]

Let us compute the inverse:

[ 1 −1  1 | 1 0 0 ]   [ 1 0 −1 | 1 1 0 ]   [ 1 0 0 | 1 1 1 ]
[ 0  1 −2 | 0 1 0 ] ∼ [ 0 1 −2 | 0 1 0 ] ∼ [ 0 1 0 | 0 1 2 ]
[ 0  0  1 | 0 0 1 ]   [ 0 0  1 | 0 0 1 ]   [ 0 0 1 | 0 0 1 ]

so that the matrix we are looking for is

[ 1 1 1 ] [ 0  0  ] [ 1 −4 ]   [ 1 1 1 ] [ 0  0  ]   [ 2 −7/2 ]
[ 0 1 2 ] [ 1  0  ] [ 2  1 ] = [ 0 1 2 ] [ 1 −4  ] = [ 3  −3  ]
[ 0 0 1 ] [ 0 1/2 ]            [ 0 0 1 ] [ 1 1/2 ]   [ 1  1/2 ] .
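The two columns can be double-checked by integrating the basis polynomials directly and re-expanding around x = 1; a sketch in exact arithmetic (not part of the original solutions):

```python
from fractions import Fraction as F
from math import factorial

def integrate(coeffs):
    """Antiderivative with zero constant term; coeffs[i] is the coefficient of x^i."""
    return [F(0)] + [F(c, i + 1) for i, c in enumerate(coeffs)]

def taylor_at_1(coeffs):
    """Coordinates of the polynomial in the basis 1, (x-1), (x-1)^2, ..."""
    n = len(coeffs)
    return [sum(F(factorial(i), factorial(i - k)) * coeffs[i] for i in range(k, n))
            / factorial(k) for k in range(n)]

col1 = taylor_at_1(integrate([1, 2]))    # T(2x + 1) = x^2 + x
col2 = taylor_at_1(integrate([-4, 1]))   # T(x - 4) = x^2/2 - 4x
```

The resulting coordinate vectors are exactly the two columns found above.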
c. (2 pts) Find the coordinates of T (5(2x + 1) + 3(x − 4)) in the basis {1, x − 1, (x − 1)^2 } of P2 .

Using part (b), the coordinates are

[ 2 −7/2 ] [ 5 ]   [ −1/2 ]
[ 3  −3  ] [ 3 ] = [   6  ]
[ 1  1/2 ]         [ 13/2 ] .
Problem 8. (10 pts) Let A be a 2 × 2 matrix which has an eigenvalue λ1 = 1 with corresponding eigenvector v1 = (5, 1), and an eigenvalue λ2 = 4 with corresponding eigenvector v2 = (2, 1). Find A.
We know Av1 = λ1 v1 = v1 and Av2 = λ2 v2 = 4v2 , which can be written as
AV = A [v1 v2] = [Av1 Av2] = [v1 4v2] = [ 5 8 ; 1 4 ]

⇒ A = [ 5 8 ; 1 4 ] V^(−1) = [ 5 8 ; 1 4 ] [ 5 2 ; 1 1 ]^(−1)
    = [ 5 8 ; 1 4 ] · (1/3) [ 1 −2 ; −1 5 ]
    = (1/3) [ −3 30 ; −3 18 ]
    = [ −1 10 ; −1 6 ].

Remark: Another (conceptually cleaner) explanation is that the transformation A is represented by the matrix [ 1 0 ; 0 4 ] in the basis {v1 , v2 }. By the change of basis formula, we obtain the standard matrix representation A = V [ 1 0 ; 0 4 ] V^(−1).
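The answer is easy to verify against the two given eigenpairs; a sketch (not part of the exam):

```python
def matvec(M, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[-1, 10], [-1, 6]]
v1, v2 = [5, 1], [2, 1]
```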
Chapter 3
Problem 9. (10 pts) Compute the determinant of the given 4 × 4 matrix. The answer may depend on the parameter a ∈ R.

Each cofactor expansion along a row or column containing a single nonzero entry picks out that entry together with a cofactor sign. Carrying out such expansions, the computation reduces to the sparse pattern

[ 0 0 5 0 ; 0 3 0 0 ; 2 0 0 0 ; 0 0 0 a ],

whose only nonzero term in the permutation expansion is the product of the entries 5, 3, 2, a, with sign −1. Therefore

det = −(5)(3)(2)(a) = −30a.
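The final sparse determinant can be checked for a sample value of the parameter (a = 7 is an arbitrary choice); a sketch (not part of the original solutions):

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        if entry:
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += (-1) ** j * entry * det(minor)
    return total

a = 7  # arbitrary sample value of the parameter
M = [[0, 0, 5, 0],
     [0, 3, 0, 0],
     [2, 0, 0, 0],
     [0, 0, 0, a]]
```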
Section 4.1
Problem 10. (10 pts) Find the eigenvalues and corresponding eigenvectors of the matrix

A = [ 7 −2 ; 16 −5 ].

The characteristic polynomial is

det(A − λI ) = det [ 7−λ −2 ; 16 −5−λ ] = (7 − λ)(−5 − λ) + 32
             = −35 − 2λ + λ^2 + 32
             = λ^2 − 2λ − 3
             = (λ − 3)(λ + 1),

so the eigenvalues are λ1 = 3, λ2 = −1.

Let us find the eigenvectors.

λ1 = 3 : A − 3I = [ 4 −2 ; 16 −8 ] ∼ [ 2 −1 ; 0 0 ]. Take v1 = (1, 2).

λ2 = −1 : A + I = [ 8 −2 ; 16 −4 ] ∼ [ 4 −1 ; 0 0 ]. Take v2 = (1, 4).
Problem 11. (10 pts) (#4.1.6) A linear map A : Rn → Rn is called nilpotent if Ak = 0
for some k ≥ 1. Show that if A is nilpotent, then 0 is the only eigenvalue of A.
If λ ≠ 0 were an eigenvalue of A, then we would have Av = λv for some v ≠ 0 and therefore
Ak v = λk v ≠ 0 for all k ≥ 1. That is, A could not be nilpotent.
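A concrete instance (an illustrative example, not from the exam): the 2 × 2 shift block below squares to zero, and both coefficients of its characteristic polynomial λ^2 − (trace)λ + (det) vanish, so 0 is its only eigenvalue.

```python
def matmul(A, B):
    """Matrix product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

N = [[0, 1], [0, 0]]    # N^2 = 0, so N is nilpotent
N2 = matmul(N, N)

trace = N[0][0] + N[1][1]
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
```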
Moreover, Ak = 0 implies that A has a nontrivial kernel (if A were injective, then so would be its powers Ak, and Ak could not be the zero map). Therefore 0 is an eigenvalue of A.

Problem 12. (5 pts) Let A be a 2 × 2 matrix with the eigenvalue 7, of geometric multiplicity
2. Find A.
That the eigenvalue 7 has geometric multiplicity 2 means the eigenspace Null(A − 7I ) has
dimension 2. However, that eigenspace is a subspace of R2 , therefore it is all of R2 , i.e. all
vectors of R2 are eigenvectors of A with eigenvalue 7. In particular, we have Ae1 = 7e1 and
Ae2 = 7e2 , which gives
A = [ 7 0 ; 0 7 ] = 7I.
This note was uploaded on 11/07/2011 for the course MATH 416 taught by Professor Staff during the Fall '08 term at University of Illinois, Urbana Champaign.