Problem Set 2: Solutions
J. Scholtz
October 15, 2006

Question 1: 1.5/4
To show that the set {e1, . . . , en} is linearly independent we must show that the equation

c1 e1 + c2 e2 + . . . + cn en = 0

is satisfied only if c1 = . . . = cn = 0. Since e1 = (1, 0, . . . , 0) and so on, the left-hand side equals (c1, c2, . . . , cn), so the equation reads

(c1, c2, . . . , cn) = (0, 0, . . . , 0),

which can only be satisfied if every ci is zero.

Question 2: 1.5/13
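As a quick numerical sanity check (not a substitute for the proof), one can verify the same fact in Python with numpy; the dimension n = 5 below is an arbitrary choice for illustration:

```python
import numpy as np

n = 5  # arbitrary choice of dimension for illustration
# The standard basis vectors e_1, ..., e_n are the rows of the identity matrix.
E = np.eye(n)
# c1*e1 + ... + cn*en is just (c1, ..., cn), so the only combination giving
# the zero vector is the trivial one; equivalently, E has full rank n.
assert np.linalg.matrix_rank(E) == n
```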
a) Two vectors

Claim: If u and v are linearly independent then so are u + v and u − v.

Proof: The equation a(u + v) + b(u − v) = 0 expands as (a + b)u + (a − b)v = 0. Since u and v are linearly independent by assumption, a + b = 0 and a − b = 0. Because F is not of characteristic 2, a = −b and a = b together imply a = b = 0, and so our vectors are linearly independent.

Claim: If u + v and u − v are linearly independent then so are u and v.

Proof: If u + v and u − v are linearly independent then so are x = (u + v)/2 and y = (u − v)/2. (Dividing by 2 is possible since char(F) ≠ 2.) Then observe that u = x + y and v = x − y and use the previous part.

b) Three vectors

This is almost the same as the previous part. Please see me if you need more explanation.

Question 3: 1.5/17
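The two-vector claim can be illustrated numerically. A minimal sketch, using random vectors in R^3 (which are independent for a generic draw; the seed below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two random vectors u, v in R^3; a generic draw is linearly independent.
u, v = rng.normal(size=3), rng.normal(size=3)
assert np.linalg.matrix_rank(np.stack([u, v])) == 2

# Over R (characteristic 0, in particular not 2), u+v and u-v
# are then independent as well:
assert np.linalg.matrix_rank(np.stack([u + v, u - v])) == 2
```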
An upper triangular matrix with nonzero diagonal entries has the form

    | k11 k12 ... k1n |
A = |  0  k22 ... k2n |
    |  .   .   .   .  |
    |  0   0  ... knn |

Its rows are vectors of the form vi = (0, . . . , 0, kii, . . . , kin), where the first i − 1 entries are zero. A linear combination of them gives

a1 v1 + . . . + an vn = (a1 k11, a1 k12 + a2 k22, . . . , a1 k1n + . . . + an knn) = 0.

But k11 ≠ 0 since the diagonal entries are nonzero, hence a1 = 0. The second entry then reads a2 k22 = 0, and again since k22 ≠ 0 we get a2 = 0. This process can be repeated finitely many times (n times) to show that a1 = . . . = an = 0, and hence the set is linearly independent.

Question 4: 1.6/3(de)
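A numerical illustration of the same fact: a random upper triangular matrix whose diagonal is kept nonzero has full rank, i.e. independent rows. (The size n = 4 and the random entries are arbitrary choices for the sketch.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # arbitrary size for illustration
# Random upper triangular matrix ...
A = np.triu(rng.normal(size=(n, n)))
# ... with the diagonal forced to be nonzero.
np.fill_diagonal(A, rng.uniform(1.0, 2.0, size=n))
# Its rows are linearly independent, i.e. the matrix has rank n:
assert np.linalg.matrix_rank(A) == n
```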
part d) One can check that the only solution to

a(−1 + 2x + 4x^2) + b(3 − 4x − 10x^2) + c(−2 − 5x − 6x^2) = 0

is a = b = c = 0, hence these three polynomials are linearly independent. Since dim(P2(R)) = 3, three linearly independent polynomials in P2(R) form a basis.

part e) We can see that

−7(1 + 2x − x^2) + 2(4 − 2x + x^2) + (−1 + 18x − 9x^2) = 0.

Therefore this set is not linearly independent and so it cannot be a basis.

Question 5: 1.6/14
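Both parts can be checked mechanically by writing each polynomial as its coefficient vector (constant, x, x^2) and computing ranks:

```python
import numpy as np

# Part (d): coefficient vectors of -1+2x+4x^2, 3-4x-10x^2, -2-5x-6x^2.
D = np.array([[-1,  2,   4],
              [ 3, -4, -10],
              [-2, -5,  -6]])
# Rank 3 means only the trivial combination gives 0: a basis of P2(R).
assert np.linalg.matrix_rank(D) == 3

# Part (e): coefficient vectors of 1+2x-x^2, 4-2x+x^2, -1+18x-9x^2.
E = np.array([[ 1,  2, -1],
              [ 4, -2,  1],
              [-1, 18, -9]])
# The explicit dependence -7*p1 + 2*p2 + p3 = 0 holds ...
assert np.allclose(-7 * E[0] + 2 * E[1] + E[2], 0)
# ... so the set is dependent and cannot be a basis.
assert np.linalg.matrix_rank(E) < 3
```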
a) Case 1: Let W1 = {(a1, . . . , a5) ∈ F^5 : a1 − a3 − a4 = 0}. My claim is that the set {(1, 0, 0, 1, 0), (0, 1, 0, 0, 0), (0, 0, 1, −1, 0), (0, 0, 0, 0, 1)} is a basis.

Checking span:

a1(1, 0, 0, 1, 0) + a2(0, 1, 0, 0, 0) + a3(0, 0, 1, −1, 0) + a5(0, 0, 0, 0, 1) = (a1, a2, a3, a1 − a3, a5),

and since a4 = a1 − a3, this is a general vector in W1.

Checking linear independence: by the same computation, if this combination is to equal zero then a1 = a2 = a3 = a5 = 0, so the set is linearly independent. Therefore W1 has dimension 4.

b) Case 2: Let W2 = {(a1, . . . , a5) ∈ F^5 : a2 = a3 = a4 and a1 + a5 = 0}. My claim is that the set {(0, 1, 1, 1, 0), (1, 0, 0, 0, −1)} is a basis. One needs to check that this set spans W2 and is linearly independent; the procedure is the same as in a), so I will skip it (you should not skip it in your problem set). Hence W2 is two-dimensional.

Note: Observe that in both cases the 5-dimensional space gets reduced to a 4- and a 2-dimensional subspace by imposing conditions. In the first case there was only 1 condition, and 4 = 5 − 1; in the second case there were three conditions, and 2 = 5 − 3. Such reasoning can help you build intuition about the dimension of the resulting spaces, but it is unfortunately not rigorous enough to be used as a proof.

Question 6: 1.6/17
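The claimed bases can be verified numerically over R: each vector should satisfy the defining conditions, and the sets should have full rank:

```python
import numpy as np

# Case 1: claimed basis of W1 = {a in F^5 : a1 - a3 - a4 = 0}.
B1 = np.array([[1, 0, 0,  1, 0],
               [0, 1, 0,  0, 0],
               [0, 0, 1, -1, 0],
               [0, 0, 0,  0, 1]])
# Every basis vector satisfies a1 - a3 - a4 = 0 ...
assert np.all(B1[:, 0] - B1[:, 2] - B1[:, 3] == 0)
# ... and the four vectors are independent, so dim(W1) = 4.
assert np.linalg.matrix_rank(B1) == 4

# Case 2: claimed basis of W2 = {a : a2 = a3 = a4 and a1 + a5 = 0}.
B2 = np.array([[0, 1, 1, 1,  0],
               [1, 0, 0, 0, -1]])
assert np.all(B2[:, 1] == B2[:, 2]) and np.all(B2[:, 2] == B2[:, 3])
assert np.all(B2[:, 0] + B2[:, 4] == 0)
assert np.linalg.matrix_rank(B2) == 2  # dim(W2) = 2
```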
A general skew-symmetric matrix satisfies the equation A^T = −A, which implies aij = −aji (and in particular aii = 0). Therefore it takes the form

    |   0    c12  ...  c1n |
A = | −c12    0   ...  c2n |
    |   .     .    .    .  |
    | −c1n  −c2n  ...   0  |

Hence the natural candidate for a basis is the set of matrices Aij, i < j, where Aij has aij = −aji = 1 and all other entries equal to zero. Counting entries: there are n^2 entries in total; take away n for the diagonal entries and divide by 2 due to the skew-symmetry. Hence there are N = n(n − 1)/2 of these matrices. However, we need to prove they really form a basis.

Span: the sum Σ_{i<j} cij Aij is exactly the general skew-symmetric matrix displayed above, hence the set spans.

Linear independence: if Σ_{i<j} cij Aij is to equal the zero matrix, then clearly cij = 0 for all i < j.

Hence this is a basis, and the skew-symmetric n × n matrices form a subspace of dimension n(n − 1)/2.

Question 7: 1.6/22
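A small script can build these basis matrices explicitly and confirm both the count n(n − 1)/2 and their independence (here for the arbitrary choice n = 4):

```python
import numpy as np

n = 4  # arbitrary size for illustration
basis = []
for i in range(n):
    for j in range(i + 1, n):
        A = np.zeros((n, n))
        A[i, j], A[j, i] = 1.0, -1.0  # a_ij = -a_ji = 1, all else zero
        basis.append(A)

# There are n(n-1)/2 such matrices ...
assert len(basis) == n * (n - 1) // 2
# ... each is skew-symmetric ...
assert all(np.array_equal(A.T, -A) for A in basis)
# ... and, flattened to vectors, they are linearly independent:
M = np.stack([A.ravel() for A in basis])
assert np.linalg.matrix_rank(M) == n * (n - 1) // 2
```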
It turns out that the condition W1 ⊂ W2 is both sufficient and necessary.

Proof: Observe that W1 ⊂ W2 ⟺ W1 = W1 ∩ W2. The forward implication W1 ⊂ W2 ⟹ dim(W1) = dim(W1 ∩ W2) is then trivial. For the converse, consider a basis of the subspace W1 ∩ W2 and call it β∩. We can extend this basis to a basis of W1 and call it β1. The statement dim(W1) = dim(W1 ∩ W2) is equivalent to |β1| = |β∩|, so the extension added no vectors; hence β∩ = β1, which means W1 = W1 ∩ W2.

Question 8: 1.6/26
Claim: A basis for this subspace (the polynomials f in Pn(R) with f(a) = 0) is the set {x − a, x^2 − a^2, . . . , x^n − a^n}, and hence the dimension of this subspace is n.

Proof: First we show linear independence. Suppose

c1(x − a) + c2(x^2 − a^2) + . . . + cn(x^n − a^n) = Σ ci x^i − Σ ci a^i = 0.

This must be the zero polynomial, i.e. it must vanish for every x. Substituting x = 0 gives Σ ci a^i = 0, so Σ ci x^i = 0 as a polynomial, and since a polynomial is zero only if all of its coefficients vanish, c1 = c2 = . . . = cn = 0.

Then we check span. Let a general polynomial in our subspace be of the form f(x) = c0 + c1 x + . . . + cn x^n. I claim that c0 = −Σ_{i≥1} ci a^i: indeed f(a) = 0, so f(a) − c0 = Σ_{i≥1} ci a^i = −c0, hence the equality. Therefore

f(x) = Σ_{i≥1} ci x^i − Σ_{i≥1} ci a^i = Σ_{i≥1} ci(x^i − a^i),

which is a linear combination of our claimed basis. Hence our basis spans the subspace.

Question 9: 1.6/31
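One can check the claim numerically by representing each polynomial x^i − a^i by its coefficient vector: each basis polynomial vanishes at a, and the n vectors are independent. (The values n = 4 and a = 2 are arbitrary choices for the sketch.)

```python
import numpy as np

n, a = 4, 2.0  # arbitrary degree bound and point, for illustration
# Coefficient vectors (constant, x, ..., x^n) of x^i - a^i for i = 1..n.
basis = []
for i in range(1, n + 1):
    c = np.zeros(n + 1)
    c[i] = 1.0       # the x^i term
    c[0] = -a ** i   # the constant term -a^i
    basis.append(c)

# Each basis polynomial vanishes at x = a (np.polyval wants
# highest-degree coefficients first, hence the reversal) ...
assert all(abs(np.polyval(c[::-1], a)) < 1e-9 for c in basis)
# ... and the n coefficient vectors are linearly independent:
assert np.linalg.matrix_rank(np.stack(basis)) == n
```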
part a) We know that W1 ∩ W2 forms a subspace and hence has a basis β∩. Since W1 ∩ W2 ⊂ W1, this basis can be extended to a basis of W1. Hence |β1| ≥ |β∩|, and therefore dim(W1) ≥ dim(W1 ∩ W2).

part b) Claim: Let the bases of W1 and W2 be β1 and β2 respectively. Then β1 ∪ β2 contains a basis of W1 + W2.

Proof: We just need to show that β1 ∪ β2 spans W1 + W2. Since by definition W1 + W2 = {u + v : u ∈ W1, v ∈ W2}, the span of β1 ∪ β2 contains all vectors of the form u + v where u ∈ W1 and v ∈ W2. Hence it spans, and therefore some basis β+ of W1 + W2 satisfies β+ ⊂ β1 ∪ β2. Therefore |β+| ≤ |β1 ∪ β2| ≤ |β1| + |β2|, hence dim(W1 + W2) ≤ dim(W1) + dim(W2).

Question 10: 1.6/32
a) Example 1: Consider W1 = span((1, 0, 0)) and W2 = span((1, 0, 0), (0, 1, 0)). Then W1 ∩ W2 = W1 and the equality is trivially satisfied.

b) Example 2: Consider W1 = span((1, 0, 0), (0, 1, 0)) and W2 = span((0, 0, 1)). Then W1 + W2 = R^3, and from the bases dim(W1) = 2 while dim(W2) = 1, so the equality is satisfied.

Question 11: 1.7/3
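The dimensions in Example 2 can be confirmed with a small rank computation (the helper dim_span below is just an illustrative name for "rank of the generating set"):

```python
import numpy as np

def dim_span(vectors):
    # The dimension of a span is the rank of the matrix of generators.
    return np.linalg.matrix_rank(np.array(vectors))

# Example 2: W1 = span{(1,0,0), (0,1,0)}, W2 = span{(0,0,1)} in R^3.
W1 = [(1, 0, 0), (0, 1, 0)]
W2 = [(0, 0, 1)]
assert dim_span(W1) == 2
assert dim_span(W2) == 1
# dim(W1 + W2) is the rank of all generators taken together:
assert dim_span(W1 + W2) == 3   # = dim(W1) + dim(W2), since W1 ∩ W2 = {0}
```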
Suppose the set of real numbers were a vector space of finite dimension n over the rational numbers. Then any linearly independent set would have at most n vectors. Consider the set {1, π, π^2, . . . , π^n} of n + 1 vectors. A rational linear combination Σ ai π^i cannot equal 0 with at least one of the ai nonzero, because then there would exist a nonzero polynomial f(x) = Σ ai x^i with rational coefficients having π as a root, contradicting the fact that π is transcendental. Hence for any n ∈ N there exists a set of n + 1 linearly independent vectors in R, and so R cannot be finite-dimensional over Q.