Linear Algebra with Applications (3rd Edition)

MA294: A Sample Correction, Homework #2
Acmae El Yacoubi
September 5, 2007

ex. 2.2.2
Given the relation (2.2.3, p. 63), the matrix of the rotation through 60° is

    T = \begin{pmatrix} \cos(60^\circ) & -\sin(60^\circ) \\ \sin(60^\circ) & \cos(60^\circ) \end{pmatrix}
      = \begin{pmatrix} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}.

ex. 2.2.10
The unit vector on the line L is

    u = \frac{1}{5} \begin{pmatrix} 4 \\ 3 \end{pmatrix}.

For any x \in \mathbb{R}^2 with x = \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}, we have

    proj_L(x) = (u \cdot x)\, u
              = \left( \tfrac{4}{5} x_1 + \tfrac{3}{5} x_2 \right) \begin{pmatrix} 4/5 \\ 3/5 \end{pmatrix}
              = \begin{pmatrix} 0.64\, x_1 + 0.48\, x_2 \\ 0.48\, x_1 + 0.36\, x_2 \end{pmatrix}
              = \begin{pmatrix} 0.64 & 0.48 \\ 0.48 & 0.36 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}.

Since this holds for all x, the matrix of the projection onto the line L is

    A = \begin{pmatrix} 0.64 & 0.48 \\ 0.48 & 0.36 \end{pmatrix}.

ex. 2.2.16
Let T be the reflection about a line L in \mathbb{R}^2, where L is shown in the figure below.

[Figure: the line L, making an angle \theta with the x-axis, together with the red vector to be reflected.]

a. The red vector is reflected about the line L directly on the figure.

b. By definition, the matrix of T is obtained by mapping the basis vectors e_1 and e_2; that is, it equals [T(e_1) \; T(e_2)]. Given the definition of the reflection (def. 2.2.2, p. 60), we have

    T(x) = 2\, proj_L(x) - x = 2 (u \cdot x)\, u - x,

where u is the unit vector on L. In our case

    u = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}.

Thus

    T(e_1) = 2 (u \cdot e_1)\, u - e_1
           = \begin{pmatrix} 2\cos^2\theta - 1 \\ 2\cos\theta \sin\theta \end{pmatrix}
           = \begin{pmatrix} \cos 2\theta \\ \sin 2\theta \end{pmatrix}

and

    T(e_2) = 2 (u \cdot e_2)\, u - e_2
           = \begin{pmatrix} 2\cos\theta \sin\theta \\ 2\sin^2\theta - 1 \end{pmatrix}
           = \begin{pmatrix} \sin 2\theta \\ -\cos 2\theta \end{pmatrix}.

Finally, the matrix of T is

    T(\theta) = \begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}.

ex. 2.2.22
Let e_1, e_2, e_3 be the unit vectors on the x-axis, y-axis and z-axis, respectively. The counter-clockwise rotation about the y-axis, denoted T, acts on the unit vectors as follows:

    T(e_1) = \cos\theta\, e_1 - \sin\theta\, e_3,
    T(e_2) = e_2,
    T(e_3) = \sin\theta\, e_1 + \cos\theta\, e_3.

Thus its matrix is

    T = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix}.

ex. 2.2.42
T is the projection onto the line L, so for any vector x we have

    T(x) = proj_L(x) = (u \cdot x)\, u,

where u is the unit vector on L. Since the vector u lies on the line L, it is invariant under the projection T, i.e. T(u) = u. We can show the required result by applying T twice to x:

    T(T(x)) = T((u \cdot x)\, u) = (u \cdot x)\, T(u) = (u \cdot x)\, u = T(x),

where we used the linearity of T. Since this holds for all x \in \mathbb{R}^2, we finally obtain

    T^2 = T.

Note: We say that T is idempotent.

ex. 2.3.14
The square matrix A is defined by

    A = \begin{pmatrix} 2 & 5 & 0 & 0 \\ 1 & 3 & 0 & 0 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 2 & 5 \end{pmatrix}.

To check whether it is invertible, we form the augmented matrix [A \,|\, I_4] and compute its rref. The row reduction terminates with the identity block on the left,

    [A \,|\, I_4] \;\longrightarrow\; \cdots \;\longrightarrow\; [I_4 \,|\, A^{-1}],

so A is invertible and its inverse is

    A^{-1} = \begin{pmatrix} 3 & -5 & 0 & 0 \\ -1 & 2 & 0 & 0 \\ 0 & 0 & 5 & -2 \\ 0 & 0 & -2 & 1 \end{pmatrix}.

Note: For your information, A is a block-diagonal matrix. In this case the inverse can easily be found by inverting the 2×2 sub-matrices A_1 and A_2 separately:

    A^{-1} = \begin{pmatrix} A_1^{-1} & O_2 \\ O_2 & A_2^{-1} \end{pmatrix},
    \quad\text{where}\quad
    A_1 = \begin{pmatrix} 2 & 5 \\ 1 & 3 \end{pmatrix},\quad
    A_2 = \begin{pmatrix} 1 & 2 \\ 2 & 5 \end{pmatrix},\quad
    O_2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}.

ex. 2.3.34
Let A be the matrix defined by

    A = \begin{pmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}.

a. A is invertible if and only if \det(A) \neq 0. Since A is diagonal, its determinant is easy to compute: it is the product of the diagonal entries. Thus

    \det(A) \neq 0 \iff abc \neq 0 \iff (a \neq 0 \text{ and } b \neq 0 \text{ and } c \neq 0).

If \det(A) \neq 0, the inverse is

    A^{-1} = \begin{pmatrix} 1/a & 0 & 0 \\ 0 & 1/b & 0 \\ 0 & 0 & 1/c \end{pmatrix}.

b. The rule above applies to diagonal matrices of any size n: a diagonal matrix A = [a_{ij}] of order n is invertible if and only if

    \prod_{i=1}^{n} a_{ii} \neq 0.
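The results above lend themselves to a quick numerical sanity check. The following NumPy sketch is not part of the original solution; it simply restates the matrices from ex. 2.2.10, 2.2.42 and 2.3.14 and verifies them with numpy.linalg.

```python
import numpy as np

# ex. 2.2.10: the matrix of the projection onto L is u u^T for the unit vector u on L.
u = np.array([4/5, 3/5])
A = np.outer(u, u)
print(A)                                  # [[0.64 0.48] [0.48 0.36]]

# ex. 2.2.42: a projection is idempotent, so A A = A.
print(np.allclose(A @ A, A))              # True

# ex. 2.3.14: the block-diagonal matrix and the inverse claimed above.
M = np.array([[2, 5, 0, 0],
              [1, 3, 0, 0],
              [0, 0, 1, 2],
              [0, 0, 2, 5]])
M_inv = np.array([[ 3, -5,  0,  0],
                  [-1,  2,  0,  0],
                  [ 0,  0,  5, -2],
                  [ 0,  0, -2,  1]])
print(np.allclose(np.linalg.inv(M), M_inv))   # True
```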
ex. 2.3.40
Let A = [C_1, C_2, \ldots, C_n] be a matrix of order n such that C_i = C_j for some pair (i, j) with i \neq j, i.e. A has two equal columns. Assume that T is the linear transformation associated with A, represented with respect to the basis e = \{e_1, e_2, \ldots, e_n\}. We know that each column C_k of A is the image of the basis vector e_k (see Fact 2.1.2, p. 48) under T. Thus

    C_i = C_j \;\Longrightarrow\; T(e_i) = T(e_j)
            \;\Longrightarrow\; T(e_i - e_j) = 0
            \;\Longrightarrow\; \exists\, u \neq 0 \text{ with } T(u) = 0 \quad (u = e_i - e_j)
            \;\Longrightarrow\; T \text{ (and thus } A\text{) is not invertible.} \qquad \text{Q.E.D.}

ex. 2.3.42 (Permutation matrices)
A permutation matrix has a '1' exactly once in each row and in each column. By swapping rows we obtain the identity matrix, with a leading 1 in each row; thus a permutation matrix is invertible.

Let \sigma be a permutation of \{1, 2, \ldots, n\} and P_\sigma the associated permutation matrix, defined by P_\sigma = [\delta_{i,\sigma(j)}]. Its inverse, P_\sigma^{-1} = P_{\sigma^{-1}}, is also a permutation matrix. You can easily check this by computing the entry in the i-th row and j-th column of P_\sigma P_{\sigma^{-1}}.

ex. 2.3.52
We look for b \in \mathbb{R}^4 such that Ax = b is inconsistent. The augmented matrix of the linear system is

    [A \,|\, b] = \left(\begin{array}{ccc|c} 0 & 1 & 2 & b_1 \\ 0 & 2 & 4 & b_2 \\ 0 & 3 & 6 & b_3 \\ 1 & 4 & 8 & b_4 \end{array}\right).

Row reduction of [A \,|\, b] (reorder the rows, then eliminate) gives

    \left(\begin{array}{ccc|c} 1 & 4 & 8 & b_4 \\ 0 & 1 & 2 & b_1 \\ 0 & 0 & 0 & b_2 - 2b_1 \\ 0 & 0 & 0 & b_3 - 3b_1 \end{array}\right).

The third and last rows show that the system is consistent only if b_2 - 2b_1 = 0 and b_3 - 3b_1 = 0. Thus, to obtain an inconsistent system we only need to make sure that one of these two conditions is not met. For example, we can choose b_1 = 1, b_2 = 0 and anything for b_3 and b_4. An example of such a b is

    b = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 1 \end{pmatrix}.

ex. 2.4.16 - 26 (True or False)
16 True.
17 False. (A + B)^2 = A^2 + AB + BA + B^2; the matrix product is not commutative.
18 True.
19 False. Inversion is not distributive: in general (A + B)^{-1} \neq A^{-1} + B^{-1}.
20 False. (A - B)(A + B) = A^2 + AB - BA - B^2; the matrix product is not commutative.
21 True.
22 False. The matrix product is not commutative.
23 True. (ABA^{-1})^3 = ABA^{-1}\, ABA^{-1}\, ABA^{-1} = AB^3 A^{-1}.
24 True.
25 True.

ex. 2.4.26 (Block matrices)
Let us write

    A = \begin{pmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
      = \begin{pmatrix} A_1 & A_2 \\ A_3 & A_4 \end{pmatrix},
    \qquad
    B = \begin{pmatrix} 1 & 2 & 2 & 3 \\ 3 & 4 & 4 & 5 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 3 & 4 \end{pmatrix}
      = \begin{pmatrix} B_1 & B_2 \\ B_3 & B_4 \end{pmatrix},

where A_1 = A_2 = A_4 = I_2 and A_3 = B_3 = O_2. To compute the product AB, we multiply the corresponding 2×2 blocks A_i and B_j:

    AB = \begin{pmatrix} A_1 B_1 + A_2 B_3 & A_1 B_2 + A_2 B_4 \\ A_3 B_1 + A_4 B_3 & A_3 B_2 + A_4 B_4 \end{pmatrix}
       = \begin{pmatrix} B_1 & B_2 + B_4 \\ O_2 & B_4 \end{pmatrix}.

Plugging in the blocks of B, we obtain

    AB = \begin{pmatrix} 1 & 2 & 3 & 5 \\ 3 & 4 & 7 & 9 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 3 & 4 \end{pmatrix}.

ex. 2.4.34
We have

    AB(AB)^{-1} = I_n, \quad\text{i.e.}\quad A \cdot \bigl(B(AB)^{-1}\bigr) = AC, \quad\text{where } C = B(AB)^{-1}.

Thus, according to (Fact 2.4.9, p. 85), A and C are both invertible and

    A^{-1} = C = B(AB)^{-1}.

Likewise, we have

    (AB)^{-1}AB = I_n, \quad\text{i.e.}\quad \bigl((AB)^{-1}A\bigr) \cdot B = DB, \quad\text{where } D = (AB)^{-1}A.

Thus, according to (Fact 2.4.9, p. 85), B and D are both invertible and

    B^{-1} = D = (AB)^{-1}A.

ex. 2.4.40
We have

    B^{-1} = \begin{pmatrix} 1 & 2 \\ 3 & 5 \end{pmatrix}
    \quad\text{and}\quad
    (AB)^{-1} = \begin{pmatrix} 1 & 3 \\ 2 & 5 \end{pmatrix}.

Since (AB)^{-1}AB = I_n, we have (see ex. 2.4.34)

    B^{-1} = (AB)^{-1} A.

Writing A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} and plugging it into the previous equation, we obtain

    B^{-1} = \begin{pmatrix} 1 & 2 \\ 3 & 5 \end{pmatrix}
           = \begin{pmatrix} 1 & 3 \\ 2 & 5 \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix}
           = \begin{pmatrix} a + 3c & b + 3d \\ 2a + 5c & 2b + 5d \end{pmatrix}.

This gives two 2×2 linear systems to solve (one for a, c and one for b, d). We finally obtain a = 4, b = 5, c = -1, d = -1, so

    A = \begin{pmatrix} 4 & 5 \\ -1 & -1 \end{pmatrix}.

ex. 2.4.42
a. The angle between the lines P and Q is \theta = 30^\circ. Let u_P and u_Q be the unit vectors on P and Q, respectively, and let T(x) = ref_Q(ref_P(x)). For a given vector x we use the notations

    \alpha = \angle(x, u_P) \quad\text{and}\quad \beta = \angle(ref_P(x), u_Q).

Note that \alpha + \beta = \theta = 30^\circ. Thus, the angle between x and T(x) is

    \angle(x, T(x)) = \angle(x, ref_P(x)) + \angle(ref_P(x), T(x))
                    = 2\angle(x, u_P) + 2\angle(ref_P(x), u_Q)
                    = 2\alpha + 2\beta
                    = 60^\circ.

Since a reflection does not change lengths, x and T(x) also have the same length.

b. From (a) we deduce that x and T(x) have the same length, and that the angle between these two vectors is 2\theta. Thus, T is the counter-clockwise rotation through the angle 2\theta = 60^\circ.

c. From (b) and (Fact 2.2.3, p. 63), we conclude that

    T = \begin{pmatrix} \cos(60^\circ) & -\sin(60^\circ) \\ \sin(60^\circ) & \cos(60^\circ) \end{pmatrix}
      = \begin{pmatrix} 1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix}.
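As before, an optional NumPy cross-check (not part of the original homework) for ex. 2.3.42, 2.4.26 and 2.4.40. The permutation sigma below is an arbitrary example chosen for illustration.

```python
import numpy as np

# ex. 2.3.42: the inverse of a permutation matrix is the permutation matrix of the
# inverse permutation, which numerically is just the transpose.
sigma = [2, 0, 1]                            # arbitrary permutation of {0, 1, 2} (0-based)
P = np.eye(3)[:, sigma]                      # column j of P is e_{sigma(j)}
print(np.allclose(np.linalg.inv(P), P.T))    # True

# ex. 2.4.26: the block-wise product agrees with the ordinary matrix product.
A = np.block([[np.eye(2), np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])
B = np.array([[1., 2., 2., 3.],
              [3., 4., 4., 5.],
              [0., 0., 1., 2.],
              [0., 0., 3., 4.]])
print(A @ B)                                 # [[1 2 3 5] [3 4 7 9] [0 0 1 2] [0 0 3 4]]

# ex. 2.4.40: since B^{-1} = (AB)^{-1} A, we can recover A as (AB) B^{-1}.
B_inv = np.array([[1., 2.], [3., 5.]])
AB_inv = np.array([[1., 3.], [2., 5.]])
print(np.linalg.inv(AB_inv) @ B_inv)         # [[ 4.  5.] [-1. -1.]]
```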
ex. 2.4.50
a. Let A and E be the two matrices defined by

    A = \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & k \end{pmatrix} = \begin{pmatrix} R_1 \\ R_2 \\ R_3 \end{pmatrix},
    \qquad
    E = \begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

The product EA is

    EA = \begin{pmatrix} a & b & c \\ d - 3a & e - 3b & f - 3c \\ g & h & k \end{pmatrix}
       = \begin{pmatrix} R_1 \\ R_2 - 3R_1 \\ R_3 \end{pmatrix}.

Note that E = I_3 - 3E_{21}, where

    E_{21} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}.

E is an elementary matrix: it adds (-3) times the 1st row of A to its 2nd row, i.e. R_2 \leftarrow R_2 + (-3)R_1.

b. Let A be an arbitrary matrix defined by its rows and E the matrix defined as follows:

    A = \begin{pmatrix} R_1 \\ R_2 \\ R_3 \end{pmatrix},
    \qquad
    E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/4 & 0 \\ 0 & 0 & 1 \end{pmatrix}.

The product EA is

    EA = \begin{pmatrix} R_1 \\ \tfrac{1}{4} R_2 \\ R_3 \end{pmatrix}.

Hence EA is obtained from A by multiplying the latter's 2nd row, R_2, by the scalar 1/4. E is an elementary matrix.

c. We want to swap the last two rows of a 3×3 matrix A. Consider the elementary matrix E defined by

    E = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix},

which is obtained from the identity matrix I_3 by swapping the latter's last two rows. We can easily check that

    EA = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R_1 \\ R_2 \\ R_3 \end{pmatrix}
       = \begin{pmatrix} R_1 \\ R_3 \\ R_2 \end{pmatrix}.

d. In (a), E is obtained from the identity matrix I_3 by adding (-3) times the latter's first row to its second row. In (b), E is obtained from I_3 by multiplying the latter's second row by a scalar. In (c), E is obtained from I_3 by swapping the latter's last two rows.
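To close, here is a short illustrative sketch (again not part of the original solution; the test matrix A is arbitrary) showing the three elementary matrices of ex. 2.4.50 acting on a 3×3 matrix.

```python
import numpy as np

A = np.arange(1., 10.).reshape(3, 3)   # arbitrary 3x3 test matrix with rows R1, R2, R3

E_a = np.eye(3); E_a[1, 0] = -3.0      # (a) adds (-3) times row 1 to row 2
E_b = np.eye(3); E_b[1, 1] = 0.25      # (b) multiplies row 2 by 1/4
E_c = np.eye(3)[[0, 2, 1]]             # (c) swaps rows 2 and 3

print(E_a @ A)   # second row becomes R2 - 3*R1
print(E_b @ A)   # second row becomes (1/4)*R2
print(E_c @ A)   # rows 2 and 3 are swapped
```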