SECTION 3.8 Matrices

In addition to routine exercises with matrix calculations, there are several exercises here asking for proofs of various properties of matrix operations. In most cases the proofs follow immediately from the definitions of the matrix operations and properties of operations on the set from which the entries in the matrices are drawn. Also, the important notion of the (multiplicative) inverse of a matrix is examined in Exercises 18-21. Keep in mind that some matrix operations are performed "entrywise," whereas others operate on whole rows or columns at a time. The general problem of efficient calculation of multiple matrix products, suggested by Exercises 23-25, is interesting and nontrivial. Exercise 31 foreshadows material in Section 8.4.

1. a) Since A has 3 rows and 4 columns, its size is 3 × 4.
   b) The third column of A is the 3 × 1 matrix
        [ 1 ]
        [ 4 ]
        [ 3 ]
   c) The second row of A is the 1 × 4 matrix [ 2 0 4 6 ].
   d) This is the element in the third row, second column, namely 1.
   e) The transpose of A is the 4 × 3 matrix
        [ 1 2 1 ]
        [ 1 0 1 ]
        [ 1 4 3 ]
        [ 3 6 7 ]
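For readers who like to check such bookkeeping in code, here is a small sketch (an illustration, not part of the printed solutions; the matrix A below is reconstructed from the answers above):

```python
# The matrix of Exercise 1, stored as a list of rows (reconstructed from
# the answers in this guide -- the exercise statement is not reproduced here).
A = [[1, 1, 1, 3],
     [2, 0, 4, 6],
     [1, 1, 3, 7]]

size = (len(A), len(A[0]))            # (rows, columns) -> (3, 4)
third_column = [row[2] for row in A]  # -> [1, 4, 3]
second_row = A[1]                     # -> [2, 0, 4, 6]
a32 = A[2][1]                         # third row, second column -> 1
A_t = [list(col) for col in zip(*A)]  # transpose: a 4 x 3 matrix
```

The `zip(*A)` idiom pairs up the i-th entries of all rows, which is exactly the transpose operation.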
3. a) We use the definition of matrix multiplication to obtain the four entries in the product AB. The (1,1)th entry is the sum a_11·b_11 + a_12·b_21 = 2·0 + 1·1 = 1. Similarly, the (1,2)th entry is the sum a_11·b_12 + a_12·b_22 = 2·4 + 1·3 = 11; the (2,1)th entry is the sum a_21·b_11 + a_22·b_21 = 3·0 + 2·1 = 2; and the (2,2)th entry is the sum a_21·b_12 + a_22·b_22 = 3·4 + 2·3 = 18. Therefore the answer is
        [ 1 11 ]
        [ 2 18 ]
   b) The calculation is similar. Again, to get the (i,j)th entry of the product, we need to add up all the products a_ik·b_kj. You can visualize "lifting" the ith row from the first factor (A) and placing it on top of the jth column from the second factor (B), multiplying the pairs of numbers that lie on top of each other, and taking the sum. Here we have
        [ 1 -1 ]   [ 3 -2 -1 ]
        [ 0  1 ] · [ 1  0  2 ]
        [ 2  3 ]

          [ 1·3 + (-1)·1    1·(-2) + (-1)·0    1·(-1) + (-1)·2 ]   [ 2 -2 -3 ]
        = [ 0·3 + 1·1       0·(-2) + 1·0       0·(-1) + 1·2    ] = [ 1  0  2 ]
          [ 2·3 + 3·1       2·(-2) + 3·0       2·(-1) + 3·2    ]   [ 9 -4  4 ]
   c) The calculation is similar to the previous parts:
        [  4 -3 ]
        [  3 -1 ]   [ -1  3 2 -2 ]
        [  0 -2 ] · [  0 -1 4 -3 ]
        [ -1  5 ]

          [ 4·(-1) + (-3)·0       4·3 + (-3)·(-1)       4·2 + (-3)·4       4·(-2) + (-3)·(-3)    ]
        = [ 3·(-1) + (-1)·0       3·3 + (-1)·(-1)       3·2 + (-1)·4       3·(-2) + (-1)·(-3)    ]
          [ 0·(-1) + (-2)·0       0·3 + (-2)·(-1)       0·2 + (-2)·4       0·(-2) + (-2)·(-3)    ]
          [ (-1)·(-1) + 5·0       (-1)·3 + 5·(-1)       (-1)·2 + 5·4       (-1)·(-2) + 5·(-3)    ]

          [ -4 15 -4   1 ]
        = [ -3 10  2  -3 ]
          [  0  2 -8   6 ]
          [  1 -8 18 -13 ]
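The entrywise rule just described is easy to turn into code. The following is a minimal sketch (not from the text; `mat_mul` is a hypothetical helper), checked here against parts a) and b):

```python
# A minimal matrix-product sketch: the (i,j)th entry of AB is the sum of
# A[i][k] * B[k][j] over k -- the "row on top of column" rule described above.
def mat_mul(A, B):
    assert len(A[0]) == len(B)  # columns of A must equal rows of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Part a): the two 2 x 2 factors and the answer computed above.
answer_a = mat_mul([[2, 1], [3, 2]], [[0, 4], [1, 3]])   # [[1, 11], [2, 18]]

# Part b): a 3 x 2 times a 2 x 3 factor.
answer_b = mat_mul([[1, -1], [0, 1], [2, 3]],
                   [[3, -2, -1], [1, 0, 2]])             # [[2, -2, -3], [1, 0, 2], [9, -4, 4]]
```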
5. First we need to observe that A = [a_ij] must be a 2 × 2 matrix; it must have two rows since the matrix it is being multiplied by on the left has two columns, and it must have two columns since the answer obtained has two columns. If we write out what the matrix multiplication means, then we obtain the following system of linear equations:
        2a_11 + 3a_21 = 3        2a_12 + 3a_22 = 0
         a_11 + 4a_21 = 1         a_12 + 4a_22 = 2
   Solving these equations by elimination of variables (or other means; it's really two systems of two equations each in two unknowns), we obtain a_11 = 9/5, a_12 = -6/5, a_21 = -1/5, a_22 = 4/5. As a check we compute that, indeed,
        [ 2 3 ]   [  9/5 -6/5 ]   [ 3 0 ]
        [ 1 4 ] · [ -1/5  4/5 ] = [ 1 2 ]
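The check can also be done in exact arithmetic with Python's fractions module (a sketch, not part of the printed solutions):

```python
from fractions import Fraction as F

# Verify the solution to Exercise 5: multiply the given left factor by the
# computed A and compare with the required right-hand side.
left = [[2, 3], [1, 4]]
A = [[F(9, 5), F(-6, 5)],
     [F(-1, 5), F(4, 5)]]
product = [[sum(left[i][k] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
# product equals [[3, 0], [1, 2]], as claimed.
```

Using `Fraction` avoids any floating-point round-off in the 1/5-denominator entries.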
7. Since the (i,j)th entry of 0 + A is the sum of the (i,j)th entry of 0 (namely 0) and the (i,j)th entry of A, this entry is the same as the (i,j)th entry of A. Therefore, by the definition of matrix equality, 0 + A = A. A similar argument shows that A + 0 = A.

9. We simply look at the (i,j)th entries of each side. The (i,j)th entry of the lefthand side is a_ij + (b_ij + c_ij). The (i,j)th entry of the righthand side is (a_ij + b_ij) + c_ij. By the associative law for real number addition, these are equal. The conclusion follows.

11. In order for AB to be defined, the number of columns of A must equal the number of rows of B. In order for BA to be defined, the number of columns of B must equal the number of rows of A. Thus for some positive integers m and n, it must be the case that A is an m × n matrix and B is an n × m matrix. Another way to say this is to say that A must have the same size as B^t (and/or vice versa).
13. Let us begin with the lefthand side and find its (i,j)th entry. First we need to find the entries of BC. By definition, the (q,j)th entry of BC is Σ_{r=1}^{k} b_qr·c_rj. (See Section 2.4 for the meaning of summation notation. This symbolism is a shorthand way of writing b_q1·c_1j + b_q2·c_2j + ··· + b_qk·c_kj.) Therefore the (i,j)th entry of A(BC) is Σ_{q=1}^{p} a_iq (Σ_{r=1}^{k} b_qr·c_rj). By distributing multiplication over addition (for real numbers), we can move the term a_iq inside the inner summation, to obtain Σ_{q=1}^{p} Σ_{r=1}^{k} a_iq·b_qr·c_rj. (We are also implicitly using associativity of multiplication of real numbers here, to avoid putting parentheses in the product a_iq·b_qr·c_rj.) A similar analysis with the righthand side shows that the (i,j)th entry there is equal to Σ_{r=1}^{k} (Σ_{q=1}^{p} a_iq·b_qr)·c_rj = Σ_{r=1}^{k} Σ_{q=1}^{p} a_iq·b_qr·c_rj. Now by the commutativity of addition, the order of summation (whether we sum over r first and then q, or over q first and then r) does not matter, so these two expressions are equal, and the proof is complete.

15. Let us begin by computing A^n for the first few values of n:
        A   = [ 1 1 ]    A^2 = [ 1 2 ]    A^3 = [ 1 3 ]    A^4 = [ 1 4 ]    A^5 = [ 1 5 ]
              [ 0 1 ]          [ 0 1 ]          [ 0 1 ]          [ 0 1 ]          [ 0 1 ]
    It seems clear from this pattern, then, that
        A^n = [ 1 n ]
              [ 0 1 ]
    (A proof of this fact could be given using mathematical induction, discussed in Section 4.1.)
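Both facts lend themselves to quick numeric spot-checks (a sketch; the 2 × 2 and 2 × 3 test matrices are made up for illustration, and `mat_mul` is the usual triple-loop product):

```python
def mat_mul(A, B):
    # (i,j)th entry is the sum of A[i][k] * B[k][j] over k.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Associativity (Exercise 13) on arbitrary small test data.
A = [[1, 2], [3, 4]]
B = [[0, 1, 2], [1, 0, 1]]
C = [[1, 0], [2, 1], [0, 3]]
assert mat_mul(A, mat_mul(B, C)) == mat_mul(mat_mul(A, B), C)

# The pattern of Exercise 15: powers of [[1, 1], [0, 1]].
M = [[1, 1], [0, 1]]
P = M
for n in range(2, 6):
    P = mat_mul(P, M)
    assert P == [[1, n], [0, 1]]
```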
17. a) The (i,j)th entry of (A + B)^t is the (j,i)th entry of A + B, namely a_ji + b_ji. On the other hand, the (i,j)th entry of A^t + B^t is the sum of the (i,j)th entries of A^t and B^t, which are the (j,i)th entries of A and B, again a_ji + b_ji. Hence (A + B)^t = A^t + B^t.
    b) The (i,j)th entry of (AB)^t is the (j,i)th entry of AB, namely Σ_{k=1}^{n} a_jk·b_ki. (See Section 2.4 for the meaning of summation notation. This symbolism is a shorthand way of writing a_j1·b_1i + a_j2·b_2i + ··· + a_jn·b_ni.) On the other hand, the (i,j)th entry of B^t·A^t is Σ_{k=1}^{n} b_ki·a_jk (since the (i,k)th entry of B^t is b_ki and the (k,j)th entry of A^t is a_jk). By the commutativity of multiplication of real numbers, these two values are the same, so the matrices are equal.

19. All we have to do is form the products A·A^-1 and A^-1·A, using the purported A^-1, and see that both of them are the 2 × 2 identity matrix. It is easy to see that the upper left and lower right entries in each case are (ad - bc)/(ad - bc) = 1, and the upper right and lower left entries are all 0.
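Again a quick spot-check (a sketch with made-up 2 × 2 data; `mat_mul` and `transpose` are hypothetical helpers, and the inverse uses the (1/(ad - bc))·[d -b; -c a] formula discussed above):

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [5, 2]]
# Exercise 17 b): (AB)^t equals B^t A^t.
assert transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))

# Exercise 19: the 2 x 2 inverse formula, in exact arithmetic.
a, b, c, d = 1, 2, 3, 4           # the entries of A above
det = F(a * d - b * c)
A_inv = [[d / det, -b / det], [-c / det, a / det]]
assert mat_mul(A, A_inv) == [[1, 0], [0, 1]]
assert mat_mul(A_inv, A) == [[1, 0], [0, 1]]
```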
21. We must show that A^n·(A^-1)^n = I, where I is the n × n identity matrix. Since matrix multiplication is associative, we can write this product as
        A^n·(A^-1)^n = A(A(···(A(A·A^-1)A^-1)···)A^-1)A^-1.
    By dropping each A·A^-1 = I from the center as it is obtained, this product reduces to I. Similarly ((A^-1)^n)·A^n = I. Therefore by definition (A^n)^-1 = (A^-1)^n. (A more formal proof requires mathematical induction; see Section 4.1.)

23. Here A is an m1 × m2 matrix and B is an m2 × m3 matrix. In order to compute the (i,j)th entry of the product AB, we need to compute the product a_ik·b_kj for each k from 1 to m2, requiring m2 multiplications. Since there are m1·m3 such pairs (i,j), we need a total of m1·m2·m3 multiplications.

25. There are five different ways to perform this multiplication: (A1A2)(A3A4), ((A1A2)A3)A4, A1(A2(A3A4)), (A1(A2A3))A4, A1((A2A3)A4). We can use the result of Exercise 23 to find the numbers of multiplications needed in these five cases.
For example, in the first case we need 10·2·5 = 100 multiplications to compute the 10 × 5 matrix A1A2, then 5·20·3 = 300 multiplications to compute the 5 × 3 matrix A3A4, and then 10·5·3 = 150 multiplications to multiply these two matrices together to obtain the final answer. This gives a total of 100 + 300 + 150 = 550 multiplications. Similar calculations for the other four cases yield 10·2·5 + 10·5·20 + 10·20·3 = 1700, then 5·20·3 + 2·5·3 + 10·2·3 = 390, then 2·5·20 + 10·2·20 + 10·20·3 = 1200, and finally 2·5·20 + 2·20·3 + 10·2·3 = 380, respectively. The winner is therefore A1((A2A3)A4), requiring 380 multiplications. Note that the worst arrangement requires 1700 multiplications; it will take over four times as long.
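The five totals above are just sums of m1·m2·m3 terms, so they can be tabulated mechanically (a sketch, not from the text):

```python
# A1 is 10 x 2, A2 is 2 x 5, A3 is 5 x 20, A4 is 20 x 3 (Exercise 25).
# Each pairwise product of an m1 x m2 by an m2 x m3 matrix costs m1*m2*m3
# multiplications (Exercise 23); a parenthesization costs the sum of its terms.
def cost(*terms):
    return sum(m1 * m2 * m3 for m1, m2, m3 in terms)

c1 = cost((10, 2, 5), (5, 20, 3), (10, 5, 3))     # (A1 A2)(A3 A4) -> 550
c2 = cost((10, 2, 5), (10, 5, 20), (10, 20, 3))   # ((A1 A2) A3) A4 -> 1700
c3 = cost((5, 20, 3), (2, 5, 3), (10, 2, 3))      # A1 (A2 (A3 A4)) -> 390
c4 = cost((2, 5, 20), (10, 2, 20), (10, 20, 3))   # (A1 (A2 A3)) A4 -> 1200
c5 = cost((2, 5, 20), (2, 20, 3), (10, 2, 3))     # A1 ((A2 A3) A4) -> 380, cheapest
```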
27. Using the idea in Exercise 26, we see that the given system can be expressed as AX = B, where A is the coefficient matrix, X is an n × 1 matrix with x_i the entry in its ith row, and B is the n × 1 matrix of righthand sides. Specifically, we have
        [  7 -8  5 ]   [ x1 ]   [  5 ]
        [ -4  5 -3 ] · [ x2 ] = [ -3 ]
        [  1 -1  1 ]   [ x3 ]   [  0 ]
    If we can find the inverse A^-1, then we can find X simply by computing A^-1·B. But Exercise 18 tells us that
        A^-1 = [  2  3 -1 ]
               [  1  2  1 ]
               [ -1 -1  3 ]
    Therefore
        X = [  2  3 -1 ]   [  5 ]   [  1 ]
            [  1  2  1 ] · [ -3 ] = [ -1 ]
            [ -1 -1  3 ]   [  0 ]   [ -2 ]
    We should plug in x1 = 1, x2 = -1, and x3 = -2 to see that these do indeed form the solution.
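The computation can be replayed in code; since this particular inverse happens to be integral, plain integer arithmetic suffices (a sketch, not part of the printed solutions):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[7, -8, 5],
     [-4, 5, -3],
     [1, -1, 1]]
A_inv = [[2, 3, -1],
         [1, 2, 1],
         [-1, -1, 3]]
B = [[5], [-3], [0]]

# First confirm that A_inv really is the inverse of A ...
assert mat_mul(A, A_inv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# ... then compute X = A_inv * B.
X = mat_mul(A_inv, B)          # x1 = 1, x2 = -1, x3 = -2
```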
29. These routine exercises simply require application of the appropriate definitions. Parts (a) and (b) are entrywise operations, whereas the operation ⊙ in part (c) is similar to matrix multiplication (the (i,j)th entry of A ⊙ B depends on the ith row of A and the jth column of B).
        a) A ∨ B = [ 1 1 1 ]    b) A ∧ B = [ 0 0 1 ]    c) A ⊙ B = [ 1 1 1 ]
                   [ 1 1 1 ]               [ 1 0 0 ]               [ 1 1 1 ]
                   [ 1 0 1 ]               [ 0 0 1 ]               [ 1 0 1 ]
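The three operations can be sketched in a few lines of Python. Note that the matrices A and B below are reconstructed to be consistent with the answers shown; the exercise statement itself is not reproduced in this guide:

```python
# Zero-one matrices: join (OR), meet (AND), and Boolean product.
# A and B are a pair consistent with the answers above (an assumption).
A = [[1, 0, 1], [1, 1, 0], [0, 0, 1]]
B = [[0, 1, 1], [1, 0, 1], [1, 0, 1]]
n = len(A)

join = [[A[i][j] | B[i][j] for j in range(n)] for i in range(n)]   # A v B
meet = [[A[i][j] & B[i][j] for j in range(n)] for i in range(n)]   # A ^ B
# Boolean product: entry (i,j) is 1 iff some k has A[i][k] = B[k][j] = 1.
odot = [[int(any(A[i][k] & B[k][j] for k in range(n)))
         for j in range(n)] for i in range(n)]
```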
31. Note that A^[2] means A ⊙ A, and A^[3] means A ⊙ A ⊙ A. We just apply the definition.
        a) A^[2] = [ 1 0 0 ]    b) A^[3] = [ 1 0 0 ]    c) A ∨ A^[2] ∨ A^[3] = [ 1 0 0 ]
                   [ 1 1 0 ]               [ 1 0 1 ]                           [ 1 1 1 ]
                   [ 1 0 1 ]               [ 1 1 0 ]                           [ 1 1 1 ]
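Boolean powers can be computed the same way (again a sketch; this A is one matrix consistent with the answers above, since the exercise statement is not reproduced here):

```python
# Boolean product of zero-one matrices.
def bool_prod(A, B):
    n = len(A)
    return [[int(any(A[i][k] & B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

# A matrix consistent with the Exercise 31 answers (an assumption).
A = [[1, 0, 0], [1, 0, 1], [0, 1, 0]]
A2 = bool_prod(A, A)    # A^[2]
A3 = bool_prod(A2, A)   # A^[3]

n = len(A)
combined = [[A[i][j] | A2[i][j] | A3[i][j] for j in range(n)]
            for i in range(n)]   # A v A^[2] v A^[3]
```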
33. These are immediate from the commutativity of the corresponding logical operations on variables.
        a) A ∨ B = [a_ij ∨ b_ij] = [b_ij ∨ a_ij] = B ∨ A
        b) B ∧ A = [b_ij ∧ a_ij] = [a_ij ∧ b_ij] = A ∧ B

35. These are immediate from the distributivity of the corresponding logical operations on variables.
        a) A ∨ (B ∧ C) = [a_ij ∨ (b_ij ∧ c_ij)] = [(a_ij ∨ b_ij) ∧ (a_ij ∨ c_ij)] = (A ∨ B) ∧ (A ∨ C)
        b) A ∧ (B ∨ C) = [a_ij ∧ (b_ij ∨ c_ij)] = [(a_ij ∧ b_ij) ∨ (a_ij ∧ c_ij)] = (A ∧ B) ∨ (A ∧ C)

37. The proof is identical to the proof in Exercise 13, except that real number multiplication is replaced by ∧ and real number addition is replaced by ∨. Briefly, in symbols,
        A ⊙ (B ⊙ C) = [ ∨_{q=1}^{p} ( a_iq ∧ ( ∨_{r=1}^{k} b_qr ∧ c_rj ) ) ]
                    = [ ∨_{q=1}^{p} ∨_{r=1}^{k} ( a_iq ∧ b_qr ∧ c_rj ) ]
                    = [ ∨_{r=1}^{k} ∨_{q=1}^{p} ( a_iq ∧ b_qr ∧ c_rj ) ]
                    = [ ∨_{r=1}^{k} ( ( ∨_{q=1}^{p} a_iq ∧ b_qr ) ∧ c_rj ) ] = (A ⊙ B) ⊙ C.