5.4 LINEAR INDEPENDENCE, BASES, AND COORDINATES

One of the central ideas of Chapters 1 and 3 is linear independence. As we will see, this concept generalizes directly to vector spaces. With the concepts of linear independence and spanning sets, it is easy to extend the idea of a basis to our vector-space setting. The notion of a basis is one of the most fundamental concepts in the study of vector spaces. For example, in certain vector spaces a basis can be used to produce a coordinate system for the space. As a consequence, a real vector space with a basis of n vectors behaves essentially like R^n. Moreover, this coordinate system sometimes permits a geometric perspective in an otherwise nongeometric setting.

Linear Independence

We begin by restating Definition 11 of Section 1.7 in a general vector-space setting.

DEFINITION 4  Let V be a vector space, and let {v1, v2, ..., vp} be a set of vectors in V. This set is linearly dependent if there are scalars a1, a2, ..., ap, not all of which are zero, such that

    a1v1 + a2v2 + ··· + apvp = θ.    (1)

The set {v1, v2, ..., vp} is linearly independent if it is not linearly dependent; that is, the only scalars for which Eq. (1) holds are the scalars a1 = a2 = ··· = ap = 0.

Note that as a consequence of property 3 of Theorem 1 in Section 5.2, the vector equation (1) in Definition 4 always has the trivial solution a1 = a2 = ··· = ap = 0. Thus the set {v1, v2, ..., vp} is linearly independent if the trivial solution is the only solution to Eq. (1). If another solution exists, then the set is linearly dependent.

As before, it is easy to prove that a set {v1, v2, ..., vp} is linearly dependent if and only if some vi is a linear combination of the other p − 1 vectors in the set. The only real distinction between linear independence/dependence in R^n and in a general vector space is that we cannot always test for dependence by solving a homogeneous system of equations. That is, in a general vector space we may have to go directly to the defining equation

    a1v1 + a2v2 + ··· + apvp = θ

and attempt to determine whether there are nontrivial solutions. Examples 2 and 3
illustrate the point.

EXAMPLE 1  Let V be the vector space of (2 × 2) matrices, and let W be the subspace

    W = {A: A = [0 a12; a21 0], a12 and a21 any real scalars}.

Define matrices B1, B2, and B3 in W by

    B1 = [0 2; 1 0],  B2 = [0 1; 0 0],  and  B3 = [0 2; 3 0].

Show that the set {B1, B2, B3} is linearly dependent, and express B3 as a linear combination of B1 and B2. Show that {B1, B2} is a linearly independent set.

Solution  According to Definition 4, the set {B1, B2, B3} is linearly dependent provided that there exist nontrivial solutions to the equation

    a1B1 + a2B2 + a3B3 = θ,    (2)

where θ is the zero element in V [that is, θ is the (2 × 2) zero matrix]. Writing Eq. (2) in detail, we see that a1, a2, a3 are solutions of Eq. (2) if

    [0 2a1; a1 0] + [0 a2; 0 0] + [0 2a3; 3a3 0] = [0 0; 0 0].

With corresponding entries equated, a1, a2, a3 must satisfy

    2a1 + a2 + 2a3 = 0  and  a1 + 3a3 = 0.

This (2 × 3) homogeneous system has nontrivial solutions by Theorem 4 of Section 1.3, and one such solution is a1 = −3, a2 = 4, a3 = 1. In particular,

    −3B1 + 4B2 + B3 = θ;    (3)

so the set {B1, B2, B3} is a linearly dependent set of vectors in W. It is an immediate consequence of Eq. (3) that B3 = 3B1 − 4B2.

To see that the set {B1, B2} is linearly independent, let a1 and a2 be scalars such that a1B1 + a2B2 = θ. Then we must have

    2a1 + a2 = 0  and  a1 = 0.

Hence a1 = 0 and a2 = 0; so if a1B1 + a2B2 = θ, then a1 = a2 = 0. Thus {B1, B2} is a linearly independent set of vectors in W.
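The dependence relation in Eq. (3) is easy to verify by direct arithmetic. The short Python sketch below is our illustration, not part of the text; it stores the matrices as nested lists and checks entrywise that −3B1 + 4B2 + B3 is the zero matrix (the helper name `lincomb` is ours).

```python
# Entrywise check of the dependence relation in Eq. (3) of Example 1.
B1 = [[0, 2], [1, 0]]
B2 = [[0, 1], [0, 0]]
B3 = [[0, 2], [3, 0]]

def lincomb(coeffs, mats):
    """Entrywise linear combination of same-shape matrices."""
    rows, cols = len(mats[0]), len(mats[0][0])
    return [[sum(c * m[i][j] for c, m in zip(coeffs, mats))
             for j in range(cols)] for i in range(rows)]

# The nontrivial solution a1 = -3, a2 = 4, a3 = 1 found above:
print(lincomb([-3, 4, 1], [B1, B2, B3]))  # [[0, 0], [0, 0]]
```

Rearranging the same relation gives B3 = 3B1 − 4B2, which `lincomb([3, -4, 0], ...)` reproduces as well.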
Establishing linear independence/dependence in a vector space of functions such as P_n or C[a, b] may sometimes require techniques from calculus. We illustrate one such technique in the following example.

EXAMPLE 2  Show that {1, x, x²} is a linearly independent set in P2.

Solution  Suppose that a0, a1, a2 are any scalars that satisfy the defining equation

    a0 + a1x + a2x² = θ(x),    (4)

where θ(x) is the zero polynomial. If Eq. (4) is to be an identity holding for all values of x, then [since θ′(x) = θ(x)] we can differentiate both sides of Eq. (4) to obtain

    a1 + 2a2x = θ(x).    (5)

Similarly, differentiating both sides of Eq. (5), we obtain

    2a2 = θ(x).    (6)

From Eq. (6) we must have a2 = 0. If a2 = 0, then Eq. (5) requires a1 = 0; hence in Eq. (4), a0 = 0 as well. Therefore, the only scalars that satisfy Eq. (4) are a0 = a1 = a2 = 0, and thus {1, x, x²} is linearly independent in P2. (Also see the material on Wronskians in Section 6.5.)

The following example illustrates another procedure for showing that a set of functions is linearly independent.

EXAMPLE 3  Show that {√x, 1/x, x²} is a linearly independent subset of C[1, 10].
Solution  If the equation

    a1√x + a2(1/x) + a3x² = 0    (7)

holds for all x, 1 ≤ x ≤ 10, then it must hold for any three values of x in the interval. Successively letting x = 1, x = 4, and x = 9 in Eq. (7) yields the system of equations

    a1 + a2 + a3 = 0
    2a1 + (1/4)a2 + 16a3 = 0    (8)
    3a1 + (1/9)a2 + 81a3 = 0.

It is easily shown that the trivial solution a1 = a2 = a3 = 0 is the unique solution for system (8). It follows that the set {√x, 1/x, x²} is linearly independent.

Note that a nontrivial solution for system (8) would have yielded no information regarding the linear independence/dependence of the given set of functions. We could have concluded only that Eq. (7) holds when x = 1, x = 4, or x = 9.
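The claim that system (8) has only the trivial solution can be confirmed with a determinant computation. The Python sketch below is our own check, not the text's; it uses exact rational arithmetic, samples the three functions at x = 1, 4, 9 (all perfect squares, so √x is exact), and shows the coefficient matrix of system (8) is nonsingular.

```python
# Coefficient matrix of system (8): rows are [sqrt(x), 1/x, x^2] at x = 1, 4, 9.
from fractions import Fraction as F

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

rows = [[F(1), F(1),    F(1)],
        [F(2), F(1, 4), F(16)],
        [F(3), F(1, 9), F(81)]]
print(det3(rows))  # -1729/18, nonzero => only the trivial solution
```

A nonzero determinant means the homogeneous system (8) has only a1 = a2 = a3 = 0, exactly as asserted above.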
Vector-Space Bases

It is now straightforward to combine the concepts of linear independence and spanning sets to define a basis for a vector space.

DEFINITION 5  Let V be a vector space, and let B = {v1, v2, ..., vp} be a spanning set for V. If B is linearly independent, then B is a basis for V.

Thus as before, a basis for V is a linearly independent spanning set for V. (Again we note the implicit assumption that a basis contains only a finite number of vectors.)

There is often a "natural" basis for a vector space. We have seen in Chapter 3 that the set of unit vectors {e1, e2, ..., en} in R^n is a basis for R^n. In the preceding section we noted that the set {1, x, x²} is a spanning set for P2. Example 2 showed further that {1, x, x²} is linearly independent and hence is a basis for P2. More generally, the set {1, x, ..., x^n} is a natural basis for P_n.

Similarly, the matrices

    E11 = [1 0; 0 0],  E12 = [0 1; 0 0],  E21 = [0 0; 1 0],  and  E22 = [0 0; 0 1]

constitute a basis for the vector space of all (2 × 2) real matrices (see Exercise 11). In general, the set of (m × n) matrices {Eij: 1 ≤ i ≤ m, 1 ≤ j ≤ n} defined in Section 5.3 is a natural basis for the vector space of all (m × n) real matrices.
Examples 5, 6, and 7 in Section 5.3 demonstrated a procedure for obtaining a natural spanning set for a subspace W when an algebraic specification for W is given. The spanning set obtained in this manner is often a basis for W. The following example provides another illustration.

EXAMPLE 4  Let V be the vector space of all (2 × 2) real matrices, and let W be the subspace defined by

    W = {A: A = [a  a+b; a−b  b], a and b any real numbers}.

Exhibit a basis for W.

Solution  In the specification for W, a and b are unconstrained variables. Assigning values a = 1, b = 0 and then a = 0, b = 1 yields the matrices

    B1 = [1 1; 1 0]  and  B2 = [0 1; −1 1]

in W. Since

    [a  a+b; a−b  b] = a[1 1; 1 0] + b[0 1; −1 1],

the set {B1, B2} is clearly a spanning set for W. The equation

    c1B1 + c2B2 = θ

(where θ is the (2 × 2) zero matrix) is equivalent to

    [c1  c1+c2; c1−c2  c2] = [0 0; 0 0].

Equating entries immediately yields c1 = c2 = 0; so the set {B1, B2} is linearly independent and hence is a basis for W.
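The spanning claim in Example 4 can be spot-checked numerically. The following Python sketch is ours, not the text's; the helper names `in_W` and `comb` are hypothetical, and the loop simply confirms that a few matrices of the required form equal a·B1 + b·B2.

```python
# Spot check that matrices [[a, a+b], [a-b, b]] equal a*B1 + b*B2.
B1 = [[1, 1], [1, 0]]
B2 = [[0, 1], [-1, 1]]

def in_W(a, b):
    """A typical element of W from Example 4."""
    return [[a, a + b], [a - b, b]]

def comb(a, b):
    """The linear combination a*B1 + b*B2."""
    return [[a * B1[i][j] + b * B2[i][j] for j in range(2)] for i in range(2)]

for a, b in [(1, 0), (0, 1), (3, -2), (7, 5)]:
    assert in_W(a, b) == comb(a, b)
print("span check passed")
```

Of course a finite check is not a proof; the identity holds symbolically because it holds entry by entry, which is what the displayed equation above establishes.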
Coordinate Vectors

As we noted in Chapter 3, a basis is a minimal spanning set; as such, a basis contains no redundant information. This lack of redundancy is an important feature of a basis in the general vector-space setting and allows every vector to be represented uniquely in terms of the basis (see Theorem 4). We cannot make such an assertion of unique representation about a spanning set that is linearly dependent; in fact, in this case, the representation is never unique.

THEOREM 4  Let V be a vector space, and let B = {v1, v2, ..., vp} be a basis for V. For each vector w in V, there exists a unique set of scalars w1, w2, ..., wp such that

    w = w1v1 + w2v2 + ··· + wpvp.

Proof  Let w be a vector in V, and suppose that w is represented in two ways as

    w = w1v1 + w2v2 + ··· + wpvp

and

    w = u1v1 + u2v2 + ··· + upvp.

Subtracting, we obtain

    θ = (w1 − u1)v1 + (w2 − u2)v2 + ··· + (wp − up)vp.

Therefore, since {v1, v2, ..., vp} is a linearly independent set, it follows that w1 − u1 = 0, w2 − u2 = 0, ..., wp − up = 0. That is, a vector w cannot be represented in two different ways in terms of a basis B.

Now, let V be a vector space with a basis B = {v1, v2, ..., vp}. Given that each vector w in V has a unique representation in terms of B as

    w = w1v1 + w2v2 + ··· + wpvp,    (9)

it follows that the scalars w1, w2, ..., wp serve to characterize w completely in terms
of the basis B. In particular, we can identify w unambiguously with the vector [w]_B in R^p, where

    [w]_B = (w1, w2, ..., wp)^T.

We will call the unique scalars w1, w2, ..., wp in Eq. (9) the coordinates of w with respect to the basis B, and we will call the vector [w]_B in R^p the coordinate vector of w with respect to B. This idea is a useful one; for example, we will show that a set of vectors {u1, u2, ..., ur} in V is linearly independent if and only if the coordinate vectors [u1]_B, [u2]_B, ..., [ur]_B are linearly independent in R^p. Since we know how to determine whether vectors in R^p are linearly independent or not, we can use the idea of coordinates to reduce a problem of linear independence/dependence in a general vector space to an equivalent problem in R^p, which we can work. Finally, we note that the subscript B is necessary when we write [w]_B, since the coordinate vector for w changes when we change the basis.
EXAMPLE 5  Let V be the vector space of all real (2 × 2) matrices. Let B = {E11, E12, E21, E22} and Q = {E11, E21, E12, E22}, where

    E11 = [1 0; 0 0],  E12 = [0 1; 0 0],  E21 = [0 0; 1 0],  and  E22 = [0 0; 0 1].

Let the matrix A be defined by

    A = [2 −1; −3 4].

Find [A]_B and [A]_Q.

Solution  We have already noted that B is the natural basis for V. Since Q contains the same vectors as B, but in a different order, Q is also a basis for V. It is easy to see that

    A = 2E11 − E12 − 3E21 + 4E22,  so  [A]_B = (2, −1, −3, 4)^T.

Similarly,

    A = 2E11 − 3E21 − E12 + 4E22,  so  [A]_Q = (2, −3, −1, 4)^T.
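The bookkeeping in Example 5 can be expressed compactly in code. The sketch below is ours, not the text's; it records A by its (row, column) entries and reads off the coordinate vector for each ordering of the basis matrices.

```python
# Coordinate vectors of A depend on the ordering of {E11, E12, E21, E22}.
A = {(1, 1): 2, (1, 2): -1, (2, 1): -3, (2, 2): 4}
B_order = [(1, 1), (1, 2), (2, 1), (2, 2)]   # B = {E11, E12, E21, E22}
Q_order = [(1, 1), (2, 1), (1, 2), (2, 2)]   # Q = {E11, E21, E12, E22}
print([A[ij] for ij in B_order])  # [2, -1, -3, 4] = [A]_B
print([A[ij] for ij in Q_order])  # [2, -3, -1, 4] = [A]_Q
```

The two lists contain the same entries in different positions, which is precisely the point of the example.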
mined the ordering of the components of the coordinate vectors. A basis with such an
implicitly ﬁxed ordering is usually called an ordered basis. Although we do not intend
to dwell on this point, we do have to be careful to work with a ﬁxed ordering in a basis. if V is a vector space with (ordered) basis B = {v}, vz. . . . , v,,}, then the
correspondence V —> ["13 provides an identiﬁcation between vectors in V and elements of R”. For instance, the
preceding example identiﬁed a (2 x 2) matrix with a vector in R4. The following lemma 5.4 Linear Independence, Bases, and Coordinates 381 lists some of the properties of this correspondence. (The lemma hints at the idea of an
isomorphism that will be developed in detail later.) ‘  LEMMA Let V be a vector space that has a basis 8 = {v., v2. . . . , vp}. If u and v are vectors in V and if c is a scalar, then the following hold:
in + v13 = [ulg + [via
and
[culs = dull;
Proof Suppose that u and v are expressed in terms of the basis vectors in B as u =01V+02V2++(1pvp and
V = blv. + bzvz + ' ' ' + prp.
Then clearly
u + v = (a. + b;)v1+(a2 +b2)vz + m+ (a,, + bp)v,,
and
cu = (ca.)v; + (ca2)vz +  ~  + (cap)v,,.
Therefore,
at bl
. b
[n13 = “,2 . MB = F ,
(1,, (2,,
a. + b[ ca;
[u + v];; = a2 b2 , and [cu]: caz
up + bp cap
We can now easily see that [u + v]3 = [1111; + [ﬂy and [cu]g = c[u]B. ﬂ The following example illustrates the preceding lemma. l EXAMPLE 6 In 732, let p(x) = 3 — 2x + x2 and q(x) = —2 + 3x — 4x2. Show that
EXAMPLE 6  In P2, let p(x) = 3 − 2x + x² and q(x) = −2 + 3x − 4x². Show that

    [p(x) + q(x)]_B = [p(x)]_B + [q(x)]_B  and  [2p(x)]_B = 2[p(x)]_B,

where B is the natural basis for P2: B = {1, x, x²}.

Solution  The coordinate vectors for p(x) and q(x) are

    [p(x)]_B = (3, −2, 1)^T  and  [q(x)]_B = (−2, 3, −4)^T.

Furthermore, p(x) + q(x) = 1 + x − 3x² and 2p(x) = 6 − 4x + 2x². Thus

    [p(x) + q(x)]_B = (1, 1, −3)^T  and  [2p(x)]_B = (6, −4, 2)^T.

Therefore, [p(x) + q(x)]_B = [p(x)]_B + [q(x)]_B and [2p(x)]_B = 2[p(x)]_B.
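With respect to the natural basis {1, x, x²}, coordinate vectors of polynomials are just coefficient lists, so the lemma can be illustrated in a few lines. This Python sketch is ours, not the text's.

```python
# The coordinate map for P2 with B = {1, x, x^2} is componentwise on
# coefficient lists, so it respects sums and scalar multiples.
p = [3, -2, 1]    # p(x) = 3 - 2x + x^2
q = [-2, 3, -4]   # q(x) = -2 + 3x - 4x^2
add = [pi + qi for pi, qi in zip(p, q)]
scale = [2 * pi for pi in p]
print(add)    # [1, 1, -3]  -> p(x) + q(x) = 1 + x - 3x^2
print(scale)  # [6, -4, 2]  -> 2p(x) = 6 - 4x + 2x^2
```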
Suppose that the vector space V has basis B = {v1, v2, ..., vp}, and let {u1, u2, ..., um} be a subset of V. The two properties in the preceding lemma can easily be combined and extended to give

    [c1u1 + c2u2 + ··· + cmum]_B = c1[u1]_B + c2[u2]_B + ··· + cm[um]_B.    (10)

This observation will be useful in proving the next theorem.

THEOREM 5  Suppose that V is a vector space with a basis B = {v1, v2, ..., vp}. Let S = {u1, u2, ..., um} be a subset of V, and let T = {[u1]_B, [u2]_B, ..., [um]_B}.

1. A vector u in V is in Sp(S) if and only if [u]_B is in Sp(T).
2. The set S is linearly independent in V if and only if the set T is linearly independent in R^p.

Proof  The vector equation

    u = x1u1 + x2u2 + ··· + xmum    (11)

in V is equivalent to the equation

    [u]_B = [x1u1 + x2u2 + ··· + xmum]_B    (12)

in R^p. It follows from Eq. (10) that Eq. (12) is equivalent to

    [u]_B = x1[u1]_B + x2[u2]_B + ··· + xm[um]_B.    (13)

Therefore, the vector equation (11) in V is equivalent to the vector equation (13) in R^p. In particular, Eq. (11) has a solution x1 = c1, x2 = c2, ..., xm = cm if and only if Eq. (13) has the same solution. Thus u is in Sp(S) if and only if [u]_B is in Sp(T).

To avoid confusion in the proof of property 2, let θ_V denote the zero vector for V and let 0_p denote the p-dimensional zero vector in R^p. Then [θ_V]_B = 0_p. Thus setting u = θ_V in Eq. (11) and Eq. (13) implies that the vector equations

    θ_V = x1u1 + x2u2 + ··· + xmum    (14)

and

    0_p = x1[u1]_B + x2[u2]_B + ··· + xm[um]_B    (15)

have the same solutions. In particular, Eq. (14) has only the trivial solution if and only if Eq. (15) has only the trivial solution; that is, S is a linearly independent set in V if and only if T is linearly independent in R^p.

An immediate corollary to Theorem 5 is as follows.
COROLLARY  Let V be a vector space with a basis B = {v1, v2, ..., vp}. Let S = {u1, u2, ..., um} be a subset of V, and let T = {[u1]_B, [u2]_B, ..., [um]_B}. Then S is a basis for V if and only if T is a basis for R^p.

Proof  By Theorem 5, S is both linearly independent and a spanning set for V if and only if T is both linearly independent and a spanning set for R^p.

Theorem 5 and its corollary allow us to use the techniques developed in Chapter 3
to solve analogous problems in vector spaces other than R^n. The next two examples provide illustrations.

EXAMPLE 7  Use the corollary to Theorem 5 to show that the set {1, 1 + x, 1 + 2x + x²} is a basis for P2.

Solution  Let B be the standard basis for P2: B = {1, x, x²}. The coordinate vectors of 1, 1 + x, and 1 + 2x + x² are

    [1]_B = (1, 0, 0)^T,  [1 + x]_B = (1, 1, 0)^T,  and  [1 + 2x + x²]_B = (1, 2, 1)^T.

Clearly the coordinate vectors [1]_B, [1 + x]_B, and [1 + 2x + x²]_B are linearly independent in R³. Since R³ has dimension 3, the coordinate vectors constitute a basis for R³. It now follows that {1, 1 + x, 1 + 2x + x²} is a basis for P2.
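One way to see "clearly" in Example 7 is to note that the three coordinate vectors form an upper-triangular matrix with nonzero diagonal. The sketch below is our illustration, not the text's.

```python
# Columns are the coordinate vectors of 1, 1 + x, and 1 + 2x + x^2.
cols = [[1, 0, 0], [1, 1, 0], [1, 2, 1]]
# The matrix with these columns is upper triangular, so its determinant
# is the product of the diagonal entries.
det = cols[0][0] * cols[1][1] * cols[2][2]
print(det)  # 1, nonzero => the coordinate vectors are linearly independent
```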
EXAMPLE 8  Let V be the vector space of all (2 × 2) matrices, and let the subset S of V be defined by S = {A1, A2, A3, A4}, where

    A1 = [1 2; −1 3],  A2 = [0 −1; 1 4],  A3 = [−1 0; 1 −10],  and  A4 = [3 7; −2 6].

Use the corollary to Theorem 5 and the techniques of Section 3.4 to obtain a basis for Sp(S).

Solution  If B is the natural basis for V, B = {E11, E12, E21, E22}, then

    [A1]_B = (1, 2, −1, 3)^T,  [A2]_B = (0, −1, 1, 4)^T,
    [A3]_B = (−1, 0, 1, −10)^T,  and  [A4]_B = (3, 7, −2, 6)^T.

Let T = {[A1]_B, [A2]_B, [A3]_B, [A4]_B}. Several techniques for obtaining a basis for Sp(T) were illustrated in Section 3.4. For example, using the method demonstrated in Example 7 of Section 3.4, we form the matrix

    C = [1 0 −1 3; 2 −1 0 7; −1 1 1 −2; 3 4 −10 6].

The matrix C^T can be reduced to the matrix

    D^T = [1 2 −1 3; 0 −1 1 4; 0 0 2 1; 0 0 0 0].

Thus

    D = [1 0 0 0; 2 −1 0 0; −1 1 2 0; 3 4 1 0],

and the nonzero columns of D constitute a basis for Sp(T). Denote the nonzero columns of D by w1, w2, and w3, respectively. Thus

    w1 = (1, 2, −1, 3)^T,  w2 = (0, −1, 1, 4)^T,  and  w3 = (0, 0, 2, 1)^T,

and {w1, w2, w3} is a basis for Sp(T). If B1, B2, and B3 are (2 × 2) matrices such that [B1]_B = w1, [B2]_B = w2, and [B3]_B = w3, then it follows from Theorem 5 that {B1, B2, B3} is a basis for Sp(S). If B1 = E11 + 2E12 − E21 + 3E22, then clearly [B1]_B = w1. B2 and B3 are obtained in the same fashion, and

    B1 = [1 2; −1 3],  B2 = [0 −1; 1 4],  and  B3 = [0 0; 2 1].
Examples 7 and 8 illustrate an important point. Although Theorem 5 shows that questions regarding the span or the linear dependence/independence of a subset of V can be translated to an equivalent problem in R^p, we do need one basis for V as a point of reference. For example, in P2, once we know that B = {1, x, x²} is a basis, we can use Theorem 5 to pass from a problem in P2 to an analogous problem in R³. In order to obtain the first basis B, however, we cannot use Theorem 5.

EXAMPLE 9  In P4, consider the set of vectors S = {p1, p2, p3, p4, p5}, where p1(x) = x⁴ + 3x³ +
2x + 4, p2(x) = x³ − x² + 5x + 1, p3(x) = x⁴ + x + 3, p4(x) = x⁴ + x³ − x + 2, and p5(x) = x⁴ + x². Is S a basis for P4?

Solution  Let B denote the standard basis for P4: B = {1, x, x², x³, x⁴}. By the corollary to Theorem 5, S is a basis for P4 if and only if T is a basis for R⁵, where T = {[p1]_B, [p2]_B, [p3]_B, [p4]_B, [p5]_B}. In particular, the coordinate vectors in T are

    [p1]_B = (4, 2, 0, 3, 1)^T,  [p2]_B = (1, 5, −1, 1, 0)^T,  [p3]_B = (3, 1, 0, 0, 1)^T,
    [p4]_B = (2, −1, 0, 1, 1)^T,  and  [p5]_B = (0, 0, 1, 0, 1)^T.

Since R⁵ has dimension 5 and T contains 5 vectors, T will be a basis for R⁵ if T is a linearly independent set. To check whether T is linearly independent, we form the matrix A whose columns are the vectors in T and use MATLAB to reduce A to echelon form. As can be seen from the results in Fig. 5.4, the columns of A are linearly independent. Hence, T is a basis for R⁵. Therefore, S is a basis for P4.

    >> rref(A)

Figure 5.4  MATLAB was used for Example 9 to determine whether the columns of A are linearly independent. Since A is row equivalent to the identity, its columns are linearly independent.
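For readers without MATLAB, the same rank check can be sketched in Python; this is our illustration, not the text's, and the helper name `rank` is ours.

```python
# Rank of the matrix whose columns are the coordinate vectors of p1, ..., p5.
from fractions import Fraction as F

cols = [[4, 2, 0, 3, 1], [1, 5, -1, 1, 0], [3, 1, 0, 0, 1],
        [2, -1, 0, 1, 1], [0, 0, 1, 0, 1]]
# build the matrix A (rows indexed by the basis {1, x, x^2, x^3, x^4})
m = [[F(cols[j][i]) for j in range(5)] for i in range(5)]

def rank(m):
    """Count pivots found by forward elimination over exact rationals."""
    m = [row[:] for row in m]
    r = 0
    for col in range(len(m[0])):
        pivot = next((k for k in range(r, len(m)) if m[k][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for k in range(r + 1, len(m)):
            factor = m[k][col] / m[r][col]
            m[k] = [a - factor * b for a, b in zip(m[k], m[r])]
        r += 1
    return r

print(rank(m))  # 5 => T is linearly independent, so S is a basis for P4
```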
EXERCISES

In Exercises 1–4, W is a subspace of the vector space V of all (2 × 2) matrices. A matrix A in W is written as

    A = [a b; c d].

In each case exhibit a basis for W.

1. W = {A: a + b + c + d = 0}
2. W = {A: a = −d, b = 2d, c = 3d}
3. W = {A: a = 0}
4. W = {A: b = a − c, d = 2a + c}

In Exercises 5–8, W is a subspace of P2. In each case exhibit a basis for W.

5. W = {p(x) = a0 + a1x + a2x²: a2 = a0 − 2a1}
6. W = {p(x) = a0 + a1x + a2x²: a0 = 3a2, a1 = −a2}
7. W = {p(x) = a0 + a1x + a2x²: p(0) = 0}
8. W = {p(x) = a0 + a1x + a2x²: p(1) = p′(1) = 0}
9. Find a basis for the subspace V of P4, where V = {p(x) in P4: p(0) = 0, p′(1) = 0, p″(−1) = 0}.
10. Prove that the set of all real (2 × 2) symmetric matrices is a subspace of the vector space of all real (2 × 2) matrices. Find a basis for this subspace (see Exercise 26 of Section 5.3).
11. Let V be the vector space of all (2 × 2) real matrices. Show that B = {E11, E12, E21, E22} (see Example 5) is a basis for V.
12. With respect to the basis B = {1, x, x²} for P2, find the coordinate vector for each of the following.
a) p(x) = x² − x + 1
b) p(x) = x² + 4x − 1
c) p(x) = 2x + 5
13. With respect to the basis B = {E11, E12, E21, E22} for the vector space V of all (2 × 2) matrices, find the coordinate vector for each of the following.
a) A = [2 −1; 3 2]
b) A = [1 0; −1 1]
c) A = [2 3; 0 0]
14. Prove that {1, x, x², ..., x^n} is a linearly independent set in P_n by supposing that p(x) = θ(x), where p(x) = a0 + a1x + ··· + anx^n. Next, take successive derivatives as in Example 2.

In Exercises 15–17, use the basis B of Exercise 11 and property 2 of Theorem 5 to test for linear independence in the vector space of (2 × 2) matrices.

15. A1 = [2 1; 2 1], A2 = [3 0; 0 2], A3 = [1 1; 2 1]
16. A1 = [1 3; 2 1], A2 = [4 −2; 0 6], A3 = [6 4; 2 2]
17. A1 = [1 4; 1 3], A2 = [⋯; 0 5], A3 = [⋯]

In Exercises 18–21, use Exercise 14 and property 2 of Theorem 5 to test for linear independence in P3.

18. {x³ − x, x² − 1, x + 4}
19. {x² + 2x − 1, x² − 5x + 2, 3x² − x}
20. {x³ − x², x² − x, x − 1, x³ − 1}
21. {x³ + 1, x² + 1, x + 1, 1}
22. In P2, let S = {p1(x), p2(x), p3(x), p4(x)}, where p1(x) = 1 + 2x + x², p2(x) = 2 + 5x, p3(x) = 3 + 7x + x², and p4(x) = 1 + x + 3x². Use the method illustrated in Example 8 to obtain a basis for Sp(S). [Hint: Use the basis B = {1, x, x²} to obtain coordinate vectors for p1(x), p2(x), p3(x), and p4(x). Now use the method illustrated in Example 7 of Section 3.4.]
23. Let S be the subset of P2 given in Exercise 22. Find a subset of S that is a basis for Sp(S). [Hint: Proceed as in Exercise 22, but use the technique illustrated in Example 6 of Section 3.4.]
24. Let V be the vector space of all (2 × 2) matrices and let S = {A1, A2, A3, A4}, where

    A1 = [1 2; −1 3],  A2 = [−2 1; 2 −1],  A3 = [−1 −1; 1 −3],  and  A4 = [−2 2; 2 0].

As in Example 8, find a basis for Sp(S).
25. Let V and S be as in Exercise 24. Find a subset of S that is a basis for Sp(S). [Hint: Use Theorem 5 and the technique illustrated in Example 6 of Section 3.4.]
26. In P2, let Q = {p1(x), p2(x), p3(x)}, where p1(x) = 1 + x + 2x², p2(x) = x + 3x², and p3(x) = 1 + 2x + 8x². Use the basis B = {1, x, x²} to show that Q is a basis for P2.
27. Let Q be the basis for P2 given in Exercise 26. Find [p(x)]_Q for p(x) = 1 + x + x².
28. Let Q be the basis for P2 given in Exercise 26. Find [p(x)]_Q for p(x) = a0 + a1x + a2x².
29. In the vector space V of (2 × 2) matrices, let Q = {A1, A2, A3, A4}, where

    A1 = [1 0; 0 0],  A2 = [1 −1; 0 0],  A3 = [⋯],  and  A4 = [3 0; 2 1].

Use the corollary to Theorem 5 and the natural basis for V to show that Q is a basis for V.
30. With V and Q as in Exercise 29, find [A]_Q for A = [⋯].
31. With V and Q as in Exercise 29, find [A]_Q for A = [a b; c d].
32. Give an alternative proof that {1, x, x²} is a linearly independent set in P2 as follows: Let p(x) = a0 + a1x + a2x², and suppose that p(x) = θ(x). Then p(−1) = 0, p(0) = 0, and p(1) = 0. These three equations can be used to show that a0 = a1 = a2 = 0.
33. The set {sin x, cos x} is a subset of the vector space C[−π, π]. Prove that the set is linearly independent. [Hint: Set f(x) = c1 sin x + c2 cos x, and assume that f(x) = θ(x). Then f(0) = 0 and f(π/2) = 0.]

In Exercises 34 and 35, V is the set of functions

    V = {f(x): f(x) = ae^x + be^{2x} + ce^{3x} + de^{4x} for real numbers a, b, c, d}.

It can be shown that V is a vector space.

34. Show that B = {e^x, e^{2x}, e^{3x}, e^{4x}} is a basis for V. [Hint: To see that B is a linearly independent set, let h(x) = c1e^x + c2e^{2x} + c3e^{3x} + c4e^{4x} and assume that h(x) = θ(x). Then h′(x) = θ(x), h″(x) = θ(x), and h‴(x) = θ(x). Therefore, h(0) = 0, h′(0) = 0, h″(0) = 0, and h‴(0) = 0.]
35. Let S = {g1(x), g2(x), g3(x)} be the subset of V, where g1(x) = e^x − e^{4x}, g2(x) = e^{2x} + e^{3x}, and g3(x) = −e^x + e^{3x} + e^{4x}. Use Theorem 5 and basis B of Exercise 34 to show that S is a linearly independent set.
36. Prove that if Q = {v1, v2, ..., vm} is a linearly independent subset of a vector space V, and if w is a vector in V such that w is not in Sp(Q), then {v1, v2, ..., vm, w} is also a linearly independent set in V. [Note: θ is always in Sp(Q).]
37. Let S = {v1, v2, ..., vn} be a subset of a vector space V, where n ≥ 2. Prove that set S is linearly dependent if and only if at least one of the vectors, vj, can be expressed as a linear combination of the remaining vectors.
38. Use Exercise 37 to obtain necessary and sufficient conditions for a set {u, v} of two vectors to be linearly dependent. Determine by inspection whether each of the following sets is linearly dependent or linearly independent.
a) {1 + x, x²}
b) {x, e^x}
c) {x, 3x}