5.6 INNER-PRODUCT SPACES, ORTHOGONAL BASES, AND PROJECTIONS (OPTIONAL)

Up to now we have considered a vector space solely as an entity with an algebraic structure. We know, however, that R^n possesses more than just an algebraic structure; in particular, we know that we can measure the size or length of a vector x in R^n by the quantity ||x|| = sqrt(x^T x). Similarly, we can define the distance from x to y as ||x − y||. The ability to measure distances means that R^n has a geometric structure, which supplements the algebraic structure. The geometric structure can be employed to study problems of convergence, continuity, and the like. In this section we briefly describe how a suitable measure of distance might be imposed on a general vector space. Our development will be brief, and we will leave most of the details to the reader; but the ideas parallel those in Sections 3.6 and 3.8-3.9.

Inner-Product Spaces

To begin, we observe that the geometric structure for R^n is based on the scalar product
x^T y. Essentially the scalar product is a real-valued function of two vector variables: Given x and y in R^n, the scalar product produces a number x^T y. Thus to derive a geometric structure for a vector space V, we should look for a generalization of the scalar-product function. A consideration of the properties of the scalar-product function leads to the definition of an inner-product function for a vector space. (With reference to Definition 7, which follows, we note that the expression u^T v does not make sense in a general vector space V. Thus not only does the nomenclature change, with scalar product becoming inner product, but the notation changes as well, with (u, v) denoting the inner product of u and v.)

DEFINITION 7
An inner product on a real vector space V is a function that assigns a real number, (u, v), to each pair of vectors u and v in V, and that satisfies these properties:

1. (u, u) ≥ 0, and (u, u) = 0 if and only if u = 0.
2. (u, v) = (v, u).
3. (a u, v) = a(u, v).
4. (u, v + w) = (u, v) + (u, w).

The usual scalar product in R^n is an inner product in the sense of Definition 7,
where (x, y) = x^T y. To illustrate the flexibility of Definition 7, we also note that there are other sorts of inner products for R^n. The following example gives another inner product for R^2.

EXAMPLE 1
Let V be the vector space R^2, and let A be the (2 x 2) matrix

    A = [2  2]
        [2  4].

Verify that the function (u, v) = u^T A v is an inner product for R^2.

Solution
Let u be a vector in R^2. Then

    (u, u) = u^T A u = [u1  u2] [2  2] [u1]
                                [2  4] [u2],

so (u, u) = 2u1^2 + 4u1u2 + 4u2^2 = u1^2 + (u1 + 2u2)^2. Thus (u, u) ≥ 0, and (u, u) = 0 if and only if u1 = u2 = 0. This shows that property 1 of Definition 7 is satisfied.

To see that property 2 of Definition 7 holds, note that A is symmetric; that is, A^T = A. Also observe that if u and v are in R^2, then u^T A v is a (1 x 1) matrix, so (u^T A v)^T = u^T A v. It now follows that

    (u, v) = u^T A v = (u^T A v)^T = v^T A^T (u^T)^T = v^T A^T u = (v, u).

Properties 3 and 4 of Definition 7 follow easily from the properties of matrix multiplication, so (u, v) is an inner product for R^2.

In Example 1, an inner product for R^2 was defined in terms of a matrix A:

    (u, v) = u^T A v.
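Before asking when such a matrix works in general, it may help to test Definition 7 numerically for the inner product of Example 1. The following sketch (not from the text; the random sampling is an illustrative choice) checks all four properties for A = [[2, 2], [2, 4]]:

```python
# Numerical spot-check of Definition 7 for (u, v) = u^T A v with the
# matrix A of Example 1 (assumed here to be [[2, 2], [2, 4]]).
import random

A = [[2.0, 2.0], [2.0, 4.0]]

def inner(u, v):
    # (u, v) = u^T A v, written out for the 2x2 case
    return sum(u[i] * A[i][j] * v[j] for i in range(2) for j in range(2))

random.seed(0)
for _ in range(1000):
    u = [random.uniform(-10, 10) for _ in range(2)]
    v = [random.uniform(-10, 10) for _ in range(2)]
    w = [random.uniform(-10, 10) for _ in range(2)]
    a = random.uniform(-10, 10)
    assert inner(u, u) > -1e-9                                    # property 1
    assert abs(inner(u, v) - inner(v, u)) < 1e-9                  # property 2
    assert abs(inner([a * x for x in u], v) - a * inner(u, v)) < 1e-6  # property 3
    assert abs(inner(u, [v[i] + w[i] for i in range(2)])
               - inner(u, v) - inner(u, w)) < 1e-6                # property 4
print("all four properties hold on random samples")
```

Random sampling cannot prove the properties, of course; it only makes the algebraic verification in Example 1 concrete.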
In general, we might ask the following question: "For what (n x n) matrices A does the operation u^T A v define an inner product on R^n?" The answer to this question is suggested by the solution to Example 1. In particular (see Exercises 3 and 32), the operation (u, v) = u^T A v is an inner product for R^n if and only if A is a symmetric positive-definite matrix.

There are a number of ways in which inner products can be defined on spaces of functions. For example, Exercise 6 will show that

    (p, q) = p(0)q(0) + p(1)q(1) + p(2)q(2)

defines one inner product for P2. The following example gives yet another inner product for P2.

EXAMPLE 2
For p(t) and q(t) in P2, verify that

    (p, q) = ∫_0^1 p(t)q(t) dt

is an inner product.

Solution
To check property 1 of Definition 7, note that

    (p, p) = ∫_0^1 p(t)^2 dt

and p(t)^2 ≥ 0 for 0 ≤ t ≤ 1. Thus (p, p) is the area under the curve p(t)^2, 0 ≤ t ≤ 1; see Fig. 5.5, whose caption reads: "The value (p, p) is equal to the area under the graph of y = p(t)^2." Hence (p, p) ≥ 0, and equality holds if and only if p(t) = 0, 0 ≤ t ≤ 1.

Properties 2, 3, and 4 of Definition 7 are straightforward to verify, and we include here only the verification of property 4. If p(t), q(t), and r(t) are in P2, then

    (p, q + r) = ∫_0^1 p(t)[q(t) + r(t)] dt = ∫_0^1 [p(t)q(t) + p(t)r(t)] dt
               = ∫_0^1 p(t)q(t) dt + ∫_0^1 p(t)r(t) dt = (p, q) + (p, r),

as required by property 4.

After the key step of defining a vector-space analog of the scalar product, the rest
is routine. For purposes of reference we call a vector space with an inner product an inner-product space. As in R^n, we can use the inner product as a measure of size: If V is an inner-product space, then for each v in V we define ||v|| (the norm of v) as

    ||v|| = sqrt((v, v)).

Note that (v, v) ≥ 0 for all v in V, so the norm function is always defined.

EXAMPLE 3
Use the inner product for P2 defined in Example 2 to determine ||t^2||.

Solution
By definition, ||t^2|| = sqrt((t^2, t^2)). But (t^2, t^2) = ∫_0^1 t^2 · t^2 dt = ∫_0^1 t^4 dt = 1/5, so ||t^2|| = 1/sqrt(5).

Before continuing, we pause to illustrate one way in which the inner-product space framework is used in practice. One of the many inner products for the vector space
C[0, 1] is

    (f, g) = ∫_0^1 f(x)g(x) dx.

If f is a relatively complicated function in C[0, 1], we might wish to approximate
f by a simpler function, say a polynomial. For definiteness suppose we want to find a polynomial p in P2 that is a good approximation to f. The phrase "good approximation" is too vague to be used in any calculation, but the inner-product space framework allows us to measure size and thus to pose some meaningful problems. In particular, we can ask for a polynomial p* in P2 such that

    ||f − p*|| ≤ ||f − p||  for all p in P2.

Finding such a polynomial p* in this setting is equivalent to minimizing

    ∫_0^1 [f(x) − p(x)]^2 dx

among all p in P2. We will present a procedure for doing this shortly.

Orthogonal Bases

If u and v are vectors in an inner-product space V, we say that u and v are orthogonal
if (u, v) = 0. Similarly, B = {v1, v2, . . . , vp} is an orthogonal set in V if (vi, vj) = 0 when i ≠ j. In addition, if an orthogonal set of vectors B is a basis for V, we call B an orthogonal basis. The next two theorems correspond to their analogs in R^n, and we leave the proofs to the exercises. [See Eq. (5a), Eq. (5b), and Theorem 14 of Section 3.6.]

THEOREM 10
Let B = {v1, v2, . . . , vn} be an orthogonal basis for an inner-product space V. If u is any vector in V, then

    u = ((v1, u)/(v1, v1)) v1 + ((v2, u)/(v2, v2)) v2 + · · · + ((vn, u)/(vn, vn)) vn.

THEOREM 11  Gram-Schmidt Orthogonalization
Let V be an inner-product space, and let {u1, u2, . . . , un} be a basis for V. Let v1 = u1, and for 2 ≤ k ≤ n define vk by

    vk = uk − Σ_{j=1}^{k−1} ((uk, vj)/(vj, vj)) vj.

Then {v1, v2, . . . , vn} is an orthogonal basis for V.

EXAMPLE 4
Let the inner product on P2 be the one given in Example 2. Starting with the natural basis
{1, x, x^2}, use Gram-Schmidt orthogonalization to obtain an orthogonal basis for P2.

Solution
If we let {p0, p1, p2} denote the orthogonal basis, we have p0(x) = 1 and find p1(x) from

    p1(x) = x − ((p0, x)/(p0, p0)) p0(x).

We calculate

    (p0, x) = ∫_0^1 x dx = 1/2  and  (p0, p0) = ∫_0^1 dx = 1;

so p1(x) = x − 1/2. The next step of the Gram-Schmidt orthogonalization process is to form

    p2(x) = x^2 − ((p1, x^2)/(p1, p1)) p1(x) − ((p0, x^2)/(p0, p0)) p0(x).

The required constants are

    (p1, x^2) = ∫_0^1 (x^3 − x^2/2) dx = 1/12,    (p1, p1) = ∫_0^1 (x^2 − x + 1/4) dx = 1/12,
    (p0, x^2) = ∫_0^1 x^2 dx = 1/3,               (p0, p0) = ∫_0^1 dx = 1.

Therefore, p2(x) = x^2 − p1(x) − p0(x)/3 = x^2 − x + 1/6, and {p0, p1, p2} is an orthogonal basis for P2 with respect to the inner product.

EXAMPLE 5
Let B = {p0, p1, p2} be the orthogonal basis for P2 obtained in Example 4. Find the
coordinates of x^2 relative to B.

Solution
By Theorem 10, x^2 = a0 p0(x) + a1 p1(x) + a2 p2(x), where

    a0 = (p0, x^2)/(p0, p0),  a1 = (p1, x^2)/(p1, p1),  a2 = (p2, x^2)/(p2, p2).

The necessary calculations are

    (p0, x^2) = ∫_0^1 x^2 dx = 1/3
    (p1, x^2) = ∫_0^1 [x^3 − (1/2)x^2] dx = 1/12
    (p2, x^2) = ∫_0^1 [x^4 − x^3 + (1/6)x^2] dx = 1/180
    (p0, p0) = ∫_0^1 dx = 1
    (p1, p1) = ∫_0^1 [x^2 − x + 1/4] dx = 1/12
    (p2, p2) = ∫_0^1 [x^2 − x + 1/6]^2 dx = 1/180.

Thus a0 = 1/3, a1 = 1, and a2 = 1. We can easily check that x^2 = (1/3)p0(x) + p1(x) + p2(x).

Orthogonal Projections

We return now to the previously discussed problem of finding a polynomial p* in P2
that is the best approximation of a function f in C[0, 1]. Note that the problem amounts to determining a vector p* in a subspace of an inner-product space, where p* is closer to f than any other vector in the subspace. The essential aspects of this problem can be stated formally as the following general problem:

    Let V be an inner-product space and let W be a subspace of V. Given a vector v in
    V, find a vector w* in W such that

        ||v − w*|| ≤ ||v − w||  for all w in W.   (1)

A vector w* in W satisfying inequality (1) is called the projection of v onto W, or (frequently) the best least-squares approximation to v. Intuitively w* is the nearest vector in W to v.

The solution process for this problem is almost exactly the same as that for the least-squares problem in R^n. One distinction in our general setting is that the subspace W might not be finite dimensional. If W is an infinite-dimensional subspace of V, then there may or may not be a projection of v onto W. If W is finite dimensional, then a projection always exists, is unique, and can be found explicitly. The next two theorems outline this concept, and again we leave the proofs to the reader since they parallel the proof of Theorem 18 of Section 3.9.

THEOREM 12
Let V be an inner-product space, and let W be a subspace of V. Let v be a vector in V, and suppose w* is a vector in W such that

    (v − w*, w) = 0  for all w in W.

Then ||v − w*|| ≤ ||v − w|| for all w in W, with equality holding only for w = w*.

THEOREM 13
Let V be an inner-product space, and let v be a vector in V. Let W be an n-dimensional subspace of V, and let {u1, u2, . . . , un} be an orthogonal basis for W. Then

    ||v − w*|| ≤ ||v − w||  for all w in W

if and only if

    w* = ((v, u1)/(u1, u1)) u1 + ((v, u2)/(u2, u2)) u2 + · · · + ((v, un)/(un, un)) un.   (2)
(u,., u.) (.3. In view of Theorem 13, it follows that when W is a ﬁnitedimensional subspace of
an inner—product space V, we can always ﬁnd projections by ﬁrst ﬁnding an orthogonal
basis for W (by using Theorem II) and then calculating the projection w* from Eq. (2). To illustrate the process of ﬁnding a projection, we return to the innerproduct space
C [0, l] with the subspace 792. As a speciﬁc but rather unrealistic function, f, we choose
f (x) = cosx, x in radians. The inner product is l
(f, g) = [0 f(x)g(x) dx. 8 Chapter 5 l EXAMPLE 6 Solution I EXAMPLE 7 Vector Spaces and Linear Transformations In the vector space C[0, 1], let f (x) = cos x. Find the projection of f onto the sub
Space ”Pg. Let {p0, p1, p2} be the orthogonal basis for P2 found in Example 4. (Note that the inner
product used in Example 4 coincides with the present inner product on C [0, 1]. By
Theorem 13, the projection of f onto P2 is the polynomial p* deﬁned by (f, P0) (f. PI) (f. P2)
(P0. P0) p000 + (PI, Pl)pl(X) + (.172. P2) P2(x). p*(x) =
where 1
    (f, p0) = ∫_0^1 cos(x) dx ≈ .841471
    (f, p1) = ∫_0^1 (x − 1/2) cos(x) dx ≈ −.038962
    (f, p2) = ∫_0^1 (x^2 − x + 1/6) cos(x) dx ≈ −.002394.

From Example 5, we have (p0, p0) = 1, (p1, p1) = 1/12, and (p2, p2) = 1/180. Therefore, p*(x) is given by

    p*(x) = (f, p0)p0(x) + 12(f, p1)p1(x) + 180(f, p2)p2(x)
          ≈ .841471 p0(x) − .467544 p1(x) − .430920 p2(x).

In order to assess how well p*(x) approximates cos x in the interval [0, 1], we can
tabulate p*(x) and cos x at various values of x (see Table 5.1).

    Table 5.1
    x      p*(x)    cos x    p*(x) − cos x
    0.0    1.0034   1.0000     .0034
    0.2     .9789    .9801    −.0012
    0.4     .9198    .9211    −.0013
    0.6     .8263    .8253     .0010
    0.8     .6983    .6967     .0016
    1.0     .5359    .5403    −.0044

EXAMPLE 7
The function Si(x) (important in applications such as optics) is defined as follows:

    Si(x) = ∫_0^x (sin u)/u du,  for x ≠ 0.   (3)

The integral in (3) is not an elementary one and so, for a given value of x, Si(x) must be evaluated using a numerical integration procedure. In this example, we approximate Si(x) by a cubic polynomial for 0 ≤ x ≤ 1. In particular, it can be shown that if we
define Si(0) = 0, then Si(x) is continuous for all x. Thus we can ask: "What is the projection of Si(x) onto the subspace P3 of C[0, 1]?" This projection will serve as an approximation to Si(x) for 0 ≤ x ≤ 1.

Solution
We used the computer algebra system Derive to carry out the calculations. Some of
the steps are shown in Fig. 5.6. (Figure 5.6 reproduces some of the steps used by Derive to generate the projection of Si(x) onto P3 in this example.) To begin, let {p0, p1, p2, p3} be the orthogonal basis for P3 found by the Gram-Schmidt process. From Example 4, we already know that p0(x) = 1, p1(x) = x − 1/2, and p2(x) = x^2 − x + 1/6. To find p3, we first calculate
the inner products (p0, x^3), (p1, x^3), (p2, x^3) (see steps 6-9 in Fig. 5.6 for (p1, x^3) and (p2, x^3)). Using Theorem 11, we find p3 and, for later use, (p3, p3):

    p3(x) = x^3 − (3/2)x^2 + (3/5)x − 1/20
    (p3, p3) = 1/2800

(see steps 15-18 in Fig. 5.6). Finally, by Theorem 13, the projection of Si(x) onto P3 is the polynomial p* defined by

    p*(x) = ((Si, p0)/(p0, p0)) p0(x) + ((Si, p1)/(p1, p1)) p1(x)
                + ((Si, p2)/(p2, p2)) p2(x) + ((Si, p3)/(p3, p3)) p3(x)
          = (Si, p0)p0(x) + 12(Si, p1)p1(x) + 180(Si, p2)p2(x) + 2800(Si, p3)p3(x).

In the expression above for p*, the inner products (Si, pk) for k = 0, 1, 2, and 3 are given by

    (Si, pk) = ∫_0^1 pk(x) Si(x) dx = ∫_0^1 pk(x) [∫_0^x (sin u)/u du] dx

(see steps 49-52 in Fig. 5.6 for 180(Si, p2) and 2800(Si, p3)).

Now, since Si(x) must be estimated numerically, it follows that the inner products (Si, pk) must be estimated as well. Using Derive to approximate the inner products, we obtain the projection (or best least-squares approximation)

    p*(x) = .486385 p0(x) + .951172 p1(x) − .0804033 p2(x) − .0510442 p3(x).

To assess how well p*(x) approximates Si(x) in [0, 1], we tabulate each function at a few selected points (see Table 5.2). As can be seen from Table 5.2, it appears that p*(x) is a very good approximation to Si(x).
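The Derive computation can be reproduced with ordinary numerical quadrature. The following sketch (an illustration of this edit, not the text's program) estimates the coefficients (Si, pk)/(pk, pk) with composite Simpson's rule, using the basis polynomials and the norms (pk, pk) derived above:

```python
# Reproduce the coefficients of the projection of Si(x) onto P3
# using composite Simpson's rule in place of a computer algebra system.
import math

def simpson(g, a, b, n=200):
    # composite Simpson's rule on [a, b]; n must be even
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def si(x):
    # Si(x) = integral_0^x sin(u)/u du; the integrand is taken to be 1 at u = 0
    if x == 0.0:
        return 0.0
    return simpson(lambda u: math.sin(u) / u if u != 0.0 else 1.0, 0.0, x)

# Orthogonal basis for P3 from Examples 4 and 7
p = [lambda x: 1.0,
     lambda x: x - 0.5,
     lambda x: x * x - x + 1.0 / 6,
     lambda x: x**3 - 1.5 * x * x + 0.6 * x - 0.05]

# Theorem 13 coefficients (Si, p_k)/(p_k, p_k), with the norms from the text
norms = [1.0, 1.0 / 12, 1.0 / 180, 1.0 / 2800]
coeffs = [simpson(lambda x, pk=pk: pk(x) * si(x), 0.0, 1.0) / nk
          for pk, nk in zip(p, norms)]
print(coeffs)   # approximately .486385, .951172, -.0804033, -.0510442
```

The four printed values should agree with the Derive results quoted above to about six digits.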
    Table 5.2
    x      p*(x)     Si(x)     p*(x) − Si(x)
    0.0    .000049   .000000    .000049
    0.2    .199578   .199556    .000022
    0.4    .396449   .396461   −.000012
    0.6    .588113   .588128   −.000015
    0.8    .772119   .772095    .000024
    1.0    .946018   .946083   −.000065

EXERCISES

1. Prove that (x, y) = 4x1y1 + x2y2 is an inner product on R^2, where x = [x1, x2]^T and y = [y1, y2]^T.

2. Prove that (x, y) = a1x1y1 + a2x2y2 + · · · + anxnyn is an inner product on R^n, where a1, a2, . . . , an are positive real numbers and where x = [x1, x2, . . . , xn]^T and y = [y1, y2, . . . , yn]^T.

3. A real (n x n) symmetric matrix A is called positive
definite if x^T A x > 0 for all x in R^n, x ≠ 0. Let A be a symmetric positive-definite matrix, and verify that (x, y) = x^T A y defines an inner product on R^n; that is, verify that the four properties of Definition 7 are satisfied.

4. Prove that the following symmetric matrix A is positive definite. Prove this by choosing an arbitrary vector x in R^2, x ≠ 0, and calculating x^T A x.

    A = [·  ·]
        [·  ·]

5. In P2 let p(x) = a0 + a1x + a2x^2 and q(x) = b0 + b1x + b2x^2. Prove that (p, q) = a0b0 + a1b1 + a2b2 is an inner product on P2.

6. Prove that (p, q) = p(0)q(0) + p(1)q(1) + p(2)q(2) is an inner product on P2.

7. Let A = (aij) and B = (bij) be (2 x 2) matrices.
Show that (A, B) = a11b11 + a12b12 + a21b21 + a22b22 is an inner product for the vector space of all (2 x 2) matrices.

8. For x = [1, −2]^T and y = [0, 1]^T in R^2, find (x, y), ||x||, ||y||, and ||x − y|| using the inner product given in Exercise 1.

9. Repeat Exercise 8 with the inner product defined in Exercise 3 and the matrix A given in Exercise 4.

10. In P2 let p(x) = −1 + 2x + x^2 and q(x) = 1 − x + 2x^2. Using the inner product given in Exercise 5, find (p, q), ||p||, ||q||, and ||p − q||.

11. Repeat Exercise 10 using the inner product defined in Exercise 6.

12. Show that {1, x, x^2} is an orthogonal basis for P2 with the inner product defined in Exercise 5 but not with the inner product in Exercise 6.

13. In R^2 let S = {x: ||x|| = 1}. Sketch a graph of S if (x, y) = x^T y. Now graph S using the inner product given in Exercise 1.

14. Let A be the matrix given in Exercise 4, and for x, y in R^2 define (x, y) = x^T A y (see Exercise 3). Starting with the natural basis {e1, e2}, use Theorem 11 to obtain an orthogonal basis {u1, u2} for R^2.

15. Let {u1, u2} be the orthogonal basis for R^2 obtained in Exercise 14 and let v = [3, 4]^T. Use Theorem 10 to find scalars a1, a2 such that v = a1u1 + a2u2.

16. Use Theorem 11 to calculate an orthogonal basis {p0, p1, p2} for P2 with respect to the inner product in Exercise 6. Start with the natural basis {1, x, x^2} for P2.

17. Use Theorem 10 to write q(x) = 2 + 3x − 4x^2 in terms of the orthogonal basis {p0, p1, p2} obtained in Exercise 16.

18. Show that the function defined in Exercise 6 is not an inner product for P3. [Hint: Find p(x) in P3 such that (p, p) = 0, but p ≠ 0.]

19. Starting with the natural basis {1, x, x^2, x^3, x^4}, generate an orthogonal basis for P4 with respect to the inner product.

20. If V is an inner-product space, show that (v, 0) = 0 for each vector v in V.

21. Let V be an inner-product space, and let u be a vector in V such that (u, v) = 0 for every vector v in V. Show that u = 0.

22. Let a be a scalar and v a vector in an inner-product space V. Prove that ||av|| = |a| ||v||.

23. Prove that if {v1, v2, . . . , vk} is an orthogonal set of nonzero vectors in an inner-product space, then this set is linearly independent.

24. Prove Theorem 10.

25. Approximate x^3 with a polynomial in P2. [Hint:
Use the inner product

    (p, q) = ∫_0^1 p(t)q(t) dt,

and let {p0, p1, p2} be the orthogonal basis for P2 obtained in Example 4. Now apply Theorem 13.]

26. In Examples 4 and 7 we found p0(x), . . . , p3(x), which are orthogonal with respect to

    (f, g) = ∫_0^1 f(x)g(x) dx.

Continue the process, and find p4(x) so that {p0, p1, . . . , p4} is an orthogonal basis for P4. (Clearly there is an infinite sequence of polynomials p0, p1, . . . , pn, . . . that satisfy

    ∫_0^1 pi(x)pj(x) dx = 0,  i ≠ j.

These are called the Legendre polynomials.)

27. With the orthogonal basis for P3 obtained in Example 7, use Theorem 13 to find the projection of f(x) = cos x in P3. Construct a table similar to Table 5.1 and note the improvement.

28. An inner product on C[−1, 1] is

    (f, g) = (2/π) ∫_{−1}^{1} f(x)g(x)/sqrt(1 − x^2) dx.

Starting with the set {1, x, x^2, x^3, . . .}, use the Gram-Schmidt process to find polynomials T0(x), T1(x), T2(x), and T3(x) such that (Ti, Tj) = 0 when i ≠ j. These polynomials are called the Chebyshev polynomials of the first kind. [Hint:
Make a change of variables x = cos θ.]

29. A sequence of orthogonal polynomials usually satisfies a three-term recurrence relation. For example, the Chebyshev polynomials are related by

    T_{n+1}(x) = 2x T_n(x) − T_{n−1}(x),  n = 1, 2, . . . ,   (R)

where T0(x) = 1 and T1(x) = x. Verify that the polynomials defined by the relation (R) above are indeed orthogonal in C[−1, 1] with respect to the inner product in Exercise 28. Verify this as follows:
a) Make the change of variables x = cos θ, and use induction to show that Tk(cos θ) = cos kθ, k = 0, 1, . . . , where Tk(x) is defined by (R).
b) Using part a), show that (Ti, Tj) = 0 when i ≠ j.
c) Use induction to show that Tk(x) is a polynomial of degree k, k = 0, 1, . . . .
d) Use (R) to calculate T2, T3, T4, and T5.

30. Let C[−1, 1] have the inner product of Exercise 28, and let f be in C[−1, 1]. Use Theorem 13 to prove that ||f − p*|| ≤ ||f − p|| for all p in Pn if

    p*(x) = a0/2 + Σ_{j=1}^{n} aj Tj(x),

where aj = (f, Tj), j = 0, 1, . . . , n.

31. The iterated trapezoid rule provides a good estimate of ∫_a^b f(x) dx when f(x) is periodic in [a, b]. In particular, let N be a positive integer, and let h = (b − a)/N. Next, define xi by xi = a + ih, i = 0, 1, . . . , N, and suppose f(x) is in C[a, b]. If we define A(f) by

    A(f) = (h/2) f(x0) + h Σ_{j=1}^{N−1} f(xj) + (h/2) f(xN),

then A(f) is the iterated trapezoid rule applied to f(x). Using the result in Exercise 30, write a computer program that generates a good approximation to f(x) in C[−1, 1]. That is, for an input function f(x) and a specified value of n, calculate estimates of a0, a1, . . . , an, where ak = (f, Tk) ≈ A(gk). To do this calculation, make the usual change of variables x = cos θ so that

    ak = (2/π) ∫_0^π f(cos θ) cos(kθ) dθ,  k = 0, 1, . . . , n.

Use the iterated trapezoid rule to estimate each ak.
(R) can be used to evaluate p*(x) at any point x in [—1.1]. Show that if A is a real (n x n) matrix and if the
expression (u, v) = uTAv deﬁnes an inner product
on R", then A must be symmetric and positive def
inite (see Exercise 3 for the deﬁnition of positive
deﬁnite). [Hint Consider (e;, e j).] ...
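One possible program for Exercise 31 (a sketch of the approach the exercise describes, not a solution from the text; parameter choices such as N = 64 are assumptions) is:

```python
# Estimate Chebyshev coefficients a_k = (f, T_k) by applying the iterated
# trapezoid rule A(.) on [0, pi] to the periodic integrand
# g_k(theta) = (2/pi) f(cos theta) cos(k theta), then evaluate
# p*(x) = a_0/2 + sum_{j>=1} a_j T_j(x) via the recurrence (R).
import math

def chebyshev_coeffs(f, n, N=64):
    # a_k ~= A(g_k) with the composite trapezoid rule, h = pi/N
    h = math.pi / N
    coeffs = []
    for k in range(n + 1):
        g = lambda t: (2.0 / math.pi) * f(math.cos(t)) * math.cos(k * t)
        ak = (h / 2) * (g(0.0) + g(math.pi)) + h * sum(g(j * h) for j in range(1, N))
        coeffs.append(ak)
    return coeffs

def cheb_eval(coeffs, x):
    # p*(x) = a_0/2 + sum_{j=1}^{n} a_j T_j(x), using T_{n+1} = 2x T_n - T_{n-1}
    t_prev, t_curr = 1.0, x          # T_0 and T_1
    total = coeffs[0] / 2 + (coeffs[1] * x if len(coeffs) > 1 else 0.0)
    for aj in coeffs[2:]:
        t_prev, t_curr = t_curr, 2 * x * t_curr - t_prev
        total += aj * t_curr
    return total

# Test on f(x) = e^{2x}, as the exercise suggests
f = lambda x: math.exp(2 * x)
a = chebyshev_coeffs(f, n=10)
err = max(abs(cheb_eval(a, x / 50) - f(x / 50)) for x in range(-50, 51))
print(err)
```

With n = 10 the printed maximum error over [−1, 1] is already very small, since the Chebyshev coefficients of e^{2x} decay rapidly; the trapezoid rule is spectrally accurate here because the transformed integrand is smooth and periodic.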
Fall '11, OSU