math119lecnotes-set005

Chapter 5: Infinite Series and Their Applications

The exceptional ability of Taylor polynomials to give excellent approximations of various functions led mathematicians to explore their power further and further. Soon a pattern emerged. There were functions such as e^x, sin x and cos x where Taylor polynomials of higher and higher order were giving better and better approximations over larger and larger intervals. On the other hand, they found functions such as 1/(1+x) where the polynomials were able to do an excellent job in the neighborhood of x = 0, but were absolutely terrible near x = 1. It took a long time, but finally the mystery was explained; and the objective of this chapter is to give a summary of this story.

§5.1 Pushing the Approximation to the Limit

Consider the function f : x ↦ sin x for x ∈ R, and let us apply to it the Taylor formula (4.8) in the neighborhood of x_0 = 0. Computing the derivatives, we find

    f^{(0)}(0) = f(0) = \sin 0 = 0,
    f^{(1)}(0) = \cos 0 = 1,
    f^{(2)}(0) = -\sin 0 = 0,
    f^{(3)}(0) = -\cos 0 = -1,
    f^{(4)}(0) = \sin 0 = 0,

and from here on the cycle repeats itself. Substituting these values into Taylor's formula we obtain

    P_{1,0}(x) = f(0) + f'(0) x = x,
    P_{2,0}(x) = f(0) + f'(0) x + \frac{1}{2!} f''(0) x^2 = x,
    P_{3,0}(x) = \cdots = x - \frac{x^3}{3!},
    P_{5,0}(x) = \cdots = x - \frac{x^3}{3!} + \frac{x^5}{5!},

and so on. As you see, only the odd powers of x appear, and so Taylor's formula may be written in general as

(5.1)    \sin x \approx \sum_{k=0}^{n} (-1)^k \frac{x^{2k+1}}{(2k+1)!} = P_{2n+1,0}(x).

Four of these are plotted in Fig. 5.1, namely P_1, P_5, P_9 and P_19. They all do a marvelous job near the origin, even the linear approximation P_{1,0}(x)!

[Fig. 5.1: the Taylor polynomials P_{1,0}, P_{5,0}, P_{9,0} and P_{19,0} plotted against sin x.]

But something miraculous happens: as we move away from the origin, the higher order polynomials are able to approximate sin x very accurately over larger and larger intervals. For instance, P_{19,0}(x) is indistinguishable from sin x (at least on that graph) for -2π ≤ x ≤ 2π. Now we knew from Chapter 4 that the Taylor polynomials are extrapolating polynomials; starting with information known only at a point (here, the origin), they try to give us the values of the function away from it. But we expect them to fail from some value of x onwards. And some of them do just that. For example, P_{9,0}(x) is completely off the mark near x = 2π. The funny thing is that when one of them fails to do the job, we can pick a higher order one to come to the rescue! And this pattern seems to hold no matter how far we go from the origin, as Fig. 5.2 shows.

[Fig. 5.2: higher order polynomials P_{n,0}(x) approximating sin x farther from the origin.]

Remark 1. Is this a fluke? Perhaps this amazing property of the Taylor polynomials holds only for f(x) = sin x. Try some other function, e.g., x ↦ cos x and x ↦ e^x, for which the derivatives at x_0 = 0 are easy to find. Sketch the graphs, and you will be convinced that the same effect occurs!

Emboldened by this discovery, we now suspect that the Taylor polynomials can approximate any function. But before we jump to conclusions let us experiment further. Consider the function f : x ↦ 1/(1+x), and let us develop the sequence of its Taylor polynomials around the origin:

    f^{(0)}(0) = \frac{1}{1+0} = 1,
    f^{(1)}(0) = \frac{d}{dx}(1+x)^{-1} \Big|_{x=0} = -(1+x)^{-2} \Big|_{x=0} = -1,
    f^{(2)}(0) = 2(1+x)^{-3} \Big|_{x=0} = 2,
    f^{(3)}(0) = -2 \cdot 3 \,(1+x)^{-4} \Big|_{x=0} = -2 \cdot 3,
    f^{(4)}(0) = 2 \cdot 3 \cdot 4 \,(1+x)^{-5} \Big|_{x=0} = 2 \cdot 3 \cdot 4,

and so on. It is obvious that f^{(k)}(0) = 1 \cdot 2 \cdot 3 \cdots k \cdot (-1)^k = (-1)^k k!; hence the Taylor formula (4.13) simply gives

(5.2)    \frac{1}{1+x} \approx \sum_{k=0}^{n} (-1)^k x^k = P_{n,0}(x).

A few of these are plotted in Fig. 5.3 (dotted curves) along with 1/(1+x) itself (heavy line), and we get a big surprise! We see that all the polynomials, no matter what their order is, are only good for -1 < x < 1 and nowhere else!

[Fig. 5.3: Taylor polynomial approximations P_{n,0}(x) to 1/(1+x).]
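This contrast is easy to check numerically. The short Python sketch below is my own illustration (it is not part of the original notes, and the helper names taylor_sin and taylor_recip are mine); it evaluates the partial sums (5.1) and (5.2) and prints the approximation errors. For sin x the error at x = 2π shrinks as the order grows, while for 1/(1+x) the error shrinks only when |x| < 1.

    import math

    def taylor_sin(x, n):
        # P_{2n+1,0}(x) from Eq. (5.1): sum of (-1)^k x^(2k+1)/(2k+1)! for k = 0..n
        return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(n + 1))

    def taylor_recip(x, n):
        # P_{n,0}(x) from Eq. (5.2): sum of (-1)^k x^k for k = 0..n
        return sum((-1)**k * x**k for k in range(n + 1))

    x = 2 * math.pi
    for n in (4, 8, 9):                      # the polynomials P_9, P_17 and P_19
        print(2*n + 1, abs(taylor_sin(x, n) - math.sin(x)))
    # the error drops from about 12 (P_9) to well below 0.04 (P_17, P_19)

    for x in (0.5, 0.9, 1.5):
        print(x, [abs(taylor_recip(x, n) - 1/(1 + x)) for n in (5, 20, 50)])
    # for x = 0.5 and 0.9 the errors shrink as n grows; for x = 1.5 they blow up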
What in the world is happening? Why is sin x OK while 1/(1+x) gives us trouble? Well, the function 1/(1+x) is not defined at x = -1; so it seems to make perfect sense that we cannot approximate something that does not exist. But wait, that cannot be the true explanation. Look on the right side of the origin; there is no singularity at x = 1, and yet the polynomials are exactly as lousy beyond that point! In fact, there are many other functions that show the same behavior, one of which is g(x) = 1/(1+x^2), which does not have any singularity at all, for it is defined for every real x.

Clearly, something much more profound must be going on. And, to explore this, we will now consider Taylor's Theorem with integral remainder, Eq. (4.20). Let us apply this theorem to sin x:

(5.3)    \sin x = \underbrace{\sum_{k=0}^{n} (-1)^k \frac{x^{2k+1}}{(2k+1)!}}_{P_{2n+1,0}(x)} + \underbrace{\int_0^x \frac{(x-t)^{2n+1}}{(2n+1)!} \, \frac{d^{2n+2}}{dt^{2n+2}} \sin t \; dt}_{R_{2n+1}(x)}.

Note that the remainder is easy to estimate, for |d^{2n+2}/dt^{2n+2} \sin t| ≤ 1, and so

(5.4)    |R_{2n+1}(x)| \le \left| \int_0^x \frac{(x-t)^{2n+1}}{(2n+1)!} \, dt \right| = \frac{|x|^{2n+2}}{(2n+2)!}.

This formula tells us exactly why the miracle we have been talking about occurs! Say x = 0.1; in other words, x is small. Then even a low degree polynomial will do a good job. For instance, for n = 1 we have

    \sin(0.1) = 0.1 - \frac{(0.1)^3}{3!} + R_3(0.1),  with  |R_3(0.1)| \le \frac{(0.1)^4}{4!},

and so P_{3,0}(0.1) = 0.099833 approximates sin(0.1) with an error no greater than (0.1)^4/4! ≈ 4.2 × 10^{-6}. (The calculator gives sin(0.1) = 0.099833417.) Pretty good, eh? On the other hand, for x = 2π we have

    \sin 2\pi = 2\pi - \frac{(2\pi)^3}{3!} + R_3(2\pi) \approx -35.06 + R_3(2\pi),  with  |R_3(2\pi)| \le \frac{(2\pi)^4}{4!} \approx 64.94.

This should be 0, so the error is very large; this approximation is terrible! However, if we use P_{17,0}(2π) the error is

    |R_{17}(2\pi)| \le \frac{(2\pi)^{18}}{18!} \approx \frac{2.3 \times 10^{14}}{6.4 \times 10^{15}} \approx 0.036,

much better! And, of course, things get really good with P_19, P_21, and so on (see Fig. 5.2).

Questions:
1. Take x = "a gazillion"; can we find an n large enough to approximate sin("a gazillion") by P_{2n+1}("a gazillion") with a small error?
2. What happens to the remainder as n → ∞?

We shall take up these questions in the next section.

§5.2 The Taylor Series for sin x

Back to Eq. (5.4). What happens if we take the limit as n → ∞? In other words, can we compute (for a given x)

(5.5)    \lim_{n\to\infty} \frac{|x|^n}{n!} \; ?

At first it may seem plausible to guess that the value of the limit will depend on how large x is. For although it is obvious that

    \lim_{n\to\infty} \frac{(0.9)^n}{n!} = 0,  and so on,

it is not so obvious what

    \lim_{n\to\infty} \frac{(\text{"a gazillion"})^n}{n!}

might be. On the other hand, take an arbitrary integer N < n. Then we may write

    n! = \underbrace{1 \cdot 2 \cdot 3 \cdots N}_{N!} \cdot \underbrace{(N+1)(N+2)\cdots(n-1)\,n}_{n-N \text{ factors}},

and we certainly have

    (N+1)(N+2)\cdots(n-1)\,n \ge \underbrace{N \cdot N \cdot N \cdots N}_{n-N \text{ factors}} = N^{\,n-N}.

Thus n! ≥ N! N^{n-N}, and so

    \frac{|x|^n}{n!} \le \frac{|x|^n}{N! \, N^{\,n-N}} = \frac{N^N}{N!} \left( \frac{|x|}{N} \right)^n.

It follows that

(5.6)    0 \le \frac{|x|^n}{n!} \le \frac{N^N}{N!} \left( \frac{|x|}{N} \right)^n.

Now, as we let n → ∞, we have the following situation. No matter how large x is, we can pick an integer N such that |x| < N < n, and hence |x|/N < 1. What happens when n → ∞? Simply this: (|x|/N)^n → 0. Thus, taking the limit as n → ∞ in Eq. (5.6), we get

(5.7)    0 \le \lim_{n\to\infty} \frac{|x|^n}{n!} \le 0,

and so the limit must be zero; an instance of application of the Squeeze Theorem.

Applying this result to (5.4) immediately gives lim_{n→∞} |R_{2n+1}(x)| = 0, since it does not matter whether the exponent is n or 2n+2. What about R_{2n+1}(x) itself? Easy: |R_{2n+1}(x)| ≤ |x|^{2n+2}/(2n+2)!, and, again by the Squeeze Theorem, we conclude that

(5.8)    \lim_{n\to\infty} R_{2n+1}(x) = 0.
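As a quick numerical sanity check (my own addition, not part of the notes), the right-hand side of (5.4) can be tabulated for x = 2π; the values reproduce the 64.94 and 0.036 figures above and then collapse toward zero, exactly as (5.8) says they must.

    import math

    x = 2 * math.pi
    for n in range(1, 13):
        bound = x**(2*n + 2) / math.factorial(2*n + 2)   # right-hand side of (5.4)
        print(f"n = {n:2d}   |R_{2*n + 1}(2 pi)| <= {bound:.3e}")
    # n = 1 gives about 6.49e+01 and n = 8 gives about 3.6e-02, matching the text;
    # by n = 12 the bound is already down to about 1.4e-06 and still falling fast.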
Now we are home free! Take n → ∞ in Eq. (5.3):

    \sin x = \lim_{n\to\infty} P_{2n+1,0}(x) + \lim_{n\to\infty} R_{2n+1}(x) = \lim_{n\to\infty} P_{2n+1,0}(x),

and, remembering the definition of P_{2n+1,0}(x), we may write this result as

(5.9)    \sin x = \lim_{n\to\infty} \sum_{k=0}^{n} (-1)^k \frac{x^{2k+1}}{(2k+1)!},   for all x.

This is simply amazing. Up to now we have been thinking of the definition of sin x geometrically, as the ratio of two lengths (see MEM, p. 112, Fig. 1.45). Now we have discovered that the sine function can also be defined as a limit of a sequence of polynomials!

Notation. In modern books the r.h.s. of (5.9) is written differently, for the following reason:

    \sin x = \lim_{n\to\infty} P_{2n+1,0}(x) = \lim_{n\to\infty} \sum_{k=0}^{n} (-1)^k \frac{x^{2k+1}}{(2k+1)!}

is abbreviated, by universal convention, as

(5.10)    \sin x = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}.

The expression (5.10) is called the Taylor series of the sine function, centered at zero; it is an instance of an infinite series. It is also called the Maclaurin series.

WARNING. Superficially, it seems that (5.10) is obtained from the preceding expression by "plugging in" ∞ in place of n on top of the sigma symbol. Nothing could be further from the truth! The symbol \sum_{k=0}^{\infty} is just shorthand for the limit (5.9). Secondly, the notation \sum_{k=0}^{\infty} is misleading for another reason; namely, it seems to suggest that (5.10) is an actual sum, which it is not! To remember this important caveat forever, all you have to do is remind yourself of the meaning of the symbol ∞: it is merely shorthand for the phrase "as large as you wish" (as we saw in Calc. I). Thus (5.10) is short for

    \sin x = \sum_{k=0}^{\text{large at will}} (-1)^k \frac{x^{2k+1}}{(2k+1)!},

and, clearly, this is no sum. In fact, this way of writing is meaningless! So why do people insist on writing (5.10)? Beats me! The main reason seems to be tradition (and mental laziness). But since everybody uses it, you had better get used to it! Oh, by the way, if you want to have fun and see the disasters produced by this misinterpretation of an "infinite sum", go and read the DIVERTISSEMENT (pp. 5.18-24).

Everything we have done to derive the Taylor series of the sine function applies equally well to the cosine and to the exponential functions. Practice the method by doing the following exercise; a computer check is sketched right after it.

Exercise. Derive the Taylor series of (a) x ↦ cos x; (b) x ↦ e^x.
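If you would like to check your answers to this exercise, a computer algebra system will produce the first terms of these series. Here is a minimal sketch using Python with the sympy library (my choice of tool; the notes themselves do not rely on any software):

    import sympy as sp

    x = sp.symbols('x')
    # Maclaurin expansions (Taylor series centered at zero); the last argument
    # is the order of the O(...) truncation term
    print(sp.series(sp.cos(x), x, 0, 9))
    print(sp.series(sp.exp(x), x, 0, 6))
    # expected output (up to ordering of terms):
    #   1 - x**2/2 + x**4/24 - x**6/720 + x**8/40320 + O(x**9)
    #   1 + x + x**2/2 + x**3/6 + x**4/24 + x**5/120 + O(x**6)

Compare the printed coefficients with the pattern you obtain by hand from the derivatives of cos x and e^x at x_0 = 0.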
§5.3 Convergence of Infinite Series

Because the concept of infinity is involved, infinite series are tricky objects to work with. Abel is reported to have said that infinite series are the work of the devil. In particular, we must be very careful in their manipulations. Hence, we are going to explore in the next two sections under what conditions the ordinary operations of calculus and algebra can be applied to infinite series.

We have already seen an example of this carefulness when showing that the Taylor series of a function converges if the remainder R_n(x) → 0 as n → ∞, for all x (as in the case of sin x) or only on an interval (as in the case of 1/(1+x)). Unfortunately, except for the simple cases we have seen already, showing that R_n(x) → 0 as n → ∞ is difficult. Fortunately, the great mathematicians of the past have developed many practical alternatives, and we are going to show a few of these in this section.

We have seen already that, since Newton and Leibniz, infinite series

(5.11)    a_0 + a_1 + a_2 + \cdots + a_n + \cdots = \sum_{k \ge 0} a_k

have been the universal tool for all calculations. The idea is to consider the sequence {s_n} of partial sums

    s_0 = a_0,   s_1 = a_0 + a_1,   s_2 = a_0 + a_1 + a_2,   \ldots,   s_n = a_0 + a_1 + a_2 + \cdots + a_n = \sum_{k=0}^{n} a_k,

and then show that the convergence of the infinite series (5.11) means that the limit of the partial sums s_n as n → ∞ exists:

(5.12)    \sum_{k=0}^{\infty} a_k = \lim_{n\to\infty} s_n = s.

Now computing the limit is quite difficult in general; so theorems have been developed to quickly check for convergence. These theorems are called Tests of Convergence, and we are going to list a few.

The Alternating Series (Leibniz) Test. Consider the infinite series

(5.13)    a_0 - a_1 + a_2 - a_3 + \cdots = \sum_{k \ge 0} (-1)^k a_k,

and suppose that for all k

    a_k > 0,   a_{k+1} < a_k,   a_k → 0 as k → ∞.

Then the series (5.13) converges to some value s, and we have the estimate

(5.14)    |s - s_n| \le a_{n+1},

which means that the error in the n-th partial sum is no larger than the first neglected term. (No proof.)

Example. Consider the alternating harmonic series

    1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \sum_{k \ge 0} (-1)^k \frac{1}{k+1}.

For all k we have

    \frac{1}{k+1} > 0,   \frac{1}{k+2} < \frac{1}{k+1},   \frac{1}{k+1} → 0 as k → ∞;

hence the series converges by the Leibniz criterion, and (5.14) guarantees that if we truncate the series at k = n then the error made is no greater than 1/(n+2), which can be made very small if we make n very large.

Remark. There is danger lurking around alternating series! Consider the following example:

    S = 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \cdots   (converges, see the Example above).

Now rewrite it as follows (i.e., rearranging the order of the terms):

    S = 1 - \frac{1}{2} - \frac{1}{4} + \frac{1}{3} - \frac{1}{6} - \frac{1}{8} + \frac{1}{5} - \frac{1}{10} - \frac{1}{12} + \frac{1}{7} - \frac{1}{14} - \frac{1}{16} + \cdots
      = \frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \frac{1}{14} - \frac{1}{16} + \cdots
      = \frac{1}{2} \left( 1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \frac{1}{6} + \frac{1}{7} - \frac{1}{8} + \cdots \right)
      = \frac{1}{2} S.

Therefore, S = (1/2) S, or 1 = 1/2. You gotta be kidding!!!

Moral: The value of an "infinite sum" can depend on the order of the terms. This is another example of the fact that "infinite sums" are no sums at all, since they do not obey the laws of addition. (A numerical check of this rearrangement is sketched at the end of this section.)

Question: Are there any infinite series in which re-arrangement of the terms can be performed safely? Yes, there are! (Dirichlet, 1837.) They are those series \sum_{k \ge 0} a_k such that the series made up with the absolute values of the a_k, i.e. the series \sum_{k \ge 0} |a_k|, converges. These are called "absolutely convergent" series.

The Ratio Test. If the terms of the series \sum_{k \ge 0} a_k satisfy the condition

(5.15)    \lim_{k\to\infty} \left| \frac{a_{k+1}}{a_k} \right| = L < 1,

then the series is absolutely convergent. On the other hand, if L > 1 then the series diverges. (If L = 1 then the test is inconclusive; anything can happen.)

Example. Does the series \sum_{k \ge 0} \frac{1}{k!} converge? We have:

    \left| \frac{a_{k+1}}{a_k} \right| = \frac{1/(k+1)!}{1/k!} = \frac{k!}{(k+1) \cdot k!} = \frac{1}{k+1} \to 0 < 1.

Hence, the given series converges by the ratio test.

Example. Does the series \sum_{k \ge 1} \frac{2^k}{k^8} converge? We have:

    \left| \frac{a_{k+1}}{a_k} \right| = \frac{2^{k+1}/(k+1)^8}{2^k/k^8} = 2 \, \frac{k^8}{(k+1)^8} = 2 \left( \frac{k}{k+1} \right)^8 \to 2 > 1.

Hence, the given series diverges by the ratio test.

Observation. There are many criteria for testing the convergence of infinite series. The two above, however, are the most important ones in applications of the calculus to science and engineering. From a theoretical point of view, instead, the most important one is the Comparison Test (for series with positive terms), which is given in MEM.
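Finally, both the Leibniz error estimate (5.14) and the rearrangement surprise in the Remark above can be observed numerically. The sketch below is again my own illustration (the helper names alt_harmonic and rearranged are mine, and the very long partial sum is only a stand-in for the true limit S): the truncation error of the alternating harmonic series stays below the first neglected term, while the partial sums of the rearranged series settle near one half of S, just as the algebra predicted.

    def alt_harmonic(n):
        # s_n = sum_{k=0}^{n} (-1)^k / (k+1), the n-th partial sum of the alternating harmonic series
        return sum((-1)**k / (k + 1) for k in range(n + 1))

    def rearranged(m):
        # m blocks of the rearranged pattern  +1/(2j-1) - 1/(4j-2) - 1/(4j)
        return sum(1/(2*j - 1) - 1/(4*j - 2) - 1/(4*j) for j in range(1, m + 1))

    S = alt_harmonic(10**6)               # a very long partial sum, used as a proxy for the limit S
    for n in (10, 100, 1000):
        err = abs(S - alt_harmonic(n))
        print(n, err, err <= 1/(n + 2))   # the Leibniz bound (5.14): error <= first neglected term
    print(rearranged(10**5), S / 2)       # the two printed numbers agree to about five decimals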