

CHAPTER 1. INTRODUCTION

Subtracting (1.27) from (1.26) we get
\[
C_2 h^2 = \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right) + \frac{4}{3}\left(R(h/2) - R(h)\right). \tag{1.28}
\]
The last term on the right hand side is $o(h^2)$. Hence, for $h$ sufficiently small, we have
\[
C_2 h^2 \approx \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right) \tag{1.29}
\]
and this could provide a good, computable estimate for the error, i.e.
\[
E_h[f] \approx \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right). \tag{1.30}
\]
The key here is that $h$ has to be sufficiently small to make the asymptotic approximation (1.29) valid. We can check this by working backwards. If $h$ is sufficiently small, then evaluating (1.29) at $h/2$ we get
\[
C_2 \left(\frac{h}{2}\right)^2 \approx \frac{4}{3}\left(T_{h/4}[f] - T_{h/2}[f]\right) \tag{1.31}
\]
and consequently the ratio
\[
q(h) = \frac{T_{h/2}[f] - T_h[f]}{T_{h/4}[f] - T_{h/2}[f]} \tag{1.32}
\]
should be approximately 4. Thus, $q(h)$ offers a reliable, computable indicator of whether or not $h$ is sufficiently small for (1.30) to be an accurate estimate of the error.

1.2.5 Error Correction and Richardson Extrapolation

If $h$ is sufficiently small, as explained above, we can use (1.30) to improve the accuracy of $T_h[f]$ with the following approximation²
\[
S_h[f] := T_h[f] + \frac{4}{3}\left(T_{h/2}[f] - T_h[f]\right). \tag{1.33}
\]

²The symbol := means equal by definition.
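As an illustration, the ratio $q(h)$ of (1.32) and the computable error estimate (1.30) can be evaluated numerically. The following is a minimal Python sketch for $f(x)=e^x$ on $[0,1]$ with $h=1/16$; the function name `trapezoid` is our own, not the text's.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule T_h[f] with n subintervals of width h = (b-a)/n."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

# f(x) = e^x on [0, 1]; the exact integral is e - 1.
f = math.exp
a, b = 0.0, 1.0
n = 16  # h = 1/16

T_h  = trapezoid(f, a, b, n)        # T_h[f]
T_h2 = trapezoid(f, a, b, 2 * n)    # T_{h/2}[f]
T_h4 = trapezoid(f, a, b, 4 * n)    # T_{h/4}[f]

# Ratio (1.32): should be close to 4 when h is sufficiently small.
q = (T_h2 - T_h) / (T_h4 - T_h2)

# Computable error estimate (1.30) versus the true error.
est_error  = (4.0 / 3.0) * (T_h2 - T_h)
true_error = (math.e - 1.0) - T_h
```

For this smooth integrand and $h=1/16$ the ratio is already very close to 4, and the estimate (1.30) agrees with the true error to many digits.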
1.2. AN ILLUSTRATIVE EXAMPLE

We can view this error correction procedure as a way to eliminate the leading order (in $h$) contribution to the error. Multiplying (1.27) by 4 and subtracting (1.26) from the result we get
\[
I[f] = \frac{4T_{h/2}[f] - T_h[f]}{3} + \frac{4R(h/2) - R(h)}{3}. \tag{1.34}
\]
Note that $S_h[f]$ is exactly the first term on the right hand side of (1.34) and that the last term converges to zero faster than $h^2$. This very useful and general procedure, in which the leading order component of the asymptotic form of the error is eliminated by a combination of two computations performed with two different values of $h$, is called Richardson's Extrapolation.

Example 2. Consider again $f(x) = e^x$ in $[0,1]$. With $h = 1/16$ we get
\[
q\left(\tfrac{1}{16}\right) = \frac{T_{1/32}[e^x] - T_{1/16}[e^x]}{T_{1/64}[e^x] - T_{1/32}[e^x]} \approx 3.9998 \tag{1.35}
\]
and the improved approximation is
\[
S_{1/16}[e^x] = T_{1/16}[e^x] + \frac{4}{3}\left(T_{1/32}[e^x] - T_{1/16}[e^x]\right) = 1.718281837561771, \tag{1.36}
\]
which gives us nearly 8 digits of accuracy (error $\approx 9.1 \times 10^{-9}$). $S_{1/32}$ gives us an error $\approx 5.7 \times 10^{-10}$; it decreased by approximately a factor of $1/16$. This would correspond to a fourth order rate of convergence. We will see in Chapter 8 that indeed this is the case.

It appears that $S_h[f]$ gives us superior accuracy to that of $T_h[f]$, but at roughly twice the computational cost. If we group together the common terms in $T_h[f]$ and $T_{h/2}[f]$, we can compute $S_h[f]$ at about the same computational cost as that of $T_{h/2}[f]$:
\begin{align*}
4T_{h/2}[f] - T_h[f]
&= 4\,\frac{h}{2}\left[\frac{1}{2}f(a) + \sum_{j=1}^{2N-1} f\!\left(a + j\tfrac{h}{2}\right) + \frac{1}{2}f(b)\right]
 - h\left[\frac{1}{2}f(a) + \sum_{j=1}^{N-1} f(a + jh) + \frac{1}{2}f(b)\right] \\
&= \frac{h}{2}\left[f(a) + f(b) + 2\sum_{k=1}^{N-1} f(a + kh) + 4\sum_{k=1}^{N} f\!\left(a + \left(k - \tfrac{1}{2}\right)h\right)\right].
\end{align*}
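The extrapolated value (1.33) and the fourth-order convergence observed in Example 2 are easy to verify numerically. Below is a short Python sketch (our own code, with our own function names) that reproduces the behavior of $S_{1/16}[e^x]$ and $S_{1/32}[e^x]$:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

def richardson(f, a, b, n):
    """S_h[f] = T_h[f] + (4/3)(T_{h/2}[f] - T_h[f]), eq. (1.33)."""
    T_h, T_h2 = trapezoid(f, a, b, n), trapezoid(f, a, b, 2 * n)
    return T_h + (4.0 / 3.0) * (T_h2 - T_h)

exact = math.e - 1.0   # integral of e^x over [0, 1]
err_16 = abs(richardson(math.exp, 0.0, 1.0, 16) - exact)  # S_{1/16}
err_32 = abs(richardson(math.exp, 0.0, 1.0, 32) - exact)  # S_{1/32}

# Halving h should reduce the error by about 2^4 = 16 for a
# fourth-order method.
ratio = err_16 / err_32
```

Running this gives an error near $9.1 \times 10^{-9}$ for $h = 1/16$ and an error ratio close to 16, consistent with the fourth-order rate claimed in the text.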
Therefore
\[
S_h[f] = \frac{h}{6}\left[f(a) + 2\sum_{k=1}^{N-1} f(a + kh) + 4\sum_{k=1}^{N} f\!\left(a + \left(k - \tfrac{1}{2}\right)h\right) + f(b)\right]. \tag{1.37}
\]
The resulting quadrature formula $S_h[f]$ is known as the Composite Simpson's Rule and, as we will see in Chapter 8, can be derived by approximating the integrand by quadratic polynomials. Thus, based on cost and accuracy, the Composite Simpson's Rule would be preferable to the Composite Trapezoidal Rule, with one important exception: periodic smooth integrands integrated over their period.

Example 3. Consider the integral
\[
I[1/(2 + \sin x)] = \int_0^{2\pi} \frac{dx}{2 + \sin x}. \tag{1.38}
\]
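A single-pass implementation of (1.37) can be sketched in Python, and used to preview the periodic exception mentioned above. The helper names below are ours; the closed-form value $2\pi/\sqrt{3}$ used for the integral (1.38) is the standard result for $\int_0^{2\pi} dx/(a + b\sin x)$ with $a = 2$, $b = 1$.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule, eq. (1.37): n subintervals of width h = (b-a)/n,
    each sampled at its endpoints and midpoint."""
    h = (b - a) / n
    interior  = sum(f(a + k * h) for k in range(1, n))           # nodes a + kh
    midpoints = sum(f(a + (k - 0.5) * h) for k in range(1, n + 1))  # nodes a + (k-1/2)h
    return (h / 6.0) * (f(a) + 2.0 * interior + 4.0 * midpoints + f(b))

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + j * h) for j in range(1, n)))

# Example 3: smooth periodic integrand over its period, eq. (1.38).
g = lambda x: 1.0 / (2.0 + math.sin(x))
exact = 2.0 * math.pi / math.sqrt(3.0)   # known closed form of (1.38)

# Same number of function evaluations (33 nodes) for both rules:
err_trap = abs(trapezoid(g, 0.0, 2.0 * math.pi, 32) - exact)
err_simp = abs(simpson(g, 0.0, 2.0 * math.pi, 16) - exact)
```

At equal cost, the trapezoidal rule beats Simpson's rule by many orders of magnitude on this periodic integrand, which is the exception the text is about to explore.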
