CAAM 453 NUMERICAL ANALYSIS I
Lecture 24: Richardson Extrapolation and Romberg Integration

Throughout numerical analysis, one encounters procedures that apply some simple approximation (e.g., linear interpolation) to construct some equally simple algorithm (e.g., the trapezoid rule). An unfortunate consequence is that such approximations often converge slowly, with errors decaying only like $h$ or $h^2$, where $h$ is some discretization parameter (e.g., the spacing between interpolation points).

In this lecture we describe a remarkable, fundamental tool of classical numerical analysis. Like alchemists who sought to convert lead into gold, we will take a sequence of slowly convergent data and extract from it a highly accurate estimate of our solution. This procedure is Richardson extrapolation, an essential but easily overlooked technique that should be part of every numerical analyst's toolbox. When applied to quadrature rules, the procedure is called Romberg integration.

4.3. Richardson extrapolation.

We begin in a general setting: Suppose we wish to compute some abstract quantity, $x_*$, which could be an integral, a derivative, the solution to a differential equation at a certain point, or something else entirely. Further suppose we cannot compute $x_*$ exactly; we can only access numerical approximations to it, generated by some function $\phi$ that depends upon a mesh parameter $h$. We compute $\phi(h)$ for several values of $h$, expecting that $\phi(h) \to \phi(0) = x_*$ as $h \to 0$.

To obtain good accuracy, one naturally seeks to evaluate $\phi$ with increasingly smaller values of $h$. There are two reasons not to do this: (1) $\phi$ often becomes increasingly expensive to evaluate as $h$ shrinks; (2) the numerical accuracy with which we can evaluate $\phi$ may deteriorate as $h$ gets small, due to rounding errors in floating point arithmetic. (For an example of the latter, try computing estimates of $f'(x)$ using the formula $f'(x) \approx (f(x+h) - f(x))/h$ as $h \to 0$; a short numerical experiment illustrating this appears at the end of this excerpt.)

Assume that $\phi$ is infinitely continuously differentiable as a function of $h$, thus allowing us to expand $\phi(h)$ in the Taylor series
$$\phi(h) = \phi(0) + h\,\phi'(0) + \tfrac{1}{2}h^2\phi''(0) + \tfrac{1}{6}h^3\phi'''(0) + \cdots.$$
The derivatives here may seem to complicate matters (e.g., what are the derivatives of a quadrature rule with respect to $h$?), but we shall not need to compute them: the key is that the function $\phi$ behaves smoothly in $h$. Recalling that $\phi(0) = x_*$, we can rewrite the Taylor series for $\phi(h)$ as
$$\phi(h) = x_* + c_1 h + c_2 h^2 + c_3 h^3 + \cdots$$
for some constants $\{c_j\}_{j=1}^{\infty}$.

This expansion implies that taking $\phi(h)$ as an approximation for $x_*$ incurs an $O(h)$ error. Halving the parameter $h$ should roughly halve the error, according to the expansion
$$\phi(h/2) = x_* + c_1\tfrac{1}{2}h + c_2\tfrac{1}{4}h^2 + c_3\tfrac{1}{8}h^3 + \cdots.$$
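The rounding-error caveat in reason (2) above is easy to observe numerically. The following short Python experiment (not part of the original notes; the choice of $f(x) = \sin x$ at $x = 1$ is ours) computes forward-difference estimates of $f'(x)$ for shrinking $h$. The error first decays like $h$, then grows once cancellation in $f(x+h) - f(x)$ dominates.

    import numpy as np

    # Forward-difference estimates of f'(x) for f(x) = sin(x) at x = 1; exact value is cos(1).
    # As h shrinks, the truncation error decreases like h, but cancellation in
    # f(x + h) - f(x) eventually dominates and the computed estimate deteriorates.
    f, x, exact = np.sin, 1.0, np.cos(1.0)

    for k in range(1, 17):
        h = 10.0 ** (-k)
        estimate = (f(x + h) - f(x)) / h
        print(f"h = 1e-{k:02d}   |estimate - f'(x)| = {abs(estimate - exact):.3e}")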
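The preview ends here, but the two expansions above already suggest the first extrapolation step: the standard combination $2\phi(h/2) - \phi(h)$ cancels the leading $c_1 h$ term, leaving an $O(h^2)$ approximation to $x_*$. The sketch below illustrates this with the forward-difference quotient playing the role of $\phi$; it is a hedged illustration of the general technique, and the test function $f(x) = \sin x$, the point $x = 1$, and the helper name phi are our choices rather than anything from the notes.

    import numpy as np

    def phi(h, f=np.sin, x=1.0):
        # First-order forward difference: phi(h) = x* + c1*h + c2*h^2 + ... with x* = f'(x).
        return (f(x + h) - f(x)) / h

    exact = np.cos(1.0)                      # x* for this toy problem
    for h in [0.1, 0.05, 0.025]:
        plain = phi(h)                       # O(h) error
        extrap = 2.0 * phi(h / 2) - phi(h)   # c1*h term cancels, leaving O(h^2) error
        print(f"h = {h:5.3f}   plain error = {abs(plain - exact):.2e}   "
              f"extrapolated error = {abs(extrap - exact):.2e}")

Halving $h$ roughly halves the plain error but roughly quarters the extrapolated error, which is exactly the gain the lecture's derivation goes on to exploit systematically.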