Computing derivatives of functions:
- Forward & backward difference
- Higher-order derivatives
- Derivatives on a mesh
- Richardson extrapolation to improve accuracy

HW#4 due Thursday; make sure to get the addendum (#3) on the web. You should be reading Appendix A.1.

Forward and Backward Difference Approximations
Cost: here the error is of order h (not h^2), so the approximation is not as good as the central difference. Benefit: self-starting (you don't need values both before AND after the point at which you are interested).

Forward difference:  f'(x0) = [f(x0 + h) - f(x0)] / h + O(h)
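The notes refer to fda and bda helper functions but do not show their implementation; a minimal sketch consistent with the formulas above might look like this (the signatures and the default step h are assumptions, not from the lecture):

```python
def fda(f, x, h=1e-5):
    """Forward difference: (f(x+h) - f(x)) / h, error O(h)."""
    return (f(x + h) - f(x)) / h

def bda(f, x, h=1e-5):
    """Backward difference: (f(x) - f(x-h)) / h, error O(h)."""
    return (f(x) - f(x - h)) / h
```

For example, fda(lambda x: x**2, 1.0) returns approximately 2.0, the exact derivative at x = 1, with an error proportional to h.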
Backward difference:  f'(x0) = [f(x0) - f(x0 - h)] / h + O(h)

The second derivative is the derivative of the derivative of a function. It is a measure of the CURVATURE (the rate of change of the rate of change) of the function at that point. A second-order-accurate second derivative can be found by ADDING the two Taylor expansions (about x0 + h and x0 - h) and rearranging for f''(x0):

f''(x0) = [f(x0 - h) - 2 f(x0) + f(x0 + h)] / h^2 + O(h^2)
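The second-derivative formula above can be sketched as follows (the function name cda2 and the default step are assumptions, not from the lecture):

```python
def cda2(f, x, h=1e-4):
    """Central second derivative: adding the Taylor expansions about x+h and
    x-h cancels the odd-order terms, leaving an O(h^2) error."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2
```

For f(x) = x**3 we have f''(2) = 12, and cda2(lambda x: x**3, 2.0) returns approximately 12. Note that the h**2 in the denominator amplifies roundoff, so h cannot be made arbitrarily small.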
How small should h be?
Theory suggests that the smaller h is, the better the approximation. In practice, however, there is a limit to how small an h you can choose, because computers evaluate numbers only to a limited precision. The problem comes when there is no numerical difference between x and x + h. How small is too small depends on the floating-point format: for single precision (32-bit floats) the machine precision is about 2^-23 ~ 10^-7; for double precision (64-bit floats) it is smaller, about 2^-52 ~ 2x10^-16.

Derivatives on a mesh
Generally we're not interested in the derivative of a function at a single point, but over a range.

Case 1: We know an analytical form for the function. Then just call the cda, fda, bda functions with x being an array rather than a single value; the functions will return arrays of derivatives with no change. Here the h value is arbitrary: since we know the function, we can evaluate it at any value of x.

Case 2: We only have a collection of (x, y) points with no known analytical function relating them (maybe simulation-generated or experimental data). On interior points (not the first or last) we can use the central difference; at the first point use the forward difference, and at the last point use the backward difference. Here h is NOT arbitrary, but is defined by the spacing of the points (e.g. h = x[1] - x[0]).

Other options: (1) fit the data to some function (exact or approximate) and take the derivative of that function; (2) interpolate the data with a smooth curve (e.g. a cubic spline).

Richardson extrapolation
A general method to improve the accuracy of estimated quantities. Let g be the computed value, G the exact value, and E the error, so G = g + E. Suppose E depends on a quantity (such as the grid spacing h) in a power-law fashion:

E(h) = c h^p

Then for two different values of h:

G(h1) = g(h1) + c h1^p
G(h2) = g(h2) + c h2^p

Now eliminate c to get the Richardson extrapolation equation:

G = [(h1/h2)^p g(h2) - g(h1)] / [(h1/h2)^p - 1]

Apply this to the central difference approximation.
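The general Richardson formula above translates directly into code; a minimal sketch (the name richardson is an assumption; g is any estimator whose error scales as c*h**p):

```python
def richardson(g, h1, h2, p):
    """Combine two estimates g(h1) and g(h2), each with error c*h**p,
    so that the leading error term cancels."""
    r = (h1 / h2) ** p          # (h1/h2)^p from the derivation above
    return (r * g(h2) - g(h1)) / (r - 1.0)
```

If g(h) = G + c*h**p exactly, the formula recovers G exactly: for g = lambda h: 5.0 + 3.0*h**2, richardson(g, 0.1, 0.05, 2) returns 5.0.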
The central difference is:

f'(x) = [f(x + h) - f(x - h)] / (2h) + O(h^2)

so with G ~ g(h) + E(h), we have p = 2. Use h2 = h1/2: we need to call the CDA twice (once with h1 and again with h1/2), and the Richardson equation becomes:

G = [2^p g(h1/2) - g(h1)] / (2^p - 1)

After Richardson extrapolation is applied, the new error is of order O(h^4). The cost is that g must be evaluated twice.
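Putting the pieces together for the central difference with h2 = h1/2 and p = 2 (the cda name appears in the notes; cda_richardson is a hypothetical wrapper):

```python
import math

def cda(f, x, h):
    """Central difference approximation, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def cda_richardson(f, x, h, p=2):
    """Evaluate the CDA twice (with h and h/2) and combine via Richardson
    extrapolation; the error drops from O(h^2) to O(h^4)."""
    return (2**p * cda(f, x, h / 2) - cda(f, x, h)) / (2**p - 1)

# Compare errors for f = exp at x = 1, where the exact derivative is e:
exact = math.e
err_cda = abs(cda(math.exp, 1.0, 0.1) - exact)
err_rich = abs(cda_richardson(math.exp, 1.0, 0.1) - exact)
```

With h = 0.1, err_rich is several orders of magnitude smaller than err_cda, illustrating the O(h^2) to O(h^4) improvement at the cost of one extra CDA evaluation.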
 Spring '09
 Gladden
