Lecture 27: Numerical Differentiation

Approximating derivatives from data

Suppose that a variable y depends on another variable x, i.e. y = f(x), but we only know the values of f at a finite set of points, e.g., as data from an experiment or a simulation:

(x_1, y_1), (x_2, y_2), ..., (x_n, y_n).

Suppose then that we need information about the derivative of f(x). One obvious idea is to approximate f'(x_i) by the Forward Difference:

\[ f'(x_i) \approx \frac{y_{i+1} - y_i}{x_{i+1} - x_i}. \]

This formula follows directly from the definition of the derivative in calculus. An alternative is to use a Backward Difference:

\[ f'(x_i) \approx \frac{y_i - y_{i-1}}{x_i - x_{i-1}}. \]
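As a concrete illustration, here is a minimal sketch in Python; the array names x and y and the sin(x) sample data are assumptions chosen for the example, not part of the lecture:

import numpy as np

# Hypothetical example data: y = sin(x) sampled at a few uneven points.
x = np.array([0.0, 0.3, 0.7, 1.0, 1.6, 2.0])
y = np.sin(x)

# One difference quotient per interval [x_i, x_{i+1}].
quot = (y[1:] - y[:-1]) / (x[1:] - x[:-1])

# Read as forward differences, quot[i] approximates f'(x[i]) for
# i = 0, ..., n-2; read as backward differences, the very same quot[i]
# approximates f'(x[i+1]). The two formulas produce the same quotients,
# attached to opposite endpoints of each interval.
print(quot)        # difference-quotient estimates
print(np.cos(x))   # true derivative at every point, for comparison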
Since the errors of the Forward Difference and Backward Difference tend to have opposite signs, it seems likely that averaging the two methods would give a better result than either alone. If the points are evenly spaced, i.e. x_{i+1} - x_i = x_i - x_{i-1} = h, then averaging the forward and backward differences leads to a symmetric expression called the Central Difference:

\[ f'(x_i) \approx \frac{y_{i+1} - y_{i-1}}{2h}. \]
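To see why the average helps, here is a small sketch comparing the forward and central differences at a single point; the test function sin, the point x0, and the step sizes are assumptions chosen for the example:

import numpy as np

f, fprime = np.sin, np.cos   # test function with known derivative
x0 = 1.0

for h in [0.1, 0.05, 0.025]:
    fwd = (f(x0 + h) - f(x0)) / h             # forward difference
    ctr = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
    print(h, abs(fwd - fprime(x0)), abs(ctr - fprime(x0)))

# Halving h roughly halves the forward-difference error, while the
# central-difference error drops by about a factor of four.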

Errors of approximation

We can use Taylor polynomials to derive the accuracy of the forward, backward, and central difference formulas, starting from the usual form of the Taylor polynomial with remainder (sometimes called Taylor's theorem).
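As a sketch of how that calculation goes for the Forward Difference on evenly spaced points (standard textbook material, not quoted from the lecture; c denotes the usual intermediate point in the remainder term):

\[
f(x_i + h) = f(x_i) + h\,f'(x_i) + \frac{h^2}{2} f''(c)
\quad \text{for some } c \in (x_i,\, x_i + h),
\]
so moving $f(x_i)$ to the left and dividing by $h$ gives
\[
\frac{y_{i+1} - y_i}{h} = f'(x_i) + \frac{h}{2} f''(c),
\]
that is, the Forward Difference (and, by the same argument, the Backward Difference) has error of order $O(h)$. Expanding one term further at $x_i + h$ and $x_i - h$ and subtracting cancels the even-order terms, which shows the Central Difference has error of order $O(h^2)$.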
