Recursion: Binary Search
Computing Derivatives of Functions: Theory, Central Difference, Forward & Backward Difference

HW#3 due today; look for solutions this afternoon. Need to begin thinking about a midterm project.

Binary Search

Binary searches are an efficient way to search a large sorted list of objects. Given a sorted list S, search for a number N in the list:
Start with the midpoint of the list S and see if that element is the number.
If not: if N is lower than the midpoint value, repeat the search with the lower half of the list; if N is higher than the midpoint value, repeat the search in the upper half of the list.
Repeat until the number is found and return its location, or return -1 if it is not found.

    def BinSearch(S, N, Low, High):
        # Base case: empty search range means N is not in the list.
        if Low > High:
            print('The number %i is not in the list.' % N)
            return -1
        Mid = (Low + High) // 2
        if N == S[Mid]:
            print('The number %i was found at position %i in the list.' % (N, Mid))
            return Mid
        elif N < S[Mid]:
            return BinSearch(S, N, Low, Mid - 1)   # search the lower half
        else:
            return BinSearch(S, N, Mid + 1, High)  # search the upper half

    import random
    S = [random.randint(0, 100) for i in range(1000)]
    S.sort()

Computing Derivatives: Theory

Remember, the derivative of a function f(x) at a point x0 is defined as the slope of the tangent line to the function at that point:

    df/dx = lim (Δx→0) Δf/Δx

It is a measure of the rate at which the function is changing. Common (exact) derivatives of functions: [table omitted].

The theory behind numerical approximation of derivatives is based on the Taylor series expansion. For a small value of h, f(x0 + h) and f(x0 − h) can be approximated by the series:

    f(x0 + h) ≈ f(x0) + h f'(x0) + (h²/2!) f''(x0) + (h³/3!) f'''(x0) + ...
    f(x0 − h) ≈ f(x0) − h f'(x0) + (h²/2!) f''(x0) − (h³/3!) f'''(x0) + ...

Now subtract the second series from the first:
    f(x0 + h) − f(x0 − h) ≈ 2h f'(x0) + (h³/3) f'''(x0) = 2h f'(x0) + O(h³)
Now rearrange to solve for f'(x0):
    f'(x0) = [f(x0 + h) − f(x0 − h)] / (2h) − O(h²)

Computers are not good at calculating INFINITE series (who is?), so we need to cut the series off at some point. If h is small, then h² is VERY small, so cut off the series after the first term to get an approximate solution for the derivative. The error in our estimate of f'(x0) then shrinks in proportion to h² as h decreases.

    def cda(f, x, h):
        # Central difference approximation to f'(x).
        return (f(x + h) - f(x - h)) / (2 * h)

The forward and backward difference approximations have a cost: their error is of order h (not h²), so the approximation is not as good. Their benefit: they are self-starting (you don't need values both before AND after the point at which you are interested).

Forward Difference Approximation:

    f'(x0) = [f(x0 + h) − f(x0)] / h − O(h)
Backward Difference Approximation:

    f'(x0) = [f(x0) − f(x0 − h)] / h + O(h)

The second derivative is a derivative of the derivative of a function. It is a measure of the CURVATURE (rate of change of the rate of change) of the function at that point. A second-derivative approximation can be found by ADDING the two Taylor expansions and rearranging for f''(x0):

    [f(x0 − h) − 2f(x0) + f(x0 + h)] / h² ≈ f''(x0) + O(h²)
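The four difference formulas can be collected into one short sketch. The function names fda, bda, and d2 are labels chosen here for illustration (only cda appears in the notes); the check against f(x) = sin(x) is likewise an added example:

```python
import math

def cda(f, x, h):
    # Central difference: truncation error of order h^2
    return (f(x + h) - f(x - h)) / (2 * h)

def fda(f, x, h):
    # Forward difference: truncation error of order h
    return (f(x + h) - f(x)) / h

def bda(f, x, h):
    # Backward difference: truncation error of order h
    return (f(x) - f(x - h)) / h

def d2(f, x, h):
    # Central second difference: truncation error of order h^2
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

# Check against f(x) = sin(x), where f'(0.5) = cos(0.5) and f''(0.5) = -sin(0.5)
x, h = 0.5, 1e-4
print(abs(cda(math.sin, x, h) - math.cos(x)))  # small (h^2-sized) error
print(abs(fda(math.sin, x, h) - math.cos(x)))  # noticeably larger (h-sized) error
print(abs(d2(math.sin, x, h) + math.sin(x)))   # small error
```

Running this shows the central difference beating the one-sided differences by several orders of magnitude at the same h, which is exactly the O(h²) vs. O(h) distinction above.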
Theory suggests that the smaller h is, the better the approximation. In practice, however, there is a limit to how small an h you can choose, because computers can only evaluate numbers to a limited precision. The problem comes when there is no numerical difference between x and x + h. How small is too small? It depends on the computer system. For single precision (32-bit floats), the machine precision is about 2^-23 (roughly 1.2×10^-7). For double precision (64-bit floats) it is smaller, about 2^-52 (roughly 2.2×10^-16).

Exercise: Write a program to measure the error between the derivative computed by the CDA and FDA methods and the exact value, for decreasing values of h. Try f(x) = cos(x) with f'(x) evaluated at x = 0.5. How does the error trend with h? (Make plots of error vs. h.) How small does h have to be before the error starts to increase?
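A minimal sketch of the exercise, printing a table of errors instead of plotting (no plotting library is named in the notes, so plain print output is assumed here):

```python
import math

def cda(f, x, h):
    # Central difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def fda(f, x, h):
    # Forward difference approximation to f'(x)
    return (f(x + h) - f(x)) / h

exact = -math.sin(0.5)  # d/dx cos(x) = -sin(x), evaluated at x = 0.5

print("%8s %12s %12s" % ("h", "CDA err", "FDA err"))
for k in range(1, 13):
    h = 10.0 ** (-k)
    print("%8.0e %12.3e %12.3e" % (
        h,
        abs(cda(math.cos, 0.5, h) - exact),
        abs(fda(math.cos, 0.5, h) - exact)))
```

The table shows the expected trend: as h shrinks, the truncation error (proportional to h² for CDA, h for FDA) falls, until round-off error (roughly machine precision divided by h) takes over and the total error starts to climb again.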
 Spring '09
 Gladden