Least Squares Fitting of Non-Linear Functions

Least squares fitting is a method that fits the parameters of a function such that the function provides a best fit to a set of data points. The simplest example is, of course, linear regression, where we fit a function

$$Y(x) = a + b \cdot x \qquad (1)$$

We have data points as value pairs $(x_i, y_i)$. To fit the parameters $a$ and $b$ we find those which minimize the sum of the squared deviations

$$S = \sum_{i=1}^{n} \left( y_i - Y(a,b,x_i) \right)^2 \qquad (2)$$

between the measured values $y_i$ and the calculated values $Y(a,b,x_i)$. Note that $Y$ is now also considered a function of the parameters $a$ and $b$, because in the fit they are varied to minimize $S$. The minimum is found by finding the point $(a,b)$ where the derivatives of $S$ with respect to $a$ and $b$ are zero:

$$-\frac{1}{2}\,\frac{\partial}{\partial a}\sum_{i=1}^{n}\left( y_i - Y(a,b,x_i) \right)^2 = 0, \qquad -\frac{1}{2}\,\frac{\partial}{\partial b}\sum_{i=1}^{n}\left( y_i - Y(a,b,x_i) \right)^2 = 0$$

$$\sum_{i=1}^{n}\left( y_i - Y(a,b,x_i) \right)\frac{\partial Y(a,b,x_i)}{\partial a} = 0, \qquad \sum_{i=1}^{n}\left( y_i - Y(a,b,x_i) \right)\frac{\partial Y(a,b,x_i)}{\partial b} = 0$$

With $\partial Y/\partial a = 1$ and $\partial Y/\partial b = x_i$ this becomes

$$\sum_{i=1}^{n}\left( y_i - (a + b \cdot x_i) \right) = 0, \qquad \sum_{i=1}^{n}\left( y_i - (a + b \cdot x_i) \right) \cdot x_i = 0$$

$$\sum_{i=1}^{n} y_i - n \cdot a - b \cdot \sum_{i=1}^{n} x_i = 0, \qquad \sum_{i=1}^{n} y_i \cdot x_i - a \sum_{i=1}^{n} x_i - b \cdot \sum_{i=1}^{n} x_i^2 = 0$$

We solve both equations for $a$:

$$a = \frac{1}{n}\left( \sum_{i=1}^{n} y_i - b \cdot \sum_{i=1}^{n} x_i \right), \qquad a = \frac{\sum_{i=1}^{n} y_i \cdot x_i - b \cdot \sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} x_i} \qquad (3)$$
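Equation (2) defines the quantity that the fit minimizes. As a minimal sketch in Python (the function name and the sample data are illustrative, not from this note), one can check directly that $S$ is small for parameters near the best fit and large for a poor guess:

```python
# Evaluate the sum of squared deviations S(a, b) from Eq. (2)
# for the straight line Y(x) = a + b*x of Eq. (1).
# sum_squared_deviations and the data below are illustrative.

def sum_squared_deviations(a, b, xs, ys):
    """S(a, b) = sum_i (y_i - (a + b*x_i))^2"""
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]  # roughly y = 1 + 2x with a little noise

# S is small near the underlying parameters, large for a poor guess:
print(sum_squared_deviations(1.0, 2.0, xs, ys))
print(sum_squared_deviations(0.0, 0.0, xs, ys))
```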
Setting the two expressions for $a$ in equation (3) equal and solving for $b$:

$$\frac{1}{n}\left( \sum_{i=1}^{n} y_i - b \cdot \sum_{i=1}^{n} x_i \right) = \frac{\sum_{i=1}^{n} y_i \cdot x_i - b \cdot \sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} x_i}$$

$$b \cdot \left( \frac{\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n} x_i} - \frac{1}{n}\sum_{i=1}^{n} x_i \right) = \frac{\sum_{i=1}^{n} y_i \cdot x_i}{\sum_{i=1}^{n} x_i} - \frac{1}{n}\sum_{i=1}^{n} y_i$$

$$b \cdot \frac{\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left( \sum_{i=1}^{n} x_i \right)^2}{\sum_{i=1}^{n} x_i} = \frac{\sum_{i=1}^{n} y_i \cdot x_i - \frac{1}{n}\sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i}$$

Cancelling $\sum_{i=1}^{n} x_i$ on both sides:

$$b = \frac{\sum_{i=1}^{n} y_i \cdot x_i - \frac{1}{n}\sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left( \sum_{i=1}^{n} x_i \right)^2}$$

So with this and equation (3) we have our equations to calculate the values $a$ and $b$ which give the best fit for our data:

$$b = \frac{\sum_{i=1}^{n} y_i \cdot x_i - \frac{1}{n}\sum_{i=1}^{n} x_i \sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i^2 - \frac{1}{n}\left( \sum_{i=1}^{n} x_i \right)^2}, \qquad a = \frac{1}{n}\left( \sum_{i=1}^{n} y_i - b \cdot \sum_{i=1}^{n} x_i \right) \qquad (4)$$

But how accurate are the values, i.e. what are their "error bars"? A value can only be compared with other values if we know its precision. Only then can we accept or reject a hypothesis.
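The closed-form result of equation (4), together with equation (3) for $a$, translates directly into code. A short sketch in Python (`fit_line` and the test data are illustrative names and values, not from this note):

```python
# Closed-form least-squares line fit: b from Eq. (4), a from Eq. (3).
# fit_line is an illustrative name; the data below lie exactly on y = 1 + 2x.

def fit_line(xs, ys):
    n = len(xs)
    sx = sum(xs)                             # sum of x_i
    sy = sum(ys)                             # sum of y_i
    sxy = sum(x * y for x, y in zip(xs, ys)) # sum of x_i * y_i
    sxx = sum(x * x for x in xs)             # sum of x_i^2
    # Eq. (4): b = (sum x_i y_i - (1/n) sum x_i sum y_i)
    #             / (sum x_i^2 - (1/n) (sum x_i)^2)
    b = (sxy - sx * sy / n) / (sxx - sx * sx / n)
    # Eq. (3): a = (1/n) (sum y_i - b * sum x_i)
    a = (sy - b * sx) / n
    return a, b

# Exact data on the line y = 1 + 2x recovers a = 1, b = 2:
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)
```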
This note was uploaded on 03/29/2009 for the course A&EP 470 taught by Professor Lindau during the Fall '08 term at Cornell University (Engineering School).
