'Quality of Fit' for non-linear regression

Outline:
- General thoughts
- Generating estimates from curve_fit output
- Introduction to ordinary differential equations
- Euler method
- Euler-Cromer method

Announcements: Midterm 'proposals' due today! Get HW#6 off the website. Reading for differential equations is in Appendix B.

There are several approaches to fitting; some are easier to use and some are more robust.

curve_fit in scipy.optimize
  use: fit = curve_fit(funct, xdata, ydata, p0=params0)
  comments: not quite as convenient as some alternatives, but more robust -- it will almost always converge.
  returns: a tuple of the fitted parameters and the variance-covariance (VC) matrix.

curve_fit() is a 'wrapper' function for scipy.optimize.leastsq. Using leastsq() directly will provide MUCH more detail about the statistics of the fit. However, we can compute a ROUGH estimate of the quality of the fit from the curve_fit() output.

Example model: a damped sine function,

  y = A e^(-t/τ) sin(ω t + φ)

Four free parameters (A, τ, ω, φ) -- need at least 12 data points to fit.
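A minimal sketch of this fit (the synthetic data, parameter values, and the function name damped_sine are illustrative assumptions, not from the slides):

```python
# Sketch: fitting the damped-sine model y = A*exp(-t/tau)*sin(w*t + phi)
# with scipy.optimize.curve_fit. Data here is synthetic stand-in data.
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, tau, w, phi):
    """Model with four free parameters: A, tau, w, phi."""
    return A * np.exp(-t / tau) * np.sin(w * t + phi)

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)
y = damped_sine(t, 2.0, 4.0, 3.0, 0.5) + 0.05 * rng.normal(size=t.size)

# p0 is the initial parameter guess; curve_fit returns the best-fit
# parameters and the variance-covariance (VC) matrix.
params0 = [1.5, 3.0, 2.8, 0.3]
popt, pcov = curve_fit(damped_sine, t, y, p0=params0)
print(popt)        # fitted [A, tau, w, phi]
print(pcov.shape)  # 4x4 VC matrix for 4 free parameters
```

Note that a reasonable starting guess (p0) matters for oscillatory models: a frequency guess far from the true value can pull the fit into a wrong local minimum.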
A linear fit has a correlation coefficient r². We can compute a similar quantity for nonlinear fits from the ratio of the sum of squares of residuals (SSR) to the total sum of squares (SST):

  SSR = Σ_{i=1}^{N} [y_i − F(x_i; params)]²
  SST = Σ_{i=1}^{N} [y_i − ȳ]²
  r² = 1 − SSR/SST

So how accurate are the fitted parameters? That is a complicated question. In an ideal world, you would run many fits, perturbing the data within its error bars, and compute the standard deviation of the resulting parameters. A simpler (and less accurate) method is to multiply the diagonal elements of the variance-covariance (VC) matrix by the reduced sum of squares (reduced chi-square) and take the square root.

The VC matrix returned by curve_fit is an n×n matrix for n free parameters. The diagonals give the variance of each parameter, and the off-diagonals give the covariance between parameters ('How much does a change in A affect the final value of B?'). If I have three free parameters (a, b, c):

  cov = [ aa  ab  ac ]
        [ ab  bb  bc ]
        [ ac  bc  cc ]

For N data points and m parameters, the reduced SSR is

  χ² = SSR / (N − m)
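The r² computation above can be sketched directly (function name and sample numbers are illustrative assumptions):

```python
# Sketch: r^2 = 1 - SSR/SST for a nonlinear fit.
import numpy as np

def r_squared(ydata, ymodel):
    """SSR is the sum of squared residuals against the model;
    SST is the total sum of squares about the mean of the data."""
    ssr = np.sum((ydata - ymodel) ** 2)
    sst = np.sum((ydata - np.mean(ydata)) ** 2)
    return 1.0 - ssr / sst

y = np.array([1.0, 2.1, 2.9, 4.2])   # measured data
f = np.array([1.1, 2.0, 3.0, 4.0])   # model predictions F(x_i; params)
print(r_squared(y, f))               # close to 1 for a good fit
```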
Then an approximate error for each fitted parameter is:

  δa = √(aa · χ²) ;  δb = √(bb · χ²) ;  δc = √(cc · χ²)

Often in science, we are interested in how changes in one quantity depend on other quantities in the system. Example: how a wolf population changes with time depends on how many rabbits (food) there are. Ordinary differential equations express this dependence mathematically. 'Ordinary' (as opposed to 'partial') because there is only one independent variable.

General form:

  du/dx = F(x)
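A sketch of this error estimate (the helper name param_errors and the toy numbers are assumptions for illustration):

```python
# Sketch: approximate parameter errors from the VC matrix diagonal,
# scaled by the reduced chi-square chi2 = SSR/(N - m).
import numpy as np

def param_errors(pcov, ydata, ymodel, m):
    """delta_i = sqrt(cov_ii * chi2)."""
    N = len(ydata)
    ssr = np.sum((ydata - ymodel) ** 2)
    chi2 = ssr / (N - m)
    return np.sqrt(np.diag(pcov) * chi2)

# Toy example: 5 data points, 2 parameters, diagonal VC matrix.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([1.1, 1.9, 3.0, 4.1, 5.0])
errs = param_errors(np.diag([4.0, 9.0]), y, f, m=2)
print(errs)
```

One caveat worth knowing: recent SciPy versions already apply this reduced-chi-square scaling to the returned pcov by default (when absolute_sigma=False), so applying it again would double-count.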
A 'solution' is then determining u as a function of x: u(x).

Say we have the ODE

  du/dt = t³ with u(0) = 1

We can rearrange and solve by integrating; the constant of integration is fixed by the boundary (or initial) condition:

  u(t) = u(0) + ∫₀ᵗ τ³ dτ = u(0) + t⁴/4

Final solution:

  u(t) = 1 + t⁴/4

It satisfies the initial condition and the original differential equation. So we can use our numerical integration methods to solve ODEs. Use the trapezoid rule to evaluate the integral (time step = h):
  ∫₀ᵗ f(τ) dτ ≈ h Σ_{i=1}^{n−1} [f(τᵢ) + f(τᵢ₊₁)] / 2

Similarly, we can use our various integration schemes to solve ODEs to varying levels of accuracy.

Another possibility is that how a variable changes depends on the variable itself. Here we can discretize the derivative and rearrange the ODE to solve for the next time step from the solution at the previous time step.

Consider a general ODE with initial condition:

  du/dt = f(u(t)) with u(0) = u₀

We are interested in a solution from t = 0 to t = T in n time steps:

  Δt = T/n

Notation for the solution at time step k: u_k ≡ u(t_k). Using a finite difference, the ODE becomes

  (u_{k+1} − u_k)/Δt = f(u_k)

so finally the solution at time k+1 is

  u_{k+1} = u_k + Δt f(u_k)

We need the value of u at the k = 0 time step to get this started (the initial condition).

In many systems, the rate at which a quantity changes depends linearly on the value of that quantity (chemical reactions, nuclear fission, heat transfer, …):

  du/dt = −b u with initial condition (t = 0): u(0) = A
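The trapezoid-rule approach can be sketched on the worked example du/dt = t³, u(0) = 1 (step size and interval are assumptions for illustration):

```python
# Sketch: solve du/dt = t^3, u(0) = 1 by accumulating the integral
# with the trapezoid rule; the exact answer is u(t) = 1 + t^4/4.
import numpy as np

n = 200
t = np.linspace(0.0, 2.0, n + 1)
h = t[1] - t[0]
f = t ** 3

u = np.empty_like(t)
u[0] = 1.0                       # initial condition u(0) = 1
for i in range(n):
    # u_{i+1} = u_i + h * (f_i + f_{i+1}) / 2  (one trapezoid panel)
    u[i + 1] = u[i] + h * (f[i] + f[i + 1]) / 2

exact = 1.0 + t ** 4 / 4
print(abs(u[-1] - exact[-1]))    # small discretization error at t = 2
```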
The solution is an exponential:

  u(t) = A e^(−b t)
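The Euler update u_{k+1} = u_k + Δt f(u_k) can be checked against this exponential solution (the values of A, b, T, and n are assumptions for illustration):

```python
# Sketch: Euler's method for du/dt = -b*u, u(0) = A, compared with
# the exact solution u(t) = A*exp(-b*t).
import numpy as np

A, b = 1.0, 0.5
T, n = 4.0, 4000
dt = T / n

u = A                        # initial condition u(0) = A
for k in range(n):
    u = u + dt * (-b * u)    # u_{k+1} = u_k + dt * f(u_k)

exact = A * np.exp(-b * T)
print(u, exact)              # Euler result approaches exact as dt -> 0
```

Halving dt roughly halves the error here, reflecting the first-order accuracy of Euler's method.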