
# Chapter 14: Linear and Quadratic Fits, Multiple Linear Regression, and Nonlinear Curve Fitting


We can fit the stress-strain data with a quadratic polynomial instead of a linear one. Let us compare the two. Here is the data again:

```matlab
>> strain = [0:1:5]
strain =
     0     1     2     3     4     5
>> stress = [0:10:50]
stress =
     0    10    20    30    40    50
>> randn('state',0)
>> error_m = randn(1,6)
error_m =
   -0.4326   -1.6656    0.1253    0.2877   -1.1465    1.1909
>> stress_m = stress + error_m
stress_m =
   -0.4326    8.3344   20.1253   30.2877   38.8535   51.1909
```

Now the two fits:

```matlab
>> [p s] = polyfit(strain,stress_m,1)
p =
   10.2811   -0.9761
s =
        R: [2x2 double]
       df: 4
    normr: 1.9903
>> [p2 s2] = polyfit(strain,stress_m,2)
p2 =
    0.0884    9.8389   -0.6813
s2 =
        R: [3x3 double]
       df: 3
    normr: 1.9156
```

We see that the norm of the residuals hardly changed when we went to a quadratic function, which tells us that we did not gain much. We compare the standard errors:

```matlab
>> std_err1 = 1.9903/sqrt(4)
std_err1 =
    0.9951
>> std_err2 = 1.9156/sqrt(3)
std_err2 =
    1.1060
```

The fact that the standard error increased warns us of the danger of using a quadratic. This manifests itself mostly in extrapolation, so let us plot the two fits over a larger range of the strain:

```matlab
>> strain_w = [0:1:20];
>> stress_fit1 = polyval(p,strain_w);
>> stress_fit2 = polyval(p2,strain_w);
>> plot(strain_w,stress_fit1,strain_w,stress_fit2)
>> hold on
>> plot(strain,stress_m,'o')
```

[Figure: linear and quadratic fits plotted for strain from 0 to 20, with the measured data shown as circles. The two curves diverge noticeably as the strain grows.]

Remember that the true Young's modulus is 10.
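As a cross-check, the same comparison can be sketched in Python with NumPy (a hypothetical port of the MATLAB session above; the noise values are hard-coded from the transcript rather than regenerated, since NumPy's random generator does not reproduce `randn('state',0)`):

```python
import numpy as np

strain = np.arange(6)        # strains 0..5
stress = 10.0 * strain       # true Young's modulus = 10
# noise values copied from the MATLAB transcript (randn('state',0))
error_m = np.array([-0.4326, -1.6656, 0.1253, 0.2877, -1.1465, 1.1909])
stress_m = stress + error_m

# linear and quadratic least-squares fits (highest-degree coefficient first,
# as in MATLAB's polyfit)
p1 = np.polyfit(strain, stress_m, 1)
p2 = np.polyfit(strain, stress_m, 2)

# norm of residuals (MATLAB's normr) and the corresponding standard errors
normr1 = np.linalg.norm(stress_m - np.polyval(p1, strain))
normr2 = np.linalg.norm(stress_m - np.polyval(p2, strain))
std_err1 = normr1 / np.sqrt(len(strain) - 2)   # df = n - (degree+1) = 4
std_err2 = normr2 / np.sqrt(len(strain) - 3)   # df = 3

print(p1)                   # ~ [10.2811, -0.9761]
print(std_err1, std_err2)   # ~ 0.9951, 1.1060
```

The quadratic's residual norm barely drops while its standard error rises, which is the same warning sign seen in the MATLAB session.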
With the linear fit, a strain of 20 gives a stress value very close to 200, but with the quadratic fit it is around 230:

```matlab
>> stress_fit1(21), stress_fit2(21)
ans =
  204.6451
ans =
  231.4672
```

Next, consider fitting a power-law model Q = a0 D^a1 S^a2. We first take the log in order to create a linear model:

log Q = log a0 + a1 log D + a2 log S

We enter the data:

```matlab
>> D = [.3 .6 .9 .3 .6 .9 .3 .6 .9]';
>> S = [.001 .001 .001 .01 .01 .01 .05 .05 .05]';
>> Q = [.04 .24 .69 .13 .82 2.38 .31 1.95 5.66]';
```

Add a column of ones:

```matlab
>> o = [1 1 1 1 1 1 1 1 1]';
```

Create the Z matrix:

```matlab
>> Z = [o log10(D) log10(S)]
Z =
    1.0000   -0.5229   -3.0000
    1.0000   -0.2218   -3.0000
    1.0000   -0.0458   -3.0000
    1.0000   -0.5229   -2.0000
    1.0000   -0.2218   -2.0000
    1.0000   -0.0458   -2.0000
    1.0000   -0.5229   -1.3010
    1.0000   -0.2218   -1.3010
    1.0000   -0.0458   -1.3010
```

Finally, solve for the coefficients:

```matlab
>> a = (Z'*Z)\[Z'*log10(Q)]
a =
    1.5609
    2.6279
    0.5320
```

Thus the solution is

log10(Q) = 1.5609 + 2.6279 log10(D) + 0.5320 log10(S)

or

Q = 10^1.5609 · D^2.6279 · S^0.532

We could also have obtained the results with MATLAB's `regress`. Here is an extract from `help regress`:

```
REGRESS Multiple linear regression using least squares.
   B = REGRESS(Y,X) returns the vector B of regression coefficients in
   the linear model Y = X*B. X is an n-by-p design matrix, with rows
   corresponding to observations and columns to predictor variables.
   Y is an n-by-1 vector of response observations.

   [B,BINT] = REGRESS(Y,X) returns a matrix BINT of 95% confidence
   intervals for B.

   [B,BINT,R] = REGRESS(Y,X) returns a vector R of residuals.

   [B,BINT,R,RINT] = REGRESS(Y,X) returns a matrix RINT of intervals
   that can be used to diagnose outliers. If RINT(i,:) does not contain
   zero, then the i-th residual is larger than would be expected, at the
   5% significance level. This is evidence that the i-th observation is
   an outlier.

   [B,BINT,R,RINT,STATS] = REGRESS(Y,X) returns a vector STATS
   containing the R-square statistic, the F statistic and p-value for
   the full model, and an estimate of the error variance.
```
```matlab
>> [B,BINT,R,RINT,STATS] = regress(log10(Q),Z)
B =
    1.5609
    2.6279
    0.5320
BINT =
    1.5414    1.5804
    2.5992    2.6567
    0.5239    0.5401
R =
    0.0112
   -0.0017
   -0.0058
   -0.0089
   -0.0001
    0.0001
   -0.0033
    0.0043
    0.0043
RINT =
    0.0061    0.0163
   -0.0172    0.0138
   -0.0183    0.0067
   -0.0210    0.0032
   -0.0176    0.0174
   -0.0162    0.0161
   -0.0166    0.0100
   -0.0110    0.0196
   -0.0094    0.0180
STATS =
   1.0e+004 *
    0.0001    3.7818    0.0000    0.0000
>> format short e
>> STATS
STATS =
   9.9992e-001   3.7818e+004   4.9905e-013   4.8261e-005
```

The statistics, with an R-square of 0.99992 and an error-variance estimate of 4.8e-5, look wonderful, but we have to remember that these are errors in the log. It is a good idea to calculate the errors in Q:

```matlab
>> Qcal = 10.^(B(1))*D.^B(2).*S.^B(3)
Qcal =
   3.8978e-002
   2.4094e-001
   6.9931e-001
   1.3268e-001
   8.2016e-001
   2.3804e+000
   3.1236e-001
   1.9308e+000
   5.6040e+000
>> Qcal-Q
ans =
  -1.0216e-003
   9.4161e-004
   9.3081e-003
   2.6818e-003
   1.6122e-004
   4.3297e-004
   2.3594e-003
  -1.9177e-002
  -5.5986e-002
```

We see that the fitting errors are of the order of 1%.

Nonlinear models can sometimes be linearized by a transformation, but they can also be solved directly by minimizing the sum of the squares of the error using optimization, as in `fminsearch`. As an example, let us do Problem 14.13. We first key in the data:

```matlab
>> S = [.01 .05 .1 .5 1 5 10 50 100];
>> v0 = [6.078e-11 7.595e-9 6.063e-8 5.788e-6 1.737e-5 2.423e-5 2.43e-5 2.431e-5 2.431e-5];
```

We can transform it into a linear regression problem as

1/v0 = (K/km)·(1/S^3) + 1/km

We can now do a linear regression:

```matlab
>> ones3 = 1./S.^3;
>> onev0 = 1./v0;
>> [p stat] = polyfit(ones3,onev0,1)
p =
   1.6453e+004   4.1400e+004
stat =
        R: [2x2 double]
       df: 7
    normr: 2.4371e+003
>> km = 1/p(2)
km =
   2.4155e-005
>> K = km*p(1)
K =
   3.9741e-001
```

So the fit is

v0 = 2.4155×10^-5 · S^3 / (0.39741 + S^3)

This fit, however, does not minimize the sum of the squares of the error in the original data. To do that, we use optimization.
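The linearization 1/v0 = (K/km)(1/S^3) + 1/km can also be sketched in NumPy (a hypothetical port; note the regression is done on the transformed variables, so the smallest v0 values dominate the fit):

```python
import numpy as np

S = np.array([.01, .05, .1, .5, 1, 5, 10, 50, 100])
v0 = np.array([6.078e-11, 7.595e-9, 6.063e-8, 5.788e-6,
               1.737e-5, 2.423e-5, 2.43e-5, 2.431e-5, 2.431e-5])

# transformed variables: 1/v0 is linear in 1/S**3
x = 1.0 / S**3
y = 1.0 / v0
slope, intercept = np.polyfit(x, y, 1)   # slope = K/km, intercept = 1/km

# back out the model parameters
km = 1.0 / intercept    # ~ 2.4155e-5
K = km * slope          # ~ 0.39741
print(km, K)
```

This mirrors the `polyfit` call in the MATLAB session; the recovered `km` and `K` match the transcript.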
We can create a function that calculates the sum of the squares in an M-file:

```matlab
function f = fSSR(a,Sm,v0m)
% sum of squared residuals for the model v0 = a(1)*S^3/(a(2)+S^3)
v0p = a(1)*Sm.^3./(a(2)+Sm.^3);
f = sum((v0m-v0p).^2);
```

Or we can do it with an anonymous function:

```matlab
>> normerr = @(k) norm(k(1)*S.^3./(k(2)+S.^3)-v0)
>> kmin = fminsearch(normerr,[2e-5 1])
kmin =
   2.4310e-005   3.9976e-001
```

So we get slightly different coefficients. Comparing the residual norms of the two solutions:

```matlab
>> normerr(kmin)
ans =
   4.6197e-009
>> ktran = [km K]
ktran =
   2.4155e-005   3.9741e-001
>> normerr(ktran)
ans =
   3.2015e-007
```

So there is a substantial difference.
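The `fminsearch` step has a direct SciPy analogue (a hypothetical port using Nelder-Mead, the same simplex algorithm `fminsearch` implements; the tolerance options shown are assumptions chosen to get a tight fit):

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([.01, .05, .1, .5, 1, 5, 10, 50, 100])
v0 = np.array([6.078e-11, 7.595e-9, 6.063e-8, 5.788e-6,
               1.737e-5, 2.423e-5, 2.43e-5, 2.431e-5, 2.431e-5])

# objective: 2-norm of the residual in the ORIGINAL (untransformed) data
normerr = lambda k: np.linalg.norm(k[0] * S**3 / (k[1] + S**3) - v0)

res = minimize(normerr, x0=[2e-5, 1.0], method='Nelder-Mead',
               options={'xatol': 1e-12, 'fatol': 1e-14, 'maxiter': 2000})
kmin = res.x   # ~ [2.4310e-5, 3.9976e-1]

# compare against the coefficients from the linearized (transformed) fit
ktran = np.array([2.4155e-5, 0.39741])
print(normerr(kmin), normerr(ktran))   # direct fit gives a much smaller norm
```

The direct minimization weights all data points equally in the original units, which is why its residual norm is roughly two orders of magnitude smaller than that of the transformed fit.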

This note was uploaded on 03/27/2012 for the course EGM 3344 taught by Professor Raphael Haftka during the Spring '09 term at the University of Florida.
