hw6_sol


13.5

The results can be summarized as

                          y versus x                  x versus y
Best-fit equation         y = 4.351535 + 0.352473x    x = -0.96763 + 2.374101y
Standard error            1.06501                     2.754025
Correlation coefficient   0.914767                    0.914767

We can also plot both lines on the same graph:

[Figure: the two best-fit lines plotted together with the data.]

Thus, the "best" fit lines and the standard errors differ. This makes sense because different errors are being minimized depending on our choice of the dependent (ordinate) and independent (abscissa) variables. In contrast, the correlation coefficients are identical, since the same amount of uncertainty is explained regardless of how the points are plotted.

13.7

The function y = a4*x*e^(b4*x) can be linearized by dividing it by x and taking the natural logarithm to yield

ln(y/x) = ln(a4) + b4*x

Therefore, if the model holds, a plot of ln(y/x) versus x should yield a straight line with an intercept of ln(a4) and a slope of b4.

x      y      ln(y/x)
0.1    0.75    2.014903
0.2    1.25    1.832581
0.4    1.45    1.287854
0.6    1.25    0.733969
0.9    0.85   -0.057158
1.3    0.55   -0.860201
1.5    0.35   -1.455287
1.7    0.28   -1.803594
1.8    0.18   -2.302585

A linear regression of ln(y/x) versus x gives

ln(y/x) = 2.2682 - 2.4733x    (R^2 = 0.9974)

[Figure: ln(y/x) versus x with the best-fit line.]

Therefore, b4 = -2.4733 and a4 = e^2.2682 = 9.6618, and the fit is

y = 9.6618*x*e^(-2.4733x)

This equation can be plotted together with the data:

[Figure: the fitted curve and the data.]

13.11

The power fit A = a*W^b can be determined by regressing the logarithms of the data:

W     A      log W      log A
70    2.10   1.845098   0.322219
75    2.12   1.875061   0.326336
77    2.15   1.886491   0.332438
80    2.20   1.903090   0.342423
82    2.22   1.913814   0.346353
84    2.23   1.924279   0.348305
87    2.26   1.939519   0.354108
90    2.30   1.954243   0.361728

Fitting a straight line to the transformed data gives

log A = 0.3799 log W - 0.3821    (R^2 = 0.9711)

[Figure: log A versus log W with the best-fit line.]

Therefore, the power is b = 0.3799 and the lead coefficient is a = 10^(-0.3821) = 0.4149, and the fit is

A = 0.4149*W^0.3799

Here is a plot of the fit along with the original data:

[Figure: surface area A versus mass W with the fitted power curve.]

The value of the surface area for a 95-kg person can be estimated as

A = 0.4149(95)^0.3799 = 2.34 m^2

13.18

A log-log plot of mu versus T suggests a linear relationship.
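The linearization in Prob. 13.7 is easy to check numerically. Below is a minimal sketch in Python (the original solution uses MATLAB; the `linreg` helper is ours, not part of the original):

```python
import math

# Data from Prob. 13.7 (the same set reappears in Prob. 14.12)
x = [0.1, 0.2, 0.4, 0.6, 0.9, 1.3, 1.5, 1.7, 1.8]
y = [0.75, 1.25, 1.45, 1.25, 0.85, 0.55, 0.35, 0.28, 0.18]

# Linearize y = a4*x*exp(b4*x)  ->  ln(y/x) = ln(a4) + b4*x
z = [math.log(yi / xi) for xi, yi in zip(x, y)]

def linreg(u, v):
    """Ordinary least-squares slope and intercept for v versus u."""
    n = len(u)
    su, sv = sum(u), sum(v)
    suu = sum(ui * ui for ui in u)
    suv = sum(ui * vi for ui, vi in zip(u, v))
    slope = (n * suv - su * sv) / (n * suu - su * su)
    intercept = (sv - slope * su) / n
    return slope, intercept

b4, lna4 = linreg(x, z)
a4 = math.exp(lna4)
print(a4, b4)  # approximately 9.6618 and -2.4733, as in the hand solution
```

Running this reproduces the slope and intercept quoted above to four decimal places.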
[Figure: log-log plot of viscosity mu versus temperature T.]

We regress log10(mu) versus log10(T) to give

log10(mu) = 4.58141 - 3.61338 log10(T)

Therefore, a1 = 10^4.58141 = 38,142.94 and b1 = -3.61338, and the power model is

mu = 38,142.94*T^(-3.61338)

The model and the data can be plotted on untransformed scales:

[Figure: mu versus T with the fitted power model.]

14.3

The data can be tabulated and the sums computed as

i    x    y     x^2    x^3     x^4      x^5       x^6        xy      x^2*y    x^3*y
1    3   1.6      9     27       81       243        729      4.8      14.4     43.2
2    4   3.6     16     64      256      1024       4096     14.4      57.6    230.4
3    5   4.4     25    125      625      3125      15625     22       110      550
4    7   3.4     49    343     2401     16807     117649     23.8     166.6   1166.2
5    8   2.2     64    512     4096     32768     262144     17.6     140.8   1126.4
6    9   2.8     81    729     6561     59049     531441     25.2     226.8   2041.2
7   11   3.8    121   1331    14641    161051    1771561     41.8     459.8   5057.8
8   12   4.6    144   1728    20736    248832    2985984     55.2     662.4   7948.8
Sum 59  26.4    509   4859    49397    522899    5689229    204.8    1838.4  18164

The normal equations are

|    8      59      509      4859 | | a0 |   |   26.4 |
|   59     509     4859     49397 | | a1 | = |  204.8 |
|  509    4859    49397    522899 | | a2 |   | 1838.4 |
| 4859   49397   522899   5689229 | | a3 |   |  18164 |

which can be solved for the coefficients, yielding the following best-fit polynomial:

y = -11.4887 + 7.143817x - 1.04121x^2 + 0.046676x^3

Here is the resulting fit:

[Figure: the data with the best-fit cubic.]

The predicted values can be used to determine the sum of the squares. Note that the mean of the y values is 3.3.

i    x    y     y-pred     (y - ybar)^2   (y - y-pred)^2
1    3   1.6    1.83213    2.8900         0.0539
2    4   3.6    3.41452    0.0900         0.0344
3    5   4.4    4.03464    1.2100         0.1334
4    7   3.4    3.50875    0.0100         0.0119
5    8   2.2    2.92271    1.2100         0.5223
6    9   2.8    2.49447    0.2500         0.0932
7   11   3.8    3.23302    0.2500         0.3215
8   12   4.6    4.95946    1.6900         0.1292
Sum                        7.6000         1.2997

The coefficient of determination can be computed as

r^2 = (7.6 - 1.2997)/7.6 = 0.829

14.5

Because the data is curved, a linear regression will undoubtedly have too much error. Therefore, as a first try,
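The normal equations of Prob. 14.3 can also be assembled and solved programmatically. A Python sketch follows (the `gauss_solve` helper is ours; the original solution solves the system by other means):

```python
# Prob. 14.3: build and solve the normal equations for the cubic fit.
x = [3, 4, 5, 7, 8, 9, 11, 12]
y = [1.6, 3.6, 4.4, 3.4, 2.2, 2.8, 3.8, 4.6]

# Normal-equation matrix A[i][j] = sum(x^(i+j)); right-hand side b[i] = sum(x^i * y)
A = [[sum(xi ** (i + j) for xi in x) for j in range(4)] for i in range(4)]
b = [sum(xi ** i * yi for xi, yi in zip(x, y)) for i in range(4)]

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for col in range(k, n + 1):
                M[r][col] -= f * M[k][col]
    a = [0.0] * n
    for k in range(n - 1, -1, -1):
        a[k] = (M[k][n] - sum(M[k][j] * a[j] for j in range(k + 1, n))) / M[k][k]
    return a

a = gauss_solve(A, b)                      # ascending powers: a0, a1, a2, a3
pred = [sum(a[k] * xi ** k for k in range(4)) for xi in x]
ybar = sum(y) / len(y)
St = sum((yi - ybar) ** 2 for yi in y)     # total sum of squares (7.6)
Sr = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
r2 = (St - Sr) / St
print(a, r2)  # coefficients near (-11.4887, 7.1438, -1.0412, 0.04668), r2 near 0.829
```

This confirms the tabulated sums, the cubic coefficients, and r^2 = 0.829.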
fit a parabola. In MATLAB:

>> format long
>> T = [0 5 10 15 20 25 30];
>> c = [14.6 12.8 11.3 10.1 9.09 8.26 7.56];
>> p = polyfit(T,c,2)
p =
   0.00439523809524  -0.36335714285714  14.55190476190477

Thus, the best-fit parabola is

c = 14.55190476 - 0.36335714T + 0.0043952381T^2

[Figure: the data with the best-fit parabola.]

We can use this equation to generate predictions corresponding to the data. When these values are rounded to the same number of significant digits, the results are

T     c-data   c-pred     rounded
0     14.6     14.55190   14.6
5     12.8     12.84500   12.8
10    11.3     11.35786   11.4
15    10.1     10.09048   10.1
20    9.09     9.04286    9.04
25    8.26     8.21500    8.22
30    7.56     7.60690    7.61

Thus, although the plot looks good, discrepancies occur in the third significant digit. We can, therefore, fit a third-order polynomial:

>> p = polyfit(T,c,3)
p =
  -0.00006444444444   0.00729523809524  -0.39557936507937  14.60023809523810

Thus, the best-fit cubic is

c = 14.600238095 - 0.395579365T + 0.0072952381T^2 - 0.0000644444T^3

We can use this equation to generate predictions corresponding to the data. When these values are rounded to the same number of significant digits, the results are

T     c-data   c-pred     rounded
0     14.6     14.60024   14.6
5     12.8     12.79667   12.8
10    11.3     11.30952   11.3
15    10.1     10.09048   10.1
20    9.09     9.09119    9.09
25    8.26     8.26333    8.26
30    7.56     7.55857    7.56

Thus, the predictions and data agree to three significant digits.

14.6

The multiple linear regression model to evaluate is

o = a0 + a1*T + a2*c

The [Z] and y matrices can be set up using MATLAB commands in a fashion similar to Example 14.4:

>> format long
>> t = [0 5 10 15 20 25 30];
>> T = [t t t]';
>> c = [zeros(size(t)) 10*ones(size(t)) 20*ones(size(t))]';
>> Z = [ones(size(T)) T c];
>> y = [14.6 12.8 11.3 10.1 9.09 8.26 7.56 12.9 11.3 10.1 9.03 8.17 7.46 6.85 11.4 10.3 8.96 8.08 7.35 6.73 6.20]';

The coefficients can be evaluated as

>> a = Z\y
a =
  13.52214285714286
  -0.20123809523810
  -0.10492857142857

Thus,
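The two polyfit calls in Prob. 14.5 can be reproduced without MATLAB by solving the polynomial normal equations directly. A Python sketch (the `polyfit_ls` helper is ours; note it returns coefficients in ascending powers, whereas MATLAB's polyfit returns descending):

```python
# Prob. 14.5: least-squares parabola and cubic for the oxygen-solubility data.
T = [0, 5, 10, 15, 20, 25, 30]
c = [14.6, 12.8, 11.3, 10.1, 9.09, 8.26, 7.56]

def polyfit_ls(x, y, deg):
    """Fit a degree-`deg` polynomial by solving the normal equations
    (adequate for low orders; note 0**0 == 1 in Python)."""
    n = deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    rhs = [sum(xi ** i * yi for xi, yi in zip(x, y)) for i in range(n)]
    M = [A[i] + [rhs[i]] for i in range(n)]
    for k in range(n):                      # forward elimination with pivoting
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for col in range(k, n + 1):
                M[r][col] -= f * M[k][col]
    a = [0.0] * n
    for k in range(n - 1, -1, -1):          # back substitution
        a[k] = (M[k][n] - sum(M[k][j] * a[j] for j in range(k + 1, n))) / M[k][k]
    return a                                # ascending powers a0, a1, ...

a2 = polyfit_ls(T, c, 2)   # ~ [14.5519048, -0.3633571, 0.0043952]
a3 = polyfit_ls(T, c, 3)   # ~ [14.6002381, -0.3955794, 0.0072952, -0.0000644444]
pred3 = [sum(a3[k] * Ti ** k for k in range(4)) for Ti in T]
print(a2, a3, pred3)
```

The cubic predictions `pred3` match the measured concentrations to three significant digits, as in the table above.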
the best-fit multiple regression model is

o = 13.52214286 - 0.20123810T - 0.10492857c

We can evaluate the prediction at T = 12 and c = 15 and evaluate the percent relative error as

>> cp = a(1)+a(2)*12+a(3)*15
cp =
   9.53335714285714
>> ea = abs((9.09-cp)/9.09)*100
ea =
   4.87741631305987

Thus, the error is considerable. This can be seen even better by generating predictions for all the data and then plotting the predictions versus the data. A one-to-one line is included to show how the predictions diverge from a perfect fit.

[Figure: predicted versus measured oxygen concentration with a 1:1 line.]

The cause for the discrepancy is that the dependence of oxygen concentration on the unknowns is significantly nonlinear. It should be noted that this is particularly the case for the dependency on temperature.

14.7

The multiple linear regression model to evaluate is

y = a0 + a1*T + a2*T^2 + a3*T^3 + a4*c

The [Z] matrix can be set up as in

>> T = 0:5:30;
>> T = [T T T]';
>> c = [0 0 0 0 0 0 0 10 10 10 10 10 10 10 20 20 20 20 20 20 20]';
>> o = [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1]';
>> y = [14.6 12.8 11.3 10.1 9.09 8.26 7.56 12.9 11.3 10.1 9.03 8.17 7.46 6.85 11.4 10.3 8.96 8.08 7.35 6.73 6.20]';
>> Z = [o T T.^2 T.^3 c];

Then, the coefficients can be generated by solving Eq. (14.10):

>> format long
>> a = (Z'*Z)\(Z'*y)
a =
  14.02714285714287
  -0.33642328042328
   0.00574444444444
  -0.00004370370370
  -0.10492857142857

Thus, the least-squares fit is

y = 14.027143 - 0.336423T + 0.00574444T^2 - 0.0000437037T^3 - 0.1049286c

The model can then be used to predict values of oxygen at the same values as the data. These predictions can be plotted against the data to depict the goodness of fit:

>> yp = Z*a;
>> plot(y,yp,'o')

[Figure: predicted versus measured values.]

Finally, the prediction can be made at T = 12 and c = 15:

>> a(1)+a(2)*12+a(3)*12^2+a(4)*12^3+a(5)*15
ans =
   9.16781492063485

which compares favorably with the true value of 9.09 mg/L.
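The Prob. 14.6 regression is small enough to verify by hand-rolling the normal equations (Z'Z)a = Z'y. A Python sketch (the `solve3` helper is ours, standing in for MATLAB's backslash operator):

```python
# Prob. 14.6: multiple linear regression o = a0 + a1*T + a2*c in plain Python.
t = [0, 5, 10, 15, 20, 25, 30]
T = t * 3
c = [0.0] * 7 + [10.0] * 7 + [20.0] * 7
y = [14.6, 12.8, 11.3, 10.1, 9.09, 8.26, 7.56,
     12.9, 11.3, 10.1, 9.03, 8.17, 7.46, 6.85,
     11.4, 10.3, 8.96, 8.08, 7.35, 6.73, 6.20]

Z = [[1.0, Ti, ci] for Ti, ci in zip(T, c)]
ZtZ = [[sum(r[i] * r[j] for r in Z) for j in range(3)] for i in range(3)]
Zty = [sum(r[i] * yi for r, yi in zip(Z, y)) for i in range(3)]

def solve3(A, b):
    """Gaussian elimination without pivoting (fine for this well-behaved 3x3)."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(3):
        for r in range(k + 1, 3):
            f = M[r][k] / M[k][k]
            for col in range(k, 4):
                M[r][col] -= f * M[k][col]
    a = [0.0] * 3
    for k in range(2, -1, -1):
        a[k] = (M[k][3] - sum(M[k][j] * a[j] for j in range(k + 1, 3))) / M[k][k]
    return a

a = solve3(ZtZ, Zty)
cp = a[0] + a[1] * 12 + a[2] * 15      # prediction at T = 12, c = 15
ea = abs((9.09 - cp) / 9.09) * 100     # percent relative error
print(a, cp, ea)  # coefficients near (13.5221, -0.2012, -0.1049); error near 4.88%
```

Because the design is balanced in T and c, the off-diagonal structure makes the system nearly decoupled, which is why the coefficients come out as such clean fractions.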
14.12

First, an M-file function must be created to compute the sum of the squares:

function f = fSSR(a,xm,ym)
yp = a(1)*xm.*exp(a(2)*xm);
f = sum((ym-yp).^2);

The data can then be entered as

>> x = [.1 .2 .4 .6 .9 1.3 1.5 1.7 1.8];
>> y = [0.75 1.25 1.45 1.25 0.85 0.55 0.35 0.28 0.18];

The minimization of the function is then implemented by

>> a = fminsearch(@fSSR, [1, 1], [], x, y)
a =
    9.8545   -2.5217

The best-fit model is therefore

y = 9.8545*x*e^(-2.5217x)

The fit along with the data can be displayed graphically:

>> yp = a(1)*x.*exp(a(2)*x);
>> plot(x,y,'o',x,yp)

[Figure: the data and the best-fit curve.]

14.14

(a) We regress y versus x to give

y = 20.5 + 0.494545x

[Figure: the data with the linear fit.]

(b) We regress log10(y) versus log10(x) to give

log10(y) = 0.997952 + 0.385077 log10(x)

Therefore, a2 = 10^0.997952 = 9.952936 and b2 = 0.385077, and the power model is

y = 9.952936*x^0.385077

[Figure: the data with the power fit (R^2 = 0.9553).]

(c) We regress 1/y versus 1/x to give

1/y = 0.01996322 + 0.19745357/x

Therefore, a3 = 1/0.01996322 = 50.09212 and b3 = 0.19745357(50.09212) = 9.8909, and the saturation-growth-rate model is

y = 50.09212*x/(9.8909 + x)

[Figure: the data with the saturation-growth-rate fit.]

(d) We employ polynomial regression to fit a parabola:

y = -0.01606x^2 + 1.377879x + 11.76667

[Figure: the data with the parabolic fit.]

Comparison of fits: The linear fit is obviously inadequate. Although the power fit follows the general trend of the data, it is also inadequate because (1) the residuals do not appear to be randomly distributed around the best-fit line and (2) it has a lower r^2 than the saturation and parabolic models.
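A useful check on Prob. 14.12 is that the parameters found by direct nonlinear minimization should give a smaller untransformed sum of squared residuals than the parameters from the linearized fit of Prob. 13.7, since fminsearch minimizes that SSR directly. A Python sketch of the comparison (the `ssr` helper is ours):

```python
import math

# Prob. 14.12: compare the true SSR of the two parameter sets for y = a0*x*exp(a1*x).
x = [0.1, 0.2, 0.4, 0.6, 0.9, 1.3, 1.5, 1.7, 1.8]
y = [0.75, 1.25, 1.45, 1.25, 0.85, 0.55, 0.35, 0.28, 0.18]

def ssr(a0, a1):
    """Sum of squared residuals for the model y = a0*x*exp(a1*x)."""
    return sum((yi - a0 * xi * math.exp(a1 * xi)) ** 2 for xi, yi in zip(x, y))

s_direct = ssr(9.8545, -2.5217)   # fminsearch (direct nonlinear) parameters
s_linear = ssr(9.6618, -2.4733)   # parameters from the Prob. 13.7 linearization
print(s_direct, s_linear)         # the direct fit should have the smaller SSR
```

This illustrates the general point that linearizing transforms minimize error in the transformed coordinates, not in the original ones.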
The best fits are for the saturation-growth-rate and the parabolic models. They both have randomly distributed residuals and they have similar, high coefficients of determination. The saturation model has a slightly higher r^2. Although the difference is probably not statistically significant, in the absence of additional information, we can conclude that the saturation model represents the best fit.
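The back-calculation of the saturation-growth-rate parameters in part (c) of Prob. 14.14 is simple arithmetic on the linearized coefficients and can be checked directly (a Python check of the numbers quoted above):

```python
# Prob. 14.14(c): recover the saturation-growth-rate parameters from the
# linearized regression 1/y = 0.01996322 + 0.19745357*(1/x).
intercept, slope = 0.01996322, 0.19745357
a3 = 1 / intercept    # maximum (saturation) value of y
b3 = slope * a3       # half-saturation constant
print(a3, b3)         # roughly 50.09 and 9.89
```

This follows from inverting y = a3*x/(b3 + x): 1/y = 1/a3 + (b3/a3)*(1/x), so the intercept is 1/a3 and the slope is b3/a3.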