poe4formulas - Expectations, Variances & Covariances
Expectations, Variances & Covariances

The Rules of Summation
  Σ_{i=1..n} x_i = x_1 + x_2 + ⋯ + x_n
  Σ_{i=1..n} a = na
  Σ a x_i = a Σ x_i
  Σ (x_i + y_i) = Σ x_i + Σ y_i
  Σ (a x_i + b y_i) = a Σ x_i + b Σ y_i
  Σ (a + b x_i) = na + b Σ x_i
  x̄ = (Σ_{i=1..n} x_i)/n = (x_1 + x_2 + ⋯ + x_n)/n
  Σ (x_i − x̄) = 0
  Σ_{i=1..2} Σ_{j=1..3} f(x_i, y_j) = Σ_{i=1..2} [f(x_i, y_1) + f(x_i, y_2) + f(x_i, y_3)]
    = f(x_1, y_1) + f(x_1, y_2) + f(x_1, y_3) + f(x_2, y_1) + f(x_2, y_2) + f(x_2, y_3)

Expected Values & Variances
  E(X) = x_1 f(x_1) + x_2 f(x_2) + ⋯ + x_n f(x_n) = Σ_i x_i f(x_i) = Σ_x x f(x)
  E[g(X)] = Σ_x g(x) f(x)
  E[g_1(X) + g_2(X)] = Σ_x [g_1(x) + g_2(x)] f(x)
    = Σ_x g_1(x) f(x) + Σ_x g_2(x) f(x) = E[g_1(X)] + E[g_2(X)]
  E(c) = c    E(cX) = c E(X)    E(a + cX) = a + c E(X)
  var(X) = σ² = E[X − E(X)]² = E(X²) − [E(X)]²
  var(a + cX) = E[(a + cX) − E(a + cX)]² = c² var(X)

Covariances and Correlation
  cov(X, Y) = E[(X − E[X])(Y − E[Y])] = Σ_x Σ_y [x − E(X)][y − E(Y)] f(x, y)
  ρ = cov(X, Y) / √(var(X) var(Y))
  E(c_1 X + c_2 Y) = c_1 E(X) + c_2 E(Y)    E(X + Y) = E(X) + E(Y)
  var(aX + bY + cZ) = a² var(X) + b² var(Y) + c² var(Z)
    + 2ab cov(X, Y) + 2ac cov(X, Z) + 2bc cov(Y, Z)
  If X, Y, and Z are independent, or uncorrelated, random variables, then the
  covariance terms are zero and:
    var(aX + bY + cZ) = a² var(X) + b² var(Y) + c² var(Z)

Marginal and Conditional Distributions
  f(x) = Σ_y f(x, y) for each value X can take
  f(y) = Σ_x f(x, y) for each value Y can take
  f(x|y) = P[X = x | Y = y] = f(x, y) / f(y)
  If X and Y are independent random variables, then f(x, y) = f(x) f(y) for
  each and every pair of values x and y. The converse is also true.
  If X and Y are independent random variables, then the conditional probability
  density function of X given Y = y is
    f(x|y) = f(x, y)/f(y) = f(x) f(y)/f(y) = f(x)
  for each and every pair of values x and y. The converse is also true.

Normal Probabilities
  If X ~ N(μ, σ²), then Z = (X − μ)/σ ~ N(0, 1).
  If X ~ N(μ, σ²) and a is a constant, then P(X ≥ a) = P(Z ≥ (a − μ)/σ).
  If X ~ N(μ, σ²) and a and b are constants, then
    P(a ≤ X ≤ b) = P((a − μ)/σ ≤ Z ≤ (b − μ)/σ).

Assumptions of the Simple Linear Regression Model
  SR1  The value of y, for each value of x, is y = β₁ + β₂x + e
  SR2  The average value of the random error e is E(e) = 0, since we assume
       that E(y) = β₁ + β₂x
  SR3  The variance of the random error e is var(e) = σ² = var(y)
  SR4  The covariance between any pair of random errors e_i and e_j is
       cov(e_i, e_j) = cov(y_i, y_j) = 0
  SR5  The variable x is not random and must take at least two different values
  SR6  (optional) The values of e are normally distributed about their mean:
       e ~ N(0, σ²)
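The expectation, variance, and covariance rules above can be checked numerically on a small discrete joint distribution. A minimal sketch; the joint pmf f(x, y) below is invented for illustration:

```python
# Hypothetical joint pmf over x in {0, 1} and y in {1, 2}; probabilities sum to 1
f = {(0, 1): 0.1, (0, 2): 0.3, (1, 1): 0.4, (1, 2): 0.2}

def E(g):
    """E[g(X, Y)] = sum over (x, y) of g(x, y) * f(x, y)."""
    return sum(g(x, y) * p for (x, y), p in f.items())

EX = E(lambda x, y: x)
EY = E(lambda x, y: y)
varX = E(lambda x, y: (x - EX) ** 2)
varY = E(lambda x, y: (y - EY) ** 2)
covXY = E(lambda x, y: (x - EX) * (y - EY))

a, b = 2.0, -3.0
# var(aX + bY) computed directly from the definition ...
lhs = E(lambda x, y: (a * x + b * y - (a * EX + b * EY)) ** 2)
# ... must equal a^2 var(X) + b^2 var(Y) + 2ab cov(X, Y)
rhs = a ** 2 * varX + b ** 2 * varY + 2 * a * b * covXY
print(abs(lhs - rhs) < 1e-12)  # True
```

The identity holds for any joint distribution; changing the pmf changes the two sides together.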
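The standardization rule for normal probabilities can be sketched with the standard normal cdf Φ(z) = (1 + erf(z/√2))/2; the μ, σ, a, b values are arbitrary examples:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def normal_interval_prob(a, b, mu, sigma):
    # P(a <= X <= b) = P((a - mu)/sigma <= Z <= (b - mu)/sigma)
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

# P(mu - sigma <= X <= mu + sigma) is about 0.6827 for any mu, sigma
p = normal_interval_prob(4.0, 8.0, 6.0, 2.0)
print(round(p, 4))  # 0.6827
```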
Least Squares Estimation
  If b₁ and b₂ are the least squares estimates, then
    ŷ_i = b₁ + b₂ x_i
    ê_i = y_i − ŷ_i = y_i − b₁ − b₂ x_i
  The Normal Equations
    N b₁ + (Σ x_i) b₂ = Σ y_i
    (Σ x_i) b₁ + (Σ x_i²) b₂ = Σ x_i y_i
  Least Squares Estimators
    b₂ = Σ(x_i − x̄)(y_i − ȳ) / Σ(x_i − x̄)²
    b₁ = ȳ − b₂ x̄

Elasticity
  η = (percentage change in y)/(percentage change in x)
    = (Δy/y)/(Δx/x) = (Δy/Δx)·(x/y)
  η = (ΔE(y)/E(y))/(Δx/x) = (ΔE(y)/Δx)·(x/E(y)) = β₂ · x/E(y)

Least Squares Expressions Useful for Theory
  b₂ = β₂ + Σ w_i e_i,  where  w_i = (x_i − x̄)/Σ(x_i − x̄)²
  Σ w_i = 0,   Σ w_i x_i = 1,   Σ w_i² = 1/Σ(x_i − x̄)²

Properties of the Least Squares Estimators
  var(b₁) = σ² [Σ x_i² / (N Σ(x_i − x̄)²)]
  var(b₂) = σ² / Σ(x_i − x̄)²
  cov(b₁, b₂) = σ² [−x̄ / Σ(x_i − x̄)²]
  Gauss-Markov Theorem: Under the assumptions SR1–SR5 of the linear regression
  model, the estimators b₁ and b₂ have the smallest variance of all linear and
  unbiased estimators of β₁ and β₂. They are the Best Linear Unbiased
  Estimators (BLUE) of β₁ and β₂.
  If we make the normality assumption, assumption SR6, about the error term,
  then the least squares estimators are normally distributed:
    b₁ ~ N(β₁, σ² Σ x_i² / (N Σ(x_i − x̄)²)),   b₂ ~ N(β₂, σ² / Σ(x_i − x̄)²)

Estimated Error Variance
  σ̂² = Σ ê_i² / (N − 2)

Estimator Standard Errors
  se(b₁) = √v̂ar(b₁),   se(b₂) = √v̂ar(b₂)

t-distribution
  If assumptions SR1–SR6 of the simple linear regression model hold, then
    t = (b_k − β_k)/se(b_k) ~ t_(N−2),   k = 1, 2

Interval Estimates
  P[b₂ − t_c se(b₂) ≤ β₂ ≤ b₂ + t_c se(b₂)] = 1 − α

Hypothesis Testing
  Components of Hypothesis Tests
    1. A null hypothesis, H₀
    2. An alternative hypothesis, H₁
    3. A test statistic
    4. A rejection region
    5. A conclusion
  If the null hypothesis H₀: β₂ = c is true, then
    t = (b₂ − c)/se(b₂) ~ t_(N−2)
  Rejection rule for a two-tail test: If the value of the test statistic falls
  in the rejection region, either tail of the t-distribution, then we reject
  the null hypothesis and accept the alternative.
  Type I error: The null hypothesis is true and we decide to reject it.
  Type II error: The null hypothesis is false and we decide not to reject it.
  p-value rejection rule: When the p-value of a hypothesis test is smaller
  than the chosen value of α, the test procedure leads to rejection of the
  null hypothesis.

Prediction
  y₀ = β₁ + β₂ x₀ + e₀,   ŷ₀ = b₁ + b₂ x₀,   f = ŷ₀ − y₀
  v̂ar(f) = σ̂² [1 + 1/N + (x₀ − x̄)²/Σ(x_i − x̄)²],   se(f) = √v̂ar(f)
  A (1 − α) × 100% confidence interval, or prediction interval, for y₀:
    ŷ₀ ± t_c se(f)

Goodness of Fit
  Σ(y_i − ȳ)² = Σ(ŷ_i − ȳ)² + Σ ê_i²
  SST = SSR + SSE
  R² = SSR/SST = 1 − SSE/SST = (corr(y, ŷ))²

Log-Linear Model
  ln(y) = β₁ + β₂ x + e,   ln(y)^ = b₁ + b₂ x
  100 × b₂ ≈ % change in y given a one-unit change in x
  ŷ_n = exp(b₁ + b₂ x),   ŷ_c = exp(b₁ + b₂ x) exp(σ̂²/2)
  Prediction interval:
    [exp(ln(y)^ − t_c se(f)),  exp(ln(y)^ + t_c se(f))]
  Generalized goodness-of-fit measure:  R²_g = (corr(y, ŷ_n))²

Assumptions of the Multiple Regression Model
  MR1  y_i = β₁ + β₂ x_i2 + ⋯ + β_K x_iK + e_i
  MR2  E(y_i) = β₁ + β₂ x_i2 + ⋯ + β_K x_iK  ⇔  E(e_i) = 0
  MR3  var(y_i) = var(e_i) = σ²
  MR4  cov(y_i, y_j) = cov(e_i, e_j) = 0
  MR5  The values of x_ik are not random and are not exact linear functions
       of the other explanatory variables
  MR6  y_i ~ N(β₁ + β₂ x_i2 + ⋯ + β_K x_iK, σ²)  ⇔  e_i ~ N(0, σ²)

Least Squares Estimates in MR Model
  Least squares estimates b₁, b₂, …, b_K minimize
    S(b₁, b₂, …, b_K) = Σ(y_i − b₁ − b₂ x_i2 − ⋯ − b_K x_iK)²

Estimated Error Variance and Estimator Standard Errors
  σ̂² = Σ ê_i² / (N − K),   se(b_k) = √v̂ar(b_k)

Hypothesis Tests and Interval Estimates for Single Parameters
  Use the t-distribution:  t = (b_k − β_k)/se(b_k) ~ t_(N−K)

t-test for More than One Parameter
  H₀: β₂ + cβ₃ = a.  When H₀ is true,
    t = (b₂ + cb₃ − a)/se(b₂ + cb₃) ~ t_(N−K)
    se(b₂ + cb₃) = √[v̂ar(b₂) + c² v̂ar(b₃) + 2c × côv(b₂, b₃)]

Joint F-tests
  To test J joint hypotheses,
    F = [(SSE_R − SSE_U)/J] / [SSE_U/(N − K)]
  To test the overall significance of the model, the null and alternative
  hypotheses and F statistic are
    H₀: β₂ = 0, β₃ = 0, …, β_K = 0
    H₁: at least one of the β_k is nonzero
    F = [(SST − SSE)/(K − 1)] / [SSE/(N − K)]

RESET: A Specification Test
  y_i = β₁ + β₂ x_i2 + β₃ x_i3 + e_i,   ŷ_i = b₁ + b₂ x_i2 + b₃ x_i3
  y_i = β₁ + β₂ x_i2 + β₃ x_i3 + γ₁ ŷ_i² + e_i,   H₀: γ₁ = 0
  y_i = β₁ + β₂ x_i2 + β₃ x_i3 + γ₁ ŷ_i² + γ₂ ŷ_i³ + e_i,   H₀: γ₁ = γ₂ = 0

Model Selection
  AIC = ln(SSE/N) + 2K/N
  SC = ln(SSE/N) + K ln(N)/N

Collinearity and Omitted Variables
  y_i = β₁ + β₂ x_i2 + β₃ x_i3 + e_i
  var(b₂) = σ² / [(1 − r²₂₃) Σ(x_i2 − x̄₂)²]
  When x₃ is omitted:
    bias(b₂*) = E(b₂*) − β₂ = β₃ × côv(x₂, x₃)/v̂ar(x₂)

Heteroskedasticity
  var(y_i) = var(e_i) = σ_i²
  General variance function:  σ_i² = exp(α₁ + α₂ z_i2 + ⋯ + α_S z_iS)
  Breusch-Pagan and White tests for H₀: α₂ = α₃ = ⋯ = α_S = 0.
  When H₀ is true,  χ² = N × R² ~ χ²_(S−1)
  Goldfeld-Quandt test for H₀: σ_M² = σ_R² versus H₁: σ_M² ≠ σ_R².
  When H₀ is true,  F = σ̂_M²/σ̂_R² ~ F_(N_M−K_M, N_R−K_R)
  Transformed model for var(e_i) = σ_i² = σ² x_i:
    y_i/√x_i = β₁(1/√x_i) + β₂(x_i/√x_i) + e_i/√x_i
  Estimating the variance function:
    ln(ê_i²) = ln(σ_i²) + v_i = α₁ + α₂ z_i2 + ⋯ + α_S z_iS + v_i
  Grouped data:
    var(e_i) = σ_i² = σ_M² for i = 1, 2, …, N_M;  σ_R² for i = 1, 2, …, N_R
  Transformed model for feasible generalized least squares:
    y_i/σ̂_i = β₁(1/σ̂_i) + β₂(x_i/σ̂_i) + e_i/σ̂_i

Regression with Stationary Time Series Variables
  Finite distributed lag model:
    y_t = α + β₀ x_t + β₁ x_{t−1} + β₂ x_{t−2} + ⋯ + β_q x_{t−q} + v_t
  Correlogram:
    r_k = Σ(y_t − ȳ)(y_{t−k} − ȳ) / Σ(y_t − ȳ)²
    For H₀: ρ_k = 0,  z = √T r_k ~ N(0, 1)
  LM test:
    y_t = β₁ + β₂ x_t + ρ ê_{t−1} + v̂_t;  test H₀: ρ = 0 with a t-test
    ê_t = γ₁ + γ₂ x_t + ρ ê_{t−1} + v̂_t;  test using LM = T × R²
  AR(1) error:  y_t = β₁ + β₂ x_t + e_t,  e_t = ρ e_{t−1} + v_t
  Nonlinear least squares estimation:
    y_t = β₁(1 − ρ) + β₂ x_t + ρ y_{t−1} − β₂ ρ x_{t−1} + v_t
  ARDL(p, q) model:
    y_t = δ + δ₀ x_t + δ₁ x_{t−1} + ⋯ + δ_q x_{t−q} + θ₁ y_{t−1} + ⋯ + θ_p y_{t−p} + v_t
  AR(p) forecasting model:
    y_t = δ + θ₁ y_{t−1} + θ₂ y_{t−2} + ⋯ + θ_p y_{t−p} + v_t
  Exponential smoothing:  ŷ_t = α y_{t−1} + (1 − α) ŷ_{t−1}
  Multiplier analysis:
    δ₀ + δ₁L + δ₂L² + ⋯ + δ_qL^q
      = (1 − θ₁L − θ₂L² − ⋯ − θ_pL^p)(β₀ + β₁L + β₂L² + ⋯)

Unit Roots and Cointegration
  Unit root test for stationarity, null hypothesis H₀: γ = 0
  Dickey-Fuller Test 1 (no constant and no trend):  Δy_t = γ y_{t−1} + v_t
  Dickey-Fuller Test 2 (with constant but no trend):  Δy_t = α + γ y_{t−1} + v_t
  Dickey-Fuller Test 3 (with constant and with trend):  Δy_t = α + γ y_{t−1} + λt + v_t
  Augmented Dickey-Fuller tests:
    Δy_t = α + γ y_{t−1} + Σ_{s=1..m} a_s Δy_{t−s} + v_t
  Test for cointegration:  Δê_t = γ ê_{t−1} + v_t
  Random walk:  y_t = y_{t−1} + v_t
  Random walk with drift:  y_t = α + y_{t−1} + v_t
  Random walk model with drift and time trend:  y_t = α + δt + y_{t−1} + v_t

Panel Data
  Pooled least squares regression:
    y_it = β₁ + β₂ x_2it + β₃ x_3it + e_it
  Cluster-robust standard errors:  cov(e_it, e_is) = c_ts
  Fixed effects model (β_1i not random):
    y_it = β_1i + β₂ x_2it + β₃ x_3it + e_it
    y_it − ȳ_i = β₂(x_2it − x̄_2i) + β₃(x_3it − x̄_3i) + (e_it − ē_i)
  Random effects model (β_1i = β̄₁ + u_i random):
    y_it = β_1i + β₂ x_2it + β₃ x_3it + e_it
    y_it − α ȳ_i = β̄₁(1 − α) + β₂(x_2it − α x̄_2i) + β₃(x_3it − α x̄_3i) + v*_it
    α = 1 − σ_e / √(T σ_u² + σ_e²)
  Hausman test:
    t = (b_FE,k − b_RE,k) / [v̂ar(b_FE,k) − v̂ar(b_RE,k)]^(1/2)
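The simple-regression formulas above (slope, intercept, estimated error variance, R², and the prediction interval) can be worked through by hand on a tiny data set. A minimal sketch; the data are invented, and t_c = 3.182 is the 5% two-tail t(3) critical value, hard-coded here rather than looked up:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.9]
N = len(x)
xbar, ybar = sum(x) / N, sum(y) / N
Sxx = sum((xi - xbar) ** 2 for xi in x)

# Least squares estimators
b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / Sxx
b1 = ybar - b2 * xbar

# Residuals, sums of squares, and goodness of fit
ehat = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]
SSE = sum(e * e for e in ehat)
SST = sum((yi - ybar) ** 2 for yi in y)
SSR = sum((b1 + b2 * xi - ybar) ** 2 for xi in x)
R2 = 1 - SSE / SST                 # equals SSR/SST
sigma2_hat = SSE / (N - 2)         # estimated error variance
se_b2 = (sigma2_hat / Sxx) ** 0.5  # standard error of the slope

# Prediction at x0: var(f) = sigma2_hat * (1 + 1/N + (x0 - xbar)^2 / Sxx)
x0, t_c = 6.0, 3.182
y0_hat = b1 + b2 * x0
se_f = (sigma2_hat * (1 + 1 / N + (x0 - xbar) ** 2 / Sxx)) ** 0.5
interval = (y0_hat - t_c * se_f, y0_hat + t_c * se_f)
```

Because x0 lies outside the sample, (x0 − x̄)²/Sxx widens the interval, as the var(f) formula predicts.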
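The overall-significance F statistic and the AIC/SC criteria are simple arithmetic once the sums of squares are in hand. A numerical sketch; the SST, SSE, N, K values are invented:

```python
from math import log

SST, SSE = 39.272, 0.068   # hypothetical total and residual sums of squares
N, K = 5, 2                # sample size and number of parameters

# H0: all slope coefficients are zero
F = ((SST - SSE) / (K - 1)) / (SSE / (N - K))

# Model selection criteria (smaller is better)
AIC = log(SSE / N) + 2 * K / N
SC = log(SSE / N) + K * log(N) / N
```

SC penalizes extra parameters by ln(N)/N per parameter instead of 2/N, so for N > e² ≈ 7.4 it favors smaller models than AIC does.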
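The heteroskedasticity transformation for var(e_i) = σ²x_i can be sketched by dividing the model through by √x_i and solving the 2×2 normal equations for the two transformed regressors directly. The data are invented; a real analysis would use a regression routine:

```python
from math import sqrt

x = [1.0, 2.0, 4.0, 5.0]
y = [3.0, 5.1, 8.9, 11.2]

z1 = [1 / sqrt(xi) for xi in x]              # transformed intercept regressor
z2 = [xi / sqrt(xi) for xi in x]             # transformed slope regressor
ys = [yi / sqrt(xi) for xi, yi in zip(x, y)] # transformed dependent variable

# Normal equations for the no-intercept regression of ys on (z1, z2):
# [S z1^2   S z1 z2] [b1]   [S z1 ys]
# [S z1 z2  S z2^2 ] [b2] = [S z2 ys]
a11 = sum(u * u for u in z1)
a12 = sum(u * v for u, v in zip(z1, z2))
a22 = sum(v * v for v in z2)
c1 = sum(u * w for u, w in zip(z1, ys))
c2 = sum(v * w for v, w in zip(z2, ys))
det = a11 * a22 - a12 * a12
b1 = (c1 * a22 - c2 * a12) / det
b2 = (a11 * c2 - a12 * c1) / det
```

The transformed errors e_i/√x_i have constant variance σ², so ordinary least squares on the starred variables is efficient.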
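The correlogram and exponential-smoothing formulas above translate directly into code. A minimal sketch with an invented series:

```python
def autocorr(y, k):
    """Sample autocorrelation r_k = S(y_t - ybar)(y_{t-k} - ybar) / S(y_t - ybar)^2."""
    T = len(y)
    ybar = sum(y) / T
    num = sum((y[t] - ybar) * (y[t - k] - ybar) for t in range(k, T))
    den = sum((yt - ybar) * (yt - ybar) for yt in y)
    return num / den

def exp_smooth(y, alpha, y0):
    """yhat_t = alpha * y_{t-1} + (1 - alpha) * yhat_{t-1}, started at y0."""
    yhat = [y0]
    for t in range(1, len(y) + 1):
        yhat.append(alpha * y[t - 1] + (1 - alpha) * yhat[-1])
    return yhat

series = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0]
r1 = autocorr(series, 1)
forecasts = exp_smooth(series, 0.5, series[0])
```

For a significance check, the sheet's z = √T r_k statistic can be compared with standard normal critical values.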
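The fixed-effects "within" transformation above (subtracting individual means) removes the individual intercepts β_1i, after which a pooled slope can be computed on the demeaned data. A sketch with an invented two-individual panel and a single regressor:

```python
# (x_it, y_it) observations per individual; the two individuals share a slope
# near 2 but have different intercepts, which demeaning removes.
panel = {
    1: [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)],
    2: [(1.0, 5.0), (2.0, 7.2), (3.0, 8.8)],
}

demeaned = []
for i, obs in panel.items():
    xbar_i = sum(x for x, _ in obs) / len(obs)
    ybar_i = sum(y for _, y in obs) / len(obs)
    demeaned += [(x - xbar_i, y - ybar_i) for x, y in obs]

# No-intercept least squares on the demeaned data = within (fixed effects) slope
b2_fe = sum(x * y for x, y in demeaned) / sum(x * x for x, _ in demeaned)
```

Note the transformation wipes out any regressor that is constant within an individual, which is why time-invariant variables cannot be estimated by fixed effects.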

This note was uploaded on 09/26/2011 for the course ARE 106 taught by Professor Havenner during the Spring '09 term at UC Davis.
