4. Simple Regression Models

Chapter 4 will expand on concepts introduced in Chapter 3 to cover the following:
1) Estimating parameters using Ordinary Least Squares (OLS) estimation
2) Hypothesis tests of OLS coefficients
3) Confidence intervals of OLS coefficients
4) Prediction
5) SHAZAM use

4.1 OLS and Goodness of Fit

Reviewing from Chapter 3, we have our model:

$$Y_i = b_1 + b_2 X_i + \epsilon_i$$

where:
- $b_1$ and $b_2$ are unknown (nonrandom) coefficients
- the $X$ values are nonrandom
- the error term $\epsilon_i$ is random, with:
  - $E(\epsilon_i) = 0$ (no expected error)
  - $\mathrm{Var}(\epsilon_i) = \sigma^2$ (constant variance)
  - $\mathrm{Cov}(\epsilon_i, \epsilon_j) = 0$ for $i \neq j$ (no covariance between errors)

Aside from our model, we have a data set containing N observations of actual X and Y values. Our data combine with our model and assumptions to estimate our coefficients:

$$\hat{b}_2 = \frac{\sum_{i=1}^{N} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{N} (X_i - \bar{X})^2}, \qquad \hat{b}_1 = \bar{Y} - \hat{b}_2 \bar{X}$$

4.1 Predicted and Error

Using our estimated coefficients and ACTUAL X values, we obtain ESTIMATED or PREDICTED Y values:

$$\hat{Y}_i = \hat{b}_1 + \hat{b}_2 X_i$$

Using these predicted values, we can estimate the error, or residual:

$$\hat{e}_i = Y_i - \hat{Y}_i$$

4.1.1 Deriving OLS

OLS is obtained by minimizing the sum of the squared errors. This is done using partial derivatives:

$$\min_{\hat{b}_1, \hat{b}_2} \sum_{i=1}^{N} e_i^2, \quad \text{where } e_i = Y_i - \hat{b}_1 - \hat{b}_2 X_i$$

Setting each partial derivative to zero gives the first-order conditions:

$$\frac{\partial \sum e_i^2}{\partial \hat{b}_1} = -2 \sum_{i=1}^{N} (Y_i - \hat{b}_1 - \hat{b}_2 X_i) = 0$$

$$\frac{\partial \sum e_i^2}{\partial \hat{b}_2} = -2 \sum_{i=1}^{N} X_i (Y_i - \hat{b}_1 - \hat{b}_2 X_i) = 0$$

These can simplify to: ∑...
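The estimator and residual formulas above can be illustrated numerically. The sketch below (in Python; the data set is made up purely for illustration and does not come from the course) computes $\hat{b}_1$ and $\hat{b}_2$ from the summation formulas, then forms the predicted values and residuals:

```python
# Illustrative OLS fit using the Chapter 4 closed-form formulas.
# NOTE: X and Y below are hypothetical data invented for this example.
X = [1.0, 2.0, 3.0, 4.0, 5.0]
Y = [2.1, 3.9, 6.2, 7.8, 10.1]
N = len(X)

x_bar = sum(X) / N
y_bar = sum(Y) / N

# Slope: b2_hat = sum((Xi - Xbar)(Yi - Ybar)) / sum((Xi - Xbar)^2)
b2_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(X, Y))
          / sum((x - x_bar) ** 2 for x in X))

# Intercept: b1_hat = Ybar - b2_hat * Xbar
b1_hat = y_bar - b2_hat * x_bar

# Predicted values use the ACTUAL X values, as the notes emphasize.
Y_hat = [b1_hat + b2_hat * x for x in X]

# Residuals: e_i = Y_i - Y_hat_i
e = [y - yh for y, yh in zip(Y, Y_hat)]

print("b1_hat:", b1_hat, "b2_hat:", b2_hat)
print("sum of residuals:", sum(e))  # numerically zero by construction
```

Note that the residuals sum to (numerically) zero; this is a direct consequence of the first derivation condition in section 4.1.1.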
This note was uploaded on 03/14/2009 for the course ECON ECON 299 taught by Professor Priemaza during the Spring '08 term at University of Alberta.