2.4 Assessing the Least Squares Fit
2.4.4 The Variances and Covariances of b1 and b2

MAJOR POINTS ABOUT THE VARIANCES AND COVARIANCES OF b1 AND b2

2. The larger the sum of squares, $\sum (x_i - \bar{x})^2$, the smaller the variances of the least squares estimators and the more precisely we can estimate the unknown parameters.

3. The larger the sample size N, the smaller the variances and covariance of the least squares estimators.
Principles of Econometrics, 4th Edition, Page 30. Chapter 2: The Simple Linear Regression Model
Figure 2.11 The influence of variation in the explanatory variable x on precision of estimation: (a) low x variation, low precision; (b) high x variation, high precision.
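The two major points above can be seen directly in the formula $\mathrm{var}(b_2) = \sigma^2 / \sum (x_i - \bar{x})^2$. The following sketch (all numbers are made up for illustration; they are not from the food expenditure data) evaluates that formula for a low-variation x sample, a high-variation x sample, and a larger sample:

```python
import random

# Illustrative simulation (hypothetical data): how the spread of x and the
# sample size N affect var(b2) = sigma^2 / sum((x_i - xbar)^2).

def var_b2(x, sigma2=1.0):
    """Theoretical variance of the OLS slope estimator b2."""
    xbar = sum(x) / len(x)
    return sigma2 / sum((xi - xbar) ** 2 for xi in x)

random.seed(1)
x_low  = [random.uniform(9, 11) for _ in range(40)]   # low x variation
x_high = [random.uniform(0, 20) for _ in range(40)]   # high x variation
x_big  = [random.uniform(0, 20) for _ in range(400)]  # same spread, larger N

print(var_b2(x_low))   # largest variance: little variation in x
print(var_b2(x_high))  # smaller: more variation in x (point 2)
print(var_b2(x_big))   # smaller still: larger sample size (point 3)
```

The ordering of the three printed variances mirrors panels (a) and (b) of Figure 2.11: more spread in x, or more observations, pins down the slope more precisely.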
The variance of b2 is defined as

$$\mathrm{var}(b_2) = E\big[b_2 - E(b_2)\big]^2$$
2.5 The Gauss-Markov Theorem

GAUSS-MARKOV THEOREM
Under the assumptions SR1-SR5 of the linear regression model, the estimators b1 and b2 have the smallest variance of all linear and unbiased estimators of β1 and β2. They are the Best Linear Unbiased Estimators (BLUE) of β1 and β2.
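The theorem can be illustrated by Monte Carlo. The sketch below (hypothetical parameter values, not from the text) compares the OLS slope b2 with one other linear unbiased estimator chosen for illustration: the slope through the first and last observations. Both are unbiased, but OLS has the smaller sampling variance, as Gauss-Markov guarantees:

```python
import random
import statistics

# Illustrative Monte Carlo (hypothetical parameters): OLS slope vs. a
# cruder linear unbiased estimator (slope through the endpoint x values).
# Both should average to beta2, but OLS should have the smaller variance.

random.seed(42)
beta1, beta2, sigma = 2.0, 0.5, 1.0
x = [float(i) for i in range(1, 21)]          # fixed regressors, N = 20
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

ols, crude = [], []
for _ in range(5000):
    y = [beta1 + beta2 * xi + random.gauss(0, sigma) for xi in x]
    ybar = sum(y) / len(y)
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    ols.append(b2)
    crude.append((y[-1] - y[0]) / (x[-1] - x[0]))  # endpoints only

print(statistics.mean(ols), statistics.variance(ols))      # ≈ 0.5, small
print(statistics.mean(crude), statistics.variance(crude))  # ≈ 0.5, larger
```

Note that the theorem only compares *linear, unbiased* estimators: nonlinear or biased estimators may have smaller variance, but they are outside its scope.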
Probability Distributions of the Least Squares Estimators

If we make the normality assumption (assumption SR6 about the error term), then the least squares estimators are normally distributed:

$$b_1 \sim N\!\left(\beta_1,\ \frac{\sigma^2 \sum x_i^2}{N \sum (x_i - \bar{x})^2}\right) \qquad (2.17)$$

$$b_2 \sim N\!\left(\beta_2,\ \frac{\sigma^2}{\sum (x_i - \bar{x})^2}\right) \qquad (2.18)$$
2.6 The Probability Distributions of the Least Squares Estimators

A CENTRAL LIMIT THEOREM
If assumptions SR1-SR5 hold, and if the sample size N is sufficiently large, then the least squares estimators have a distribution that approximates the normal distributions shown in Eq. 2.17 and Eq. 2.18.
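The central limit result can be checked by simulation. In this sketch (hypothetical parameters, not from the text), the errors are deliberately uniform rather than normal, yet for N = 100 the sampling distribution of b2 is centered on β2 with variance close to Eq. 2.18:

```python
import random
import statistics

# Illustrative check (hypothetical parameters): with non-normal (uniform)
# errors but a large N, b2 behaves approximately as N(beta2, sigma^2/sxx).

random.seed(7)
beta1, beta2 = 1.0, 2.0
N = 100
x = [float(i) for i in range(N)]
xbar = sum(x) / N
sxx = sum((xi - xbar) ** 2 for xi in x)
sigma2 = 36.0 / 12.0                       # variance of uniform(-3, 3) errors

draws = []
for _ in range(4000):
    y = [beta1 + beta2 * xi + random.uniform(-3, 3) for xi in x]
    ybar = sum(y) / N
    draws.append(sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx)

print(statistics.mean(draws))       # ≈ beta2 = 2.0
print(statistics.variance(draws))   # ≈ sigma2 / sxx, as in Eq. 2.18
print(sigma2 / sxx)
```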
2.7 Estimating the Variance of the Error Term

Variance of Error Term

Since the expectation is an average value, we might consider estimating σ² as the average of the squared errors:

$$\hat{\sigma}^2 = \frac{\sum e_i^2}{N}$$

The random errors $e_i$ are unobservable, however. The least squares residuals are obtained by replacing the unknown parameters by their least squares estimates:

$$\hat{e}_i = y_i - \hat{y}_i = y_i - b_1 - b_2 x_i$$

Using these residuals, with the number of regression parameters (here two) subtracted from N in the denominator, produces an unbiased estimator:

$$\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - 2}$$

so that

$$E(\hat{\sigma}^2) = \sigma^2$$
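The unbiasedness claim can be verified by simulation. The sketch below (hypothetical parameters, not from the text) fits OLS repeatedly and shows that dividing the sum of squared residuals by N − 2 averages out to σ², while dividing by N is biased downward:

```python
import random
import statistics

# Illustrative check (hypothetical parameters): SSE/(N-2) is unbiased for
# sigma^2; SSE/N has expectation sigma^2 * (N-2)/N and so underestimates it.

random.seed(3)
beta1, beta2, sigma2 = 1.0, 0.5, 4.0
x = [float(i) for i in range(1, 11)]          # N = 10
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)

est_n2, est_n = [], []
for _ in range(20000):
    y = [beta1 + beta2 * xi + random.gauss(0, sigma2 ** 0.5) for xi in x]
    ybar = sum(y) / len(y)
    b2 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b1 = ybar - b2 * xbar
    sse = sum((yi - b1 - b2 * xi) ** 2 for xi, yi in zip(x, y))
    est_n2.append(sse / (len(x) - 2))         # unbiased estimator
    est_n.append(sse / len(x))                # biased downward

print(statistics.mean(est_n2))   # ≈ sigma2 = 4.0
print(statistics.mean(est_n))    # ≈ 4.0 * 8/10 = 3.2
```

The N − 2 correction compensates for the two parameters (b1 and b2) that were estimated from the same data used to form the residuals.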
2.7.2 Calculations for the Food Expenditure Data

$$\hat{\sigma}^2 = \frac{\sum \hat{e}_i^2}{N - 2} = \frac{304505.2}{38} = 8013.29$$

Table 2.3 Least Squares Residuals
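As a quick arithmetic check of the calculation above (the sum of squared residuals 304505.2 and N = 40 come from the text; the code itself is only an illustration):

```python
# Verify the food expenditure variance estimate from the slide.
sse = 304505.2          # sum of squared residuals (from Table 2.3)
N = 40                  # food expenditure sample size
sigma2_hat = sse / (N - 2)
print(round(sigma2_hat, 2))  # 8013.29
```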