Lecture 3. A Simple Linear Regression Model
A simple (one-variable) linear regression (SLR) model is given by

Y_j = \beta_0 + \beta_1 X_j + \varepsilon_j, \qquad j = 1, \dots, n, \qquad (3.1)
where Y_j is a dependent variable and X_j is an independent (explanatory) variable; \beta_0 and \beta_1 are called the intercept and the slope, respectively.
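To make the setup concrete, here is a minimal Python sketch that simulates data from model (3.1); the sample size, the coefficient values \beta_0 = 2, \beta_1 = 0.5, and the error standard deviation are illustrative assumptions, not values from the lecture.

    import numpy as np

    rng = np.random.default_rng(42)

    n = 50                                 # sample size (assumed for illustration)
    beta0_true, beta1_true = 2.0, 0.5      # assumed intercept and slope
    sigma = 1.0                            # common error standard deviation

    X = rng.uniform(0, 10, size=n)         # explanatory variable
    eps = rng.normal(0, sigma, size=n)     # uncorrelated errors with common variance
    Y = beta0_true + beta1_true * X + eps  # model (3.1): Y_j = beta_0 + beta_1 X_j + eps_j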
The Gauss–Markov Theorem. If the \varepsilon_j are uncorrelated random variables with common variance, then of all unbiased estimators \beta_0^* and \beta_1^* that are linear functions of the Y_j, the least squares (LS) estimators have the smallest variance.
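To see why the LS estimators belong to this class, note that the slope estimator in (3.4) below can be written as a linear combination of the observations (with the X_j treated as fixed constants):

\beta_1^* = \sum_{j=1}^{n} w_j Y_j, \qquad w_j = \frac{X_j - \bar{X}_n}{\sum_{k=1}^{n} (X_k - \bar{X}_n)^2},

since \sum_{j=1}^{n} (X_j - \bar{X}_n)\,\bar{Y}_n = 0; a similar expansion shows that \beta_0^* is also linear in the Y_j.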
Using historical data Y_1, \dots, Y_n we want to obtain optimal estimates \beta_0^* and \beta_1^*, and further use this information to obtain forecasts of Y.
Let us denote the estimated (historical) values of Y_j by Y_j^* for j = 1, \dots, n, i.e.

Y_j^* = \beta_0^* + \beta_1^* X_j.
Then, using the LS approach, we minimize the sum of squares of the estimated residuals (SSE):

SSE = \sum_{j=1}^{n} \left( Y_j - Y_j^* \right)^2 = \sum_{j=1}^{n} \left( Y_j - \beta_0^* - \beta_1^* X_j \right)^2. \qquad (3.2)
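As a small sketch (assuming X and Y are NumPy arrays, as in the simulation above), the SSE in (3.2) can be computed directly for any trial pair of coefficients:

    import numpy as np

    def sse(beta0, beta1, X, Y):
        # Equation (3.2): sum of squared estimated residuals
        residuals = Y - (beta0 + beta1 * X)
        return np.sum(residuals ** 2)

The LS estimates are the pair of coefficients that minimizes this function.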
Hence, we need to obtain the first partial derivatives of SSE with respect to each of \beta_0^* and \beta_1^*, set them both equal to 0, and solve the resulting equations simultaneously. The solution is given by

\beta_0^* = \bar{Y}_n - \beta_1^* \bar{X}_n \qquad (3.3)
and

\beta_1^* = \frac{S_{XY}}{SS_X} = \frac{\sum_{j=1}^{n} \left( X_j - \bar{X}_n \right)\left( Y_j - \bar{Y}_n \right)}{\sum_{j=1}^{n} \left( X_j - \bar{X}_n \right)^2}. \qquad (3.4)
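The closed-form solution (3.3)–(3.4) is easy to check numerically. The following sketch uses a small hypothetical data set (the numbers are made up for illustration) and compares the hand-computed estimates with NumPy's least squares polynomial fit:

    import numpy as np

    # Small hypothetical data set (made-up numbers, for illustration only)
    X = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    Y = np.array([2.1, 3.0, 3.8, 5.2, 5.9, 7.1])

    X_bar, Y_bar = X.mean(), Y.mean()

    # Equation (3.4): beta_1^* = S_XY / SS_X
    S_XY = np.sum((X - X_bar) * (Y - Y_bar))
    SS_X = np.sum((X - X_bar) ** 2)
    beta1_star = S_XY / SS_X

    # Equation (3.3): beta_0^* = Ybar_n - beta_1^* * Xbar_n
    beta0_star = Y_bar - beta1_star * X_bar

    # Cross-check with NumPy's degree-1 least squares fit
    slope, intercept = np.polyfit(X, Y, deg=1)

    print(beta0_star, beta1_star)   # closed-form LS estimates
    print(intercept, slope)         # should agree up to rounding

Both approaches should agree up to floating-point rounding, since a degree-1 polynomial fit also minimizes the SSE in (3.2).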
It is clear from (3.3) that the two estimates \beta_0^* and \beta_1^* are related. (In fact, we should not