EC3062 ECONOMETRICS

THE MULTIPLE REGRESSION MODEL

Consider T realisations of the regression equation

(1)  $y = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k + \varepsilon$,

which can be written in the following form:

(2)  $$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_T \end{bmatrix} = \begin{bmatrix} 1 & x_{11} & \cdots & x_{1k} \\ 1 & x_{21} & \cdots & x_{2k} \\ \vdots & \vdots & & \vdots \\ 1 & x_{T1} & \cdots & x_{Tk} \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} + \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_T \end{bmatrix}.$$

This can be represented in summary notation by

(3)  $y = X\beta + \varepsilon$.

The object is to derive an expression for the ordinary least-squares estimates of the elements of the parameter vector $\beta = [\beta_0, \beta_1, \ldots, \beta_k]'$.
The ordinary least-squares (OLS) estimate of $\beta$ is the value that minimises

(4)  $$S(\beta) = \varepsilon'\varepsilon = (y - X\beta)'(y - X\beta) = y'y - y'X\beta - \beta'X'y + \beta'X'X\beta = y'y - 2y'X\beta + \beta'X'X\beta.$$

According to the rules of matrix differentiation, the derivative is

(5)  $$\frac{\partial S}{\partial \beta} = -2y'X + 2\beta'X'X.$$

Setting this to zero gives $0 = \beta'X'X - y'X$, which is transposed to provide the so-called normal equations:

(6)  $X'X\beta = X'y$.

On the assumption that the inverse matrix exists, the equations have a unique solution, which is the vector of ordinary least-squares estimates:

(7)  $\hat{\beta} = (X'X)^{-1}X'y$.
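The computation in (7) can be sketched in numpy. The data below are simulated and the variable names are illustrative, not part of the notes; solving the normal equations (6) directly is numerically preferable to forming the inverse explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: T = 100 observations, two regressors plus an intercept.
T = 100
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=T)

# Normal equations (6): solve X'X beta = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# np.linalg.lstsq solves the same least-squares problem directly.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Both routes give the same estimates, and with this little noise they sit close to the true coefficients.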
The Decomposition of the Sum of Squares

The equation $y = X\hat{\beta} + e$ decomposes $y$ into a regression component $X\hat{\beta}$ and a residual component $e = y - X\hat{\beta}$. These are mutually orthogonal, since (6) indicates that $X'(y - X\hat{\beta}) = 0$.

Define the projection matrix $P = X(X'X)^{-1}X'$, which is symmetric and idempotent, such that

$$P = P' = P^2 \quad\text{or, equivalently,}\quad P'(I - P) = 0.$$

Then, $X\hat{\beta} = Py$ and $e = y - X\hat{\beta} = (I - P)y$, and, therefore, the regression decomposition is $y = Py + (I - P)y$. The conditions on $P$ imply that

(8)  $$y'y = \{Py + (I - P)y\}'\{Py + (I - P)y\} = y'Py + y'(I - P)y = \hat{\beta}'X'X\hat{\beta} + e'e.$$
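A minimal numerical check of the properties of $P$ and of the decomposition (8), using simulated data with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
T, k = 50, 3
X = rng.normal(size=(T, k))
y = rng.normal(size=T)

# Projection matrix P = X (X'X)^{-1} X'.
P = X @ np.linalg.inv(X.T @ X) @ X.T

# Symmetry and idempotency: P = P' = P^2.
sym_ok = np.allclose(P, P.T)
idem_ok = np.allclose(P, P @ P)

# Decomposition (8): y'y = y'Py + y'(I - P)y.
total = y @ y
regression_ss = y @ P @ y
residual_ss = y @ (np.eye(T) - P) @ y
```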
This is an instance of Pythagoras' theorem; and the equation indicates that the total sum of squares $y'y$ is equal to the regression sum of squares $\hat{\beta}'X'X\hat{\beta}$ plus the residual or error sum of squares $e'e$.

By projecting $y$ perpendicularly onto the manifold of $X$, the distance between $y$ and $Py = X\hat{\beta}$ is minimised.

Proof. Let $\gamma = Pg$ be an arbitrary vector in the manifold of $X$. Then

(9)  $$(y - \gamma)'(y - \gamma) = \{(y - X\hat{\beta}) + (X\hat{\beta} - \gamma)\}'\{(y - X\hat{\beta}) + (X\hat{\beta} - \gamma)\} = \{(I - P)y + P(y - g)\}'\{(I - P)y + P(y - g)\}.$$

The properties of $P$ indicate that

(10)  $$(y - \gamma)'(y - \gamma) = y'(I - P)y + (y - g)'P(y - g) = e'e + (X\hat{\beta} - \gamma)'(X\hat{\beta} - \gamma).$$

Since the squared distance $(X\hat{\beta} - \gamma)'(X\hat{\beta} - \gamma)$ is nonnegative, it follows that $(y - \gamma)'(y - \gamma) \geq e'e$, where $e = y - X\hat{\beta}$; which proves the assertion.
The Coefficient of Determination

A summary measure of the extent to which the ordinary least-squares regression accounts for the observed vector $y$ is provided by the coefficient of determination. This is defined by

(11)  $$R^2 = \frac{\hat{\beta}'X'X\hat{\beta}}{y'y} = \frac{y'Py}{y'y}.$$

The measure is just the square of the cosine of the angle between the vectors $y$ and $Py = X\hat{\beta}$; and the inequality $0 \leq R^2 \leq 1$ follows from the fact that the cosine of any angle must lie between $-1$ and $+1$.

If $X$ is a square matrix of full rank, with as many regressors as observations, then $X^{-1}$ exists and $P = X(X'X)^{-1}X' = X\{X^{-1}X'^{-1}\}X' = I$, and so $R^2 = 1$. If $X'y = 0$, then $Py = 0$ and $R^2 = 0$. But, if $y$ is distributed continuously, then this event has a zero probability.
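The uncentred $R^2$ of (11) can be computed from either form of the ratio; a small simulated sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=(T, 2))])
beta = np.array([0.5, 1.0, -2.0])
y = X @ beta + rng.normal(size=T)

# Projection onto the columns of X, then R^2 = y'Py / y'y.
P = X @ np.linalg.solve(X.T @ X, X.T)
R2 = (y @ P @ y) / (y @ y)

# Equivalent form beta'X'X beta / y'y from (11).
b = np.linalg.solve(X.T @ X, X.T @ y)
R2_alt = (b @ X.T @ X @ b) / (y @ y)
```

Note that (11) is the uncentred definition used in these notes; many textbooks instead centre $y$ about its mean.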
Figure 1. The vector $Py = X\hat{\beta}$ is formed by the orthogonal projection of the vector $y$ onto the subspace spanned by the columns of the matrix $X$.
The Partitioned Regression Model

Consider partitioning the regression equation of (3) to give

(12)  $$y = [X_1 \; X_2]\begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix} + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon,$$

where $[X_1, X_2] = X$ and $[\beta_1', \beta_2']' = \beta$. The normal equations of (6) can be partitioned likewise:

(13)  $X_1'X_1\beta_1 + X_1'X_2\beta_2 = X_1'y$,

(14)  $X_2'X_1\beta_1 + X_2'X_2\beta_2 = X_2'y$.

From (13), we get $X_1'X_1\beta_1 = X_1'(y - X_2\beta_2)$, which gives

(15)  $\hat{\beta}_1 = (X_1'X_1)^{-1}X_1'(y - X_2\hat{\beta}_2)$.

To obtain an expression for $\hat{\beta}_2$, we must eliminate $\beta_1$ from equation (14). For this, we multiply equation (13) by $X_2'X_1(X_1'X_1)^{-1}$ to give

(16)  $X_2'X_1\beta_1 + X_2'X_1(X_1'X_1)^{-1}X_1'X_2\beta_2 = X_2'X_1(X_1'X_1)^{-1}X_1'y$.
Taking equation (16) from equation (14) gives

(17)  $$\{X_2'X_2 - X_2'X_1(X_1'X_1)^{-1}X_1'X_2\}\beta_2 = X_2'y - X_2'X_1(X_1'X_1)^{-1}X_1'y.$$

On defining $P_1 = X_1(X_1'X_1)^{-1}X_1'$, equation (17) can be written as

(19)  $X_2'(I - P_1)X_2\beta_2 = X_2'(I - P_1)y$,

whence

(20)  $\hat{\beta}_2 = \{X_2'(I - P_1)X_2\}^{-1}X_2'(I - P_1)y$.
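Equation (20) can be verified numerically: the coefficients on $X_2$ from the full regression coincide with those obtained through the matrix $I - P_1$. A sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 80
X1 = np.column_stack([np.ones(T), rng.normal(size=T)])
X2 = rng.normal(size=(T, 2))
X = np.hstack([X1, X2])
y = rng.normal(size=T)

# Full regression on X = [X1, X2].
beta_full = np.linalg.solve(X.T @ X, X.T @ y)

# Equation (20): beta2 via the annihilator I - P1.
P1 = X1 @ np.linalg.solve(X1.T @ X1, X1.T)
M1 = np.eye(T) - P1
beta2 = np.linalg.solve(X2.T @ M1 @ X2, X2.T @ M1 @ y)
```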
The Regression Model with an Intercept

Consider again the equations

(22)  $y = \iota\alpha + Z\beta_z + \varepsilon$,

where $\iota = [1, 1, \ldots, 1]'$ is the summation vector and $Z = [x_{tj}]$, with $t = 1, \ldots, T$ and $j = 1, \ldots, k$, is the matrix of the explanatory variables. This is a case of the partitioned regression equation of (12). By setting $X_1 = \iota$ and $X_2 = Z$ and by taking $\beta_1 = \alpha$, $\beta_2 = \beta_z$, the equations (15) and (20) give the following estimates of $\alpha$ and $\beta_z$:

(23)  $\hat{\alpha} = (\iota'\iota)^{-1}\iota'(y - Z\hat{\beta}_z)$,

and

(24)  $\hat{\beta}_z = \{Z'(I - P_\iota)Z\}^{-1}Z'(I - P_\iota)y$, with $P_\iota = \iota(\iota'\iota)^{-1}\iota' = \frac{1}{T}\iota\iota'$.
To understand the effect of the operator $P_\iota$, consider

(25)  $$\iota'y = \sum_{t=1}^T y_t, \qquad (\iota'\iota)^{-1}\iota'y = \frac{1}{T}\sum_{t=1}^T y_t = \bar{y}, \qquad\text{and}\qquad P_\iota y = \iota(\iota'\iota)^{-1}\iota'y = \iota\bar{y} = [\bar{y}, \bar{y}, \ldots, \bar{y}]'.$$

Here, $P_\iota y = [\bar{y}, \bar{y}, \ldots, \bar{y}]'$ is a column vector containing $T$ repetitions of the sample mean.

From the above, it can be understood that, if $x = [x_1, x_2, \ldots, x_T]'$ is a vector of $T$ elements, then

(26)  $$x'(I - P_\iota)x = \sum_{t=1}^T x_t(x_t - \bar{x}) = \sum_{t=1}^T (x_t - \bar{x})x_t = \sum_{t=1}^T (x_t - \bar{x})^2.$$

The final equality depends on the fact that $\sum_{t=1}^T (x_t - \bar{x})\bar{x} = \bar{x}\sum_{t=1}^T (x_t - \bar{x}) = 0$.
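The action of $P_\iota$ and the identity (26) can be checked on a small vector; the names below are illustrative:

```python
import numpy as np

T = 10
x = np.arange(1.0, T + 1)          # x = [1, 2, ..., 10]

iota = np.ones(T)
P_iota = np.outer(iota, iota) / T  # P_iota = iota iota' / T

# P_iota x repeats the sample mean T times.
mean_vec = P_iota @ x

# Equation (26): x'(I - P_iota)x equals the centred sum of squares.
lhs = x @ (np.eye(T) - P_iota) @ x
rhs = np.sum((x - x.mean())**2)
```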
The Regression Model in Deviation Form

Consider the matrix of cross-products in equation (24). This is

(27)  $$Z'(I - P_\iota)Z = \{(I - P_\iota)Z\}'\{(I - P_\iota)Z\} = (Z - \bar{Z})'(Z - \bar{Z}).$$

Here, $\bar{Z}$ contains the sample means of the $k$ explanatory variables repeated $T$ times. The matrix $(I - P_\iota)Z = (Z - \bar{Z})$ contains the deviations of the data points about the sample means. The vector $(I - P_\iota)y = (y - \iota\bar{y})$ may be described likewise.

It follows that the estimate $\hat{\beta}_z = \{Z'(I - P_\iota)Z\}^{-1}Z'(I - P_\iota)y$ is obtained by applying the least-squares regression to the equation

(28)  $$\begin{bmatrix} y_1 - \bar{y} \\ y_2 - \bar{y} \\ \vdots \\ y_T - \bar{y} \end{bmatrix} = \begin{bmatrix} x_{11} - \bar{x}_1 & \cdots & x_{1k} - \bar{x}_k \\ x_{21} - \bar{x}_1 & \cdots & x_{2k} - \bar{x}_k \\ \vdots & & \vdots \\ x_{T1} - \bar{x}_1 & \cdots & x_{Tk} - \bar{x}_k \end{bmatrix} \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_k \end{bmatrix} + \begin{bmatrix} \varepsilon_1 - \bar{\varepsilon} \\ \varepsilon_2 - \bar{\varepsilon} \\ \vdots \\ \varepsilon_T - \bar{\varepsilon} \end{bmatrix},$$

which lacks an intercept term.
In summary notation, the equation may be denoted by

(29)  $y - \iota\bar{y} = [Z - \bar{Z}]\beta_z + (\varepsilon - \iota\bar{\varepsilon})$.

Observe that it is unnecessary to take the deviations of $y$. The result is the same whether we regress $y$ or $y - \iota\bar{y}$ on $[Z - \bar{Z}]$. The result is due to the symmetry and idempotency of the operator $(I - P_\iota)$, whereby $Z'(I - P_\iota)y = \{(I - P_\iota)Z\}'\{(I - P_\iota)y\}$.

Once the value for $\hat{\beta}_z$ is available, the estimate for the intercept term can be recovered from the equation (23), which can be written as

(30)  $$\hat{\alpha} = \bar{y} - \bar{z}'\hat{\beta}_z = \bar{y} - \sum_{j=1}^k \bar{x}_j\hat{\beta}_j,$$

where $\bar{z} = [\bar{x}_1, \ldots, \bar{x}_k]'$ is the vector of the sample means of the explanatory variables.
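The equivalence between the regression with an intercept (22) and the deviation-form regression (29), with the intercept recovered as in (30), can be illustrated with simulated data:

```python
import numpy as np

rng = np.random.default_rng(4)
T, k = 60, 2
Z = rng.normal(size=(T, k))
y = 3.0 + Z @ np.array([1.5, -0.7]) + rng.normal(size=T)

# Regression with an intercept, as in (22).
X = np.column_stack([np.ones(T), Z])
coeffs = np.linalg.solve(X.T @ X, X.T @ y)
alpha_hat, beta_z = coeffs[0], coeffs[1:]

# Deviation form (29): regress centred y on centred Z, no intercept.
Zc = Z - Z.mean(axis=0)
yc = y - y.mean()
beta_dev = np.linalg.solve(Zc.T @ Zc, Zc.T @ yc)

# Intercept recovered as in (30): alpha = ybar - sum xbar_j * beta_j.
alpha_dev = y.mean() - Z.mean(axis=0) @ beta_dev
```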
The Assumptions of the Classical Linear Model

Consider the regression equation

(32)  $y = X\beta + \varepsilon$,

where $y = [y_1, y_2, \ldots, y_T]'$, $\varepsilon = [\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_T]'$, $\beta = [\beta_0, \beta_1, \ldots, \beta_k]'$ and $X = [x_{tj}]$, with $x_{t0} = 1$ for all $t$.

It is assumed that the disturbances have expected values of zero. Thus

(33)  $E(\varepsilon) = 0$ or, equivalently, $E(\varepsilon_t) = 0$, $t = 1, \ldots, T$.

Next, it is assumed that they are mutually uncorrelated and that they have a common variance. Thus

(34)  $$D(\varepsilon) = E(\varepsilon\varepsilon') = \sigma^2 I, \qquad\text{or}\qquad E(\varepsilon_t\varepsilon_s) = \begin{cases} \sigma^2, & \text{if } t = s; \\ 0, & \text{if } t \neq s. \end{cases}$$

If $t$ is a temporal index, then these assumptions imply that there is no intertemporal correlation in the sequence of disturbances.
13 EC3062 ECONOMETRICS
A conventional assumption, borrowed from the experimental sciences, is
that X is a nonstochastic matrix with linearly independent columns.
Linear independence is necessary in order to distinguish the separate
eﬀects of the k explanatory variables.
In econometrics, it is more appropriate to regard the elements of X
as random variables distributed independently of the disturbances:
(37) E (X εX ) = X E (ε) = 0. Then,
(38) ˆ
β = (X X )−1 X y ˆ
is unbiased such that E (β ) = β. To demonstrate this, we may write
(39) ˆ
β = (X X )−1 X y = (X X )−1 X (Xβ + ε)
= β + (X X )−1 X ε. Taking expectations gives
(40) ˆ
E (β ) = β + (X X )−1 X E (ε)
= β.
Notice that, in the light of this result, equation (39) now indicates that

(41)  $\hat{\beta} - E(\hat{\beta}) = (X'X)^{-1}X'\varepsilon$.

The variance–covariance matrix of the ordinary least-squares regression estimator is $D(\hat{\beta}) = \sigma^2(X'X)^{-1}$. This is demonstrated via the following sequence of identities:

(43)  $$E\{[\hat{\beta} - E(\hat{\beta})][\hat{\beta} - E(\hat{\beta})]'\} = E\{(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}\} = (X'X)^{-1}X'E(\varepsilon\varepsilon')X(X'X)^{-1} = (X'X)^{-1}X'\{\sigma^2 I\}X(X'X)^{-1} = \sigma^2(X'X)^{-1}.$$

The initial equality follows directly from equation (41).
Matrix Traces

If $A = [a_{ij}]$ is a square matrix of order $n$, then $\mathrm{Trace}(A) = \sum_{i=1}^n a_{ii}$. If $A = [a_{ij}]$ is of order $n \times m$ and $B = [b_{k\ell}]$ is of order $m \times n$, then

(45)  $$AB = C = [c_{i\ell}] \text{ with } c_{i\ell} = \sum_{j=1}^m a_{ij}b_{j\ell}, \qquad\text{and}\qquad BA = D = [d_{kj}] \text{ with } d_{kj} = \sum_{\ell=1}^n b_{k\ell}a_{\ell j}.$$

Now,

(46)  $$\mathrm{Trace}(AB) = \sum_{i=1}^n \sum_{j=1}^m a_{ij}b_{ji} \qquad\text{and}\qquad \mathrm{Trace}(BA) = \sum_{j=1}^m \sum_{\ell=1}^n b_{j\ell}a_{\ell j} = \sum_{\ell=1}^n \sum_{j=1}^m a_{\ell j}b_{j\ell}.$$

Apart from a change of notation, where $\ell$ replaces $i$, the expressions on the RHS are the same. It follows that $\mathrm{Trace}(AB) = \mathrm{Trace}(BA)$. For three factors $A, B, C$, we have $\mathrm{Trace}(ABC) = \mathrm{Trace}(CAB) = \mathrm{Trace}(BCA)$.
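The trace identities can be confirmed numerically for conformable random matrices:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 5))   # n x m
B = rng.normal(size=(5, 3))   # m x n
C = rng.normal(size=(3, 3))

t_ab = np.trace(A @ B)
t_ba = np.trace(B @ A)

# Only cyclic permutations preserve the trace of a product.
t_abc = np.trace(A @ B @ C)
t_cab = np.trace(C @ A @ B)
t_bca = np.trace(B @ C @ A)
```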
Estimating the Variance of the Disturbance

It is natural to estimate $\sigma^2 = V(\varepsilon_t)$ via its empirical counterpart. With $e_t = y_t - x_{t.}\hat{\beta}$ in place of $\varepsilon_t$, it follows that $T^{-1}\sum_t e_t^2$ may be used to estimate $\sigma^2$. However, it transpires that this is biased. An unbiased estimate is provided by

(48)  $$\hat{\sigma}^2 = \frac{1}{T-k}\sum_{t=1}^T e_t^2 = \frac{1}{T-k}(y - X\hat{\beta})'(y - X\hat{\beta}).$$

The unbiasedness of this estimate may be demonstrated by finding the expected value of $(y - X\hat{\beta})'(y - X\hat{\beta}) = y'(I - P)y$. Given that $(I - P)y = (I - P)(X\beta + \varepsilon) = (I - P)\varepsilon$, in consequence of the condition $(I - P)X = 0$, it follows that

(49)  $$E\{(y - X\hat{\beta})'(y - X\hat{\beta})\} = E(\varepsilon'\varepsilon) - E(\varepsilon'P\varepsilon).$$
The value of the first term on the RHS is given by

(50)  $$E(\varepsilon'\varepsilon) = \sum_{t=1}^T E(\varepsilon_t^2) = T\sigma^2.$$

The value of the second term on the RHS is given by

(51)  $$E(\varepsilon'P\varepsilon) = \mathrm{Trace}\{E(\varepsilon'P\varepsilon)\} = E\{\mathrm{Trace}(\varepsilon'P\varepsilon)\} = E\{\mathrm{Trace}(\varepsilon\varepsilon'P)\} = \mathrm{Trace}\{E(\varepsilon\varepsilon')P\} = \mathrm{Trace}\{\sigma^2 P\} = \sigma^2\mathrm{Trace}(P) = \sigma^2 k.$$

The final equality follows from the fact that $\mathrm{Trace}(P) = \mathrm{Trace}(I_k) = k$. Putting the results of (50) and (51) into (49) gives

(52)  $$E\{(y - X\hat{\beta})'(y - X\hat{\beta})\} = \sigma^2(T - k);$$

and, from this, the unbiasedness of the estimator in (48) follows directly.
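A simulation sketch of the bias of the divisor-$T$ estimator and the unbiasedness of (48); here $k$ counts all columns of $X$, including the constant:

```python
import numpy as np

rng = np.random.default_rng(7)
T, k = 30, 4
X = np.column_stack([np.ones(T), rng.normal(size=(T, k - 1))])
beta = rng.normal(size=k)
sigma2 = 2.0

# Average sigma2_hat over many replications should approach sigma^2.
reps = 20000
est = np.empty(reps)
for i in range(reps):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=T)
    e = y - X @ np.linalg.solve(X.T @ X, X.T @ y)
    est[i] = (e @ e) / (T - k)       # the unbiased estimator (48)

mean_unbiased = est.mean()
mean_naive = mean_unbiased * (T - k) / T   # divisor-T version understates sigma^2
```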
Statistical Properties of the OLS Estimator

The expectation or mean vector of $\hat{\beta}$, and its dispersion matrix as well, may be found from the expression

(53)  $$\hat{\beta} = (X'X)^{-1}X'(X\beta + \varepsilon) = \beta + (X'X)^{-1}X'\varepsilon.$$

The expectation is

(54)  $$E(\hat{\beta}) = \beta + (X'X)^{-1}X'E(\varepsilon) = \beta.$$

Thus, $\hat{\beta}$ is an unbiased estimator. The deviation of $\hat{\beta}$ from its expected value is $\hat{\beta} - E(\hat{\beta}) = (X'X)^{-1}X'\varepsilon$. Therefore, the dispersion matrix, which contains the variances and covariances of the elements of $\hat{\beta}$, is

(55)  $$D(\hat{\beta}) = E\{[\hat{\beta} - E(\hat{\beta})][\hat{\beta} - E(\hat{\beta})]'\} = (X'X)^{-1}X'E(\varepsilon\varepsilon')X(X'X)^{-1} = \sigma^2(X'X)^{-1}.$$
The Gauss–Markov theorem asserts that $\hat{\beta}$ is the unbiased linear estimator of least dispersion. Thus,

(56)  If $\hat{\beta}$ is the OLS estimator of $\beta$, and if $\beta^*$ is any other linear unbiased estimator of $\beta$, then $V(q'\beta^*) \geq V(q'\hat{\beta})$, where $q$ is a constant vector.

Proof. Since $\beta^* = Ay$ is an unbiased estimator, it follows that $E(\beta^*) = AE(y) = AX\beta = \beta$, which implies that $AX = I$. Now write $A = (X'X)^{-1}X' + G$. Then, $AX = I$ implies that $GX = 0$. It follows that

(57)  $$D(\beta^*) = AD(y)A' = \sigma^2\{(X'X)^{-1}X' + G\}\{X(X'X)^{-1} + G'\} = \sigma^2(X'X)^{-1} + \sigma^2 GG' = D(\hat{\beta}) + \sigma^2 GG'.$$

Therefore, for any constant vector $q$ of order $k$, there is

(58)  $$V(q'\beta^*) = q'D(\hat{\beta})q + \sigma^2 q'GG'q \geq q'D(\hat{\beta})q = V(q'\hat{\beta});$$

and thus the inequality $V(q'\beta^*) \geq V(q'\hat{\beta})$ is established.
Orthogonality and Omitted-Variables Bias

Consider the partitioned regression model of equation (12), which was written as

(59)  $$y = [X_1, X_2]\begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix} + \varepsilon = X_1\beta_1 + X_2\beta_2 + \varepsilon.$$

Imagine that the columns of $X_1$ are orthogonal to the columns of $X_2$, such that $X_1'X_2 = 0$. In the partitioned form of the formula $\hat{\beta} = (X'X)^{-1}X'y$, there would be

(60)  $$X'X = \begin{bmatrix} X_1' \\ X_2' \end{bmatrix}[X_1 \; X_2] = \begin{bmatrix} X_1'X_1 & X_1'X_2 \\ X_2'X_1 & X_2'X_2 \end{bmatrix} = \begin{bmatrix} X_1'X_1 & 0 \\ 0 & X_2'X_2 \end{bmatrix},$$

where the final equality follows from the condition of orthogonality. The inverse of the partitioned form of $X'X$ in the case of $X_1'X_2 = 0$ is

(61)  $$(X'X)^{-1} = \begin{bmatrix} X_1'X_1 & 0 \\ 0 & X_2'X_2 \end{bmatrix}^{-1} = \begin{bmatrix} (X_1'X_1)^{-1} & 0 \\ 0 & (X_2'X_2)^{-1} \end{bmatrix}.$$

There is also

(62)  $$X'y = \begin{bmatrix} X_1' \\ X_2' \end{bmatrix}y = \begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix}.$$

On combining these elements, we find that

(63)  $$\begin{bmatrix} \hat{\beta}_1 \\ \hat{\beta}_2 \end{bmatrix} = \begin{bmatrix} (X_1'X_1)^{-1} & 0 \\ 0 & (X_2'X_2)^{-1} \end{bmatrix}\begin{bmatrix} X_1'y \\ X_2'y \end{bmatrix} = \begin{bmatrix} (X_1'X_1)^{-1}X_1'y \\ (X_2'X_2)^{-1}X_2'y \end{bmatrix}.$$

In this case, the coefficients of the regression of $y$ on $X = [X_1, X_2]$ can be obtained from the separate regressions of $y$ on $X_1$ and $y$ on $X_2$.

It should be recognised that this result does not hold true in general. The general formulae for $\hat{\beta}_1$ and $\hat{\beta}_2$ are those that have been given already under (15) and (20):

(64)  $$\hat{\beta}_1 = (X_1'X_1)^{-1}X_1'(y - X_2\hat{\beta}_2), \qquad \hat{\beta}_2 = \{X_2'(I - P_1)X_2\}^{-1}X_2'(I - P_1)y, \qquad P_1 = X_1(X_1'X_1)^{-1}X_1'.$$
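The separate-regressions result (63) holds only under exact orthogonality, which can be manufactured for illustration with a QR decomposition:

```python
import numpy as np

rng = np.random.default_rng(8)
T = 40

# QR gives exactly orthonormal columns; split them into two blocks.
Q, _ = np.linalg.qr(rng.normal(size=(T, 4)))
X1, X2 = Q[:, :2], Q[:, 2:]
y = rng.normal(size=T)

# Joint regression on X = [X1, X2].
X = np.hstack([X1, X2])
beta_joint = np.linalg.solve(X.T @ X, X.T @ y)

# Separate regressions, valid here because X1'X2 = 0, as in (60).
b1 = np.linalg.solve(X1.T @ X1, X1.T @ y)
b2 = np.linalg.solve(X2.T @ X2, X2.T @ y)
```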
The purpose of including $X_2$ in the regression equation, when our interest is confined to the parameters of $\beta_1$, is to avoid falsely attributing the explanatory power of the variables of $X_2$ to those of $X_1$.

If $X_2$ is erroneously excluded, then the estimate of $\beta_1$ will be

(65)  $$\tilde{\beta}_1 = (X_1'X_1)^{-1}X_1'y = (X_1'X_1)^{-1}X_1'(X_1\beta_1 + X_2\beta_2 + \varepsilon) = \beta_1 + (X_1'X_1)^{-1}X_1'X_2\beta_2 + (X_1'X_1)^{-1}X_1'\varepsilon.$$

On applying the expectations operator, we find that

(66)  $$E(\tilde{\beta}_1) = \beta_1 + (X_1'X_1)^{-1}X_1'X_2\beta_2,$$

since $E\{(X_1'X_1)^{-1}X_1'\varepsilon\} = (X_1'X_1)^{-1}X_1'E(\varepsilon) = 0$. Thus, in general, we have $E(\tilde{\beta}_1) \neq \beta_1$, which is to say that $\tilde{\beta}_1$ is a biased estimator. The estimator will be unbiased only when either $X_1'X_2 = 0$ or $\beta_2 = 0$. In other circumstances, it will suffer from omitted-variables bias.
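The omitted-variables formula (66) can be illustrated by simulation: with correlated regressors held fixed, the short regression's average estimate equals $\beta_1$ plus the predicted bias term. The data-generating values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(9)
T = 500

# Correlated regressors: omitting x2 biases the coefficient on x1.
x1 = rng.normal(size=T)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=T)
beta1, beta2 = 1.0, 2.0

reps = 2000
short = np.empty(reps)
for i in range(reps):
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=T)
    short[i] = (x1 @ y) / (x1 @ x1)   # regression of y on x1 alone

# Predicted bias from (66): (x1'x1)^{-1} x1'x2 beta2.
bias = (x1 @ x2) / (x1 @ x1) * beta2
```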
Restricted Least-Squares Regression

A set of $j$ linear restrictions on the vector $\beta$ can be written as $R\beta = r$, where $R$ is a $j \times k$ matrix of linearly independent rows, such that $\mathrm{Rank}(R) = j$, and $r$ is a vector of $j$ elements.

To combine this a priori information with the sample information, the sum of squares $(y - X\beta)'(y - X\beta)$ is minimised subject to $R\beta = r$. This leads to the Lagrangean function

(67)  $$L = (y - X\beta)'(y - X\beta) + 2\lambda'(R\beta - r) = y'y - 2y'X\beta + \beta'X'X\beta + 2\lambda'R\beta - 2\lambda'r.$$

Differentiating $L$ with respect to $\beta$ and setting the result to zero gives the following first-order condition $\partial L/\partial\beta = 0$:

(68)  $$2\beta'X'X - 2y'X + 2\lambda'R = 0.$$

After transposing the expression, eliminating the factor 2 and rearranging, we have

(69)  $$X'X\beta + R'\lambda = X'y.$$
Combining these equations with the restrictions gives

(70)  $$\begin{bmatrix} X'X & R' \\ R & 0 \end{bmatrix}\begin{bmatrix} \beta \\ \lambda \end{bmatrix} = \begin{bmatrix} X'y \\ r \end{bmatrix}.$$

For the system to give a unique value of $\beta$, the matrix $X'X$ need not be invertible; it is enough that the condition

(71)  $$\mathrm{Rank}\begin{bmatrix} X \\ R \end{bmatrix} = k$$

should hold, which means that the matrix should have full column rank.

Consider applying OLS to the equation

(72)  $$\begin{bmatrix} y \\ r \end{bmatrix} = \begin{bmatrix} X \\ R \end{bmatrix}\beta + \begin{bmatrix} \varepsilon \\ 0 \end{bmatrix},$$

which puts the equations of the observations and the equations of the restrictions on an equal footing. An estimator exists on the condition that $(X'X + R'R)^{-1}$ exists, for which the satisfaction of the rank condition is necessary and sufficient. Then, $\hat{\beta} = (X'X + R'R)^{-1}(X'y + R'r)$.
Let us assume that $(X'X)^{-1}$ does exist. Then equation (69) gives an expression for $\beta$ in the form of

(73)  $$\beta^* = (X'X)^{-1}X'y - (X'X)^{-1}R'\lambda = \hat{\beta} - (X'X)^{-1}R'\lambda,$$

where $\hat{\beta}$ is the unrestricted ordinary least-squares estimator. Since $R\beta^* = r$, premultiplying the equation by $R$ gives

(74)  $$r = R\hat{\beta} - R(X'X)^{-1}R'\lambda,$$

from which

(75)  $$\lambda = \{R(X'X)^{-1}R'\}^{-1}(R\hat{\beta} - r).$$

On substituting this expression back into equation (73), we get

(76)  $$\beta^* = \hat{\beta} - (X'X)^{-1}R'\{R(X'X)^{-1}R'\}^{-1}(R\hat{\beta} - r).$$

This formula is an instance of the prediction-error algorithm, whereby the estimate of $\beta$ is updated using information provided by the restrictions.
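Formula (76) can be checked directly against the restriction; the single restriction used below is an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(10)
T, k = 50, 3
X = rng.normal(size=(T, k))
y = rng.normal(size=T)

# One hypothetical restriction: beta_2 + beta_3 = 1, i.e. R beta = r.
R = np.array([[0.0, 1.0, 1.0]])
r = np.array([1.0])

XtX_inv = np.linalg.inv(X.T @ X)
beta_ols = XtX_inv @ X.T @ y

# Equation (76): update the unrestricted estimate to satisfy R beta = r.
middle = np.linalg.inv(R @ XtX_inv @ R.T)
beta_star = beta_ols - XtX_inv @ R.T @ middle @ (R @ beta_ols - r)

# The restricted fit cannot have a smaller sum of squares than OLS.
ssr_ols = np.sum((y - X @ beta_ols)**2)
ssr_star = np.sum((y - X @ beta_star)**2)
```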
Given that $E(\hat{\beta} - \beta) = 0$, which is to say that $\hat{\beta}$ is an unbiased estimator, then, on the supposition that the restrictions are valid, it follows that $E(\beta^* - \beta) = 0$, so that $\beta^*$ is also unbiased.

Next, consider the expression

(77)  $$\beta^* - \beta = [I - (X'X)^{-1}R'\{R(X'X)^{-1}R'\}^{-1}R](\hat{\beta} - \beta) = (I - P_R)(\hat{\beta} - \beta),$$

where

(78)  $$P_R = (X'X)^{-1}R'\{R(X'X)^{-1}R'\}^{-1}R.$$

The expression comes from taking $\beta$ from both sides of (76) and from recognising that $R\hat{\beta} - r = R(\hat{\beta} - \beta)$. It can be seen that $P_R$ is an idempotent matrix that is subject to the conditions

(79)  $$P_R^2 = P_R, \qquad P_R(I - P_R) = 0 \qquad\text{and}\qquad P_R'X'X(I - P_R) = 0.$$

From equation (77), it can be deduced that

(80)  $$D(\beta^*) = (I - P_R)E\{(\hat{\beta} - \beta)(\hat{\beta} - \beta)'\}(I - P_R)' = \sigma^2(I - P_R)(X'X)^{-1}(I - P_R)' = \sigma^2[(X'X)^{-1} - (X'X)^{-1}R'\{R(X'X)^{-1}R'\}^{-1}R(X'X)^{-1}].$$
Regressions on Trigonometrical Functions

An example of orthogonal regressors is a Fourier analysis, where the explanatory variables are sampled from a set of trigonometrical functions with angular velocities, called Fourier frequencies, that are evenly distributed in an interval running from zero to $\pi$ radians per sample period.

If the sample is indexed by $t = 0, 1, \ldots, T - 1$, then the Fourier frequencies are $\omega_j = 2\pi j/T$; $j = 0, 1, \ldots, [T/2]$, where $[T/2]$ denotes the integer quotient of the division of $T$ by 2.

The object of a Fourier analysis is to express the elements of the sample as a weighted sum of sine and cosine functions as follows:

(81)  $$y_t = \alpha_0 + \sum_{j=1}^{[T/2]}\{\alpha_j\cos(\omega_j t) + \beta_j\sin(\omega_j t)\}; \qquad t = 0, 1, \ldots, T - 1.$$

The vectors of the generic trigonometrical regressors may be denoted by

(83)  $$c_j = [c_{0j}, c_{1j}, \ldots, c_{T-1,j}]' \qquad\text{and}\qquad s_j = [s_{0j}, s_{1j}, \ldots, s_{T-1,j}]',$$

where $c_{tj} = \cos(\omega_j t)$ and $s_{tj} = \sin(\omega_j t)$.

The vectors of the ordinates of functions of different frequencies are mutually orthogonal. Therefore, the following orthogonality conditions hold:

(84)  $$c_i'c_j = s_i's_j = 0 \text{ if } i \neq j, \qquad\text{and}\qquad c_i's_j = 0 \text{ for all } i, j.$$

In addition, there are some sums of squares which can be taken into account in computing the coefficients of the Fourier decomposition:

(85)  $$c_0'c_0 = \iota'\iota = T, \qquad s_0's_0 = 0, \qquad c_j'c_j = s_j's_j = T/2 \quad\text{for } j = 1, \ldots, [(T-1)/2].$$

When $T = 2n$, there is $\omega_n = \pi$ and there is also

(86)  $$s_n's_n = 0 \qquad\text{and}\qquad c_n'c_n = T.$$
The “regression” formulae for the Fourier coefficients can now be given. First, there is

(87)  $$\alpha_0 = (\iota'\iota)^{-1}\iota'y = \frac{1}{T}\sum_t y_t = \bar{y}.$$

Then, for $j = 1, \ldots, [(T-1)/2]$, there are

(88)  $$\alpha_j = (c_j'c_j)^{-1}c_j'y = \frac{2}{T}\sum_t y_t\cos(\omega_j t),$$

and

(89)  $$\beta_j = (s_j's_j)^{-1}s_j'y = \frac{2}{T}\sum_t y_t\sin(\omega_j t).$$

If $T = 2n$ is even, then there is no coefficient $\beta_n$ and there is

(90)  $$\alpha_n = (c_n'c_n)^{-1}c_n'y = \frac{1}{T}\sum_t (-1)^t y_t.$$
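The coefficient formulae (87)–(90) give an exact reconstruction of the sample when $T$ is even, and the variance decomposition implied by (92) can be verified at the same time; a sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(11)
T = 16                     # even sample size, T = 2n
n = T // 2
t = np.arange(T)
y = rng.normal(size=T)

# Coefficients from the "regression" formulae (87)-(90).
alpha0 = y.mean()
alphas = [2.0 / T * np.sum(y * np.cos(2 * np.pi * j * t / T)) for j in range(1, n)]
betas = [2.0 / T * np.sum(y * np.sin(2 * np.pi * j * t / T)) for j in range(1, n)]
alpha_n = np.sum((-1.0)**t * y) / T

# Synthesis (81): the Fourier fit reproduces the sample exactly.
y_hat = alpha0 + alpha_n * np.cos(np.pi * t)
for j in range(1, n):
    w = 2 * np.pi * j / T
    y_hat = y_hat + alphas[j - 1] * np.cos(w * t) + betas[j - 1] * np.sin(w * t)

# Variance decomposition consistent with (92), divided through by T.
var_decomp = 0.5 * sum(a * a + b * b for a, b in zip(alphas, betas)) + alpha_n**2
```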
By pursuing the analogy of multiple regression, it can be seen, in view of the orthogonality relationships, that there is a complete decomposition of the sum of squares of the elements of the vector $y$:

(91)  $$y'y = \alpha_0^2\iota'\iota + \sum_{j=1}^{[T/2]}\{\alpha_j^2 c_j'c_j + \beta_j^2 s_j's_j\}.$$

Now consider writing $\alpha_0^2\iota'\iota = \bar{y}^2\iota'\iota = \bar{y}'\bar{y}$, where $\bar{y} = [\bar{y}, \bar{y}, \ldots, \bar{y}]'$ is a vector whose repeated element is the sample mean $\bar{y}$. It follows that $y'y - \alpha_0^2\iota'\iota = y'y - \bar{y}'\bar{y} = (y - \bar{y})'(y - \bar{y})$. Then, in the case where $T = 2n$ is even, the equation can be written as

(92)  $$(y - \bar{y})'(y - \bar{y}) = \frac{T}{2}\sum_{j=1}^{n-1}(\alpha_j^2 + \beta_j^2) + T\alpha_n^2 = \frac{T}{2}\sum_{j=1}^n \rho_j^2,$$

where $\rho_j^2 = \alpha_j^2 + \beta_j^2$ for $j = 1, \ldots, n - 1$ and $\rho_n^2 = 2\alpha_n^2$. A similar expression exists when $T$ is odd, with the exceptions that $\alpha_n$ is missing and that the summation runs to $(T-1)/2$.
It follows that the variance of the sample can be expressed as

(93)  $$\frac{1}{T}\sum_{t=0}^{T-1}(y_t - \bar{y})^2 = \frac{1}{2}\sum_{j=1}^n \rho_j^2.$$

The proportion of the variance that is attributable to the component at frequency $\omega_j$ is $(\alpha_j^2 + \beta_j^2)/2 = \rho_j^2/2$, where $\rho_j$ is the amplitude of the component.

The number of the Fourier frequencies increases at the same rate as the sample size $T$, and, if there are no regular harmonic components in the underlying process, then we can expect the proportion of the variance attributed to the individual frequencies to decline as the sample size increases. If there is a regular component, then we can expect the variance attributable to it to converge to a finite value as the sample size increases.

In order to provide a graphical representation of the decomposition of the sample variance, we must scale the elements of equation (93) by a factor of $T$. The graph of the function $I(\omega_j) = (T/2)(\alpha_j^2 + \beta_j^2)$ is known as the periodogram.
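A sketch of the periodogram $I(\omega_j) = (T/2)(\alpha_j^2 + \beta_j^2)$ for a simulated series with one regular harmonic component; the frequency $\omega_n = \pi$, which needs the $1/T$ weighting of (90) when $T$ is even, is omitted from the loop:

```python
import numpy as np

rng = np.random.default_rng(12)
T = 128
t = np.arange(T)

# A series with one regular harmonic component (j = 10) plus white noise.
y = 2.0 * np.cos(2 * np.pi * 10 * t / T) + rng.normal(scale=0.5, size=T)

# Periodogram ordinates I(w_j) = (T/2)(alpha_j^2 + beta_j^2)
# at the Fourier frequencies w_j = 2*pi*j/T, for j = 1, ..., T/2 - 1.
n = T // 2
I_vals = np.empty(n - 1)
for j in range(1, n):
    w = 2 * np.pi * j / T
    a = 2.0 / T * np.sum(y * np.cos(w * t))
    b = 2.0 / T * np.sum(y * np.sin(w * t))
    I_vals[j - 1] = (T / 2) * (a**2 + b**2)

peak = int(np.argmax(I_vals)) + 1    # index of the dominant Fourier frequency
```

The ordinate at $j = 10$ dwarfs the noise ordinates, which is the behaviour described in the text for a regular component.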
Figure 2. The plot of 132 monthly observations on the U.S. money supply, beginning in January 1960. A quadratic function has been interpolated through the data.

Figure 3. The periodogram of the residuals of the logarithmic money-supply data.
Spring '12, D.S.G. Pollock, Econometrics.