D.S.G. POLLOCK: TOPICS IN TIME-SERIES ANALYSIS
THE FOURIER DECOMPOSITION OF A TIME SERIES
In spite of the notion that a regular trigonometrical function is an inappropriate
means for modelling an economic cycle other than a seasonal ﬂuctuation, there
are good reasons for explaining a data sequence in terms of such functions.
The Fourier decomposition of a series is a matter of explaining the series
entirely as a composition of sinusoidal functions. Thus it is possible to represent
the generic element of the sample as
(1)    y_t = Σ_{j=0}^{n} { α_j cos(ω_j t) + β_j sin(ω_j t) }.

Assuming that T = 2n is even, this sum comprises T functions whose frequencies

(2)    ω_j = 2πj/T,    j = 0, …, n = T/2,

are at equally spaced points in the interval [0, π].
As we might infer from our analysis of a seasonal fluctuation, there are as many nonzero elements in the sum under (1) as there are data points, for the reason that two of the functions within the sum, namely sin(ω_0 t) = sin(0) and sin(ω_n t) = sin(πt), are identically zero. It follows that the mapping from the sample values to the coefficients constitutes a one-to-one invertible transformation. The same conclusion arises in the slightly more complicated case where T is odd.
The angular velocity ω_j = 2πj/T relates to a pair of trigonometrical components which accomplish j cycles in the T periods spanned by the data. The highest velocity, ω_n = π, corresponds to the so-called Nyquist frequency. If a component with a frequency in excess of π were included in the sum in (1), then its effect would be indistinguishable from that of a component with a frequency in the range [0, π].
To demonstrate this, consider the case of a pure cosine wave of unit amplitude and zero phase whose frequency ω lies in the interval π < ω < 2π. Let ω* = 2π − ω. Then, since t is an integer,

(3)    cos(ωt) = cos( (2π − ω*)t )
              = cos(2πt) cos(ω*t) + sin(2πt) sin(ω*t)
              = cos(ω*t);

which indicates that ω and ω* are observationally indistinguishable. Here, ω* ∈ [0, π] is described as the alias of ω > π.
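As a minimal numerical check of (3), the following sketch samples a cosine of frequency ω = 5.5 (an arbitrary value in the interval (π, 2π)) at integer times and compares it with its alias ω* = 2π − ω:

```python
import numpy as np

# Integer sampling times, as in the text (t = 0, 1, 2, ...).
t = np.arange(16)

omega = 5.5                       # an arbitrary frequency in (pi, 2*pi)
omega_star = 2 * np.pi - omega    # its alias, which lies in [0, pi]

# At integer t the two cosines coincide exactly: cos(omega t) = cos(omega* t).
assert np.allclose(np.cos(omega * t), np.cos(omega_star * t))
```

At non-integer sampling times the two functions would separate; the coincidence is a property of the discrete observations, not of the continuous waves.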
For an illustration of the problem of aliasing, let us imagine that a person observes the sea level at 6 a.m. and 6 p.m. each day. He should notice a very gradual recession and advance of the water level, the frequency of the cycle being f = 1/28, which amounts to one tide in 14 days. In fact, the true frequency is f = 1 − 1/28, which gives 27 tides in 14 days. Observing the sea level every six hours should enable him to infer the correct frequency.
Calculation of the Fourier Coeﬃcients
For heuristic purposes, we can imagine calculating the Fourier coeﬃcients
using an ordinary regression procedure to ﬁt equation (1) to the data. In
this case, there would be no regression residuals, for the reason that we are
‘estimating’ a total of T coeﬃcients from T data points; so we are actually
solving a set of T linear equations in T unknowns.
There is no need, however, for a multiple regression procedure, since, in this case, the vectors of ‘explanatory’ variables are mutually orthogonal. Therefore, T applications of a univariate regression procedure would serve our purpose equally well.
Let c_j = [c_{0j}, …, c_{T−1,j}]′ and s_j = [s_{0j}, …, s_{T−1,j}]′ represent vectors of T values of the generic functions cos(ω_j t) and sin(ω_j t) respectively. Then there are the following orthogonality conditions:

(4)    c_i′c_j = 0    if i ≠ j,
       s_i′s_j = 0    if i ≠ j,
       c_i′s_j = 0    for all i, j.

In addition, there are the following sums of squares:

(5)    c_0′c_0 = c_n′c_n = T,
       s_0′s_0 = s_n′s_n = 0,
       c_j′c_j = s_j′s_j = T/2    for j = 1, …, n − 1.

The ‘regression’ formulae for the Fourier coefficients are therefore
(6)    α_0 = (i′i)^{−1} i′y = (1/T) Σ_t y_t = ȳ,

(7)    α_j = (c_j′c_j)^{−1} c_j′y = (2/T) Σ_t y_t cos(ω_j t),

(8)    β_j = (s_j′s_j)^{−1} s_j′y = (2/T) Σ_t y_t sin(ω_j t).
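For illustration, the ‘regression’ formulae can be applied directly to an arbitrary data vector. The following is a minimal sketch, in which the data and the sample size are arbitrary choices; note that, in accordance with (5), the end frequencies j = 0 and j = n have regressors with sums of squares T rather than T/2:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 12                      # an even sample size, T = 2n
n = T // 2
t = np.arange(T)
y = rng.normal(size=T)      # arbitrary data

omega = 2 * np.pi * np.arange(n + 1) / T

# Formulae (7) and (8), with divisor T/2 for the interior frequencies.
alpha = np.array([(2 / T) * (y * np.cos(w * t)).sum() for w in omega])
beta = np.array([(2 / T) * (y * np.sin(w * t)).sum() for w in omega])

# End corrections: c_0'c_0 = c_n'c_n = T, so the divisor is T, not T/2.
alpha[0] = y.mean()
alpha[n] = (1 / T) * (y * np.cos(np.pi * t)).sum()

# The decomposition (1) reproduces the sample exactly: no residuals.
y_hat = sum(alpha[j] * np.cos(omega[j] * t) + beta[j] * np.sin(omega[j] * t)
            for j in range(n + 1))
assert np.allclose(y, y_hat)
```

The exact reconstruction confirms that we are solving T linear equations in T unknowns rather than estimating anything.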
By pursuing the analogy of multiple regression, we can understand that there is a complete decomposition of the sum of squares of the elements of y which is given by

(9)    y′y = α_0² i′i + Σ_j α_j² c_j′c_j + Σ_j β_j² s_j′s_j.

Now consider writing α_0² i′i = ȳ² i′i = ȳ′ȳ, where ȳ = [ȳ, …, ȳ]′ is the vector whose repeated element is the sample mean ȳ. It follows that y′y − α_0² i′i = y′y − ȳ′ȳ = (y − ȳ)′(y − ȳ). Therefore, we can rewrite the equation as

(10)    (y − ȳ)′(y − ȳ) = (T/2) Σ_j (α_j² + β_j²) = (T/2) Σ_j ρ_j²,

and it follows that we can express the variance of the sample as
(11)    (1/T) Σ_{t=0}^{T−1} (y_t − ȳ)² = (1/2) Σ_{j=1}^{n} (α_j² + β_j²)

                                       = (2/T²) Σ_j { (Σ_t y_t cos ω_j t)² + (Σ_t y_t sin ω_j t)² }.

The proportion of the variance which is attributable to the component at frequency ω_j is (α_j² + β_j²)/2 = ρ_j²/2, where ρ_j is the amplitude of the component.
The number of the Fourier frequencies increases at the same rate as the sample size T. Therefore, if the variance of the sample remains finite, and if there are no regular harmonic components in the process generating the data, then we can expect the proportion of the variance attributed to the individual frequencies to decline as the sample size increases. If there is such a regular component within the process, then we can expect the proportion of the variance attributable to it to converge to a finite value as the sample size increases.

In order to provide a graphical representation of the decomposition of the sample variance, we must scale the elements of equation (11) by a factor of T. The graph of the function I(ω_j) = (T/2)(α_j² + β_j²) is known as the periodogram.
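A minimal sketch of the periodogram calculation follows, applied to an artificial series containing a single pure cycle; the series and the sample size are illustrative choices, and the end frequency j = T/2 is treated with the same divisor as the interior frequencies for simplicity:

```python
import numpy as np

def periodogram(y):
    """Ordinates I(omega_j) = (T/2)(alpha_j^2 + beta_j^2) for j = 1, ..., T//2."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    t = np.arange(T)
    d = y - y.mean()
    out = []
    for j in range(1, T // 2 + 1):
        w = 2 * np.pi * j / T
        a = (2 / T) * (d * np.cos(w * t)).sum()
        b = (2 / T) * (d * np.sin(w * t)).sum()
        out.append((T / 2) * (a * a + b * b))
    return np.array(out)

# A pure cosine accomplishing 3 cycles in the sample: the whole variance
# should be concentrated on the ordinate at j = 3.
T = 24
t = np.arange(T)
y = np.cos(2 * np.pi * 3 * t / T)
I = periodogram(y)
assert I.argmax() == 2     # index 0 corresponds to j = 1, so index 2 is j = 3
```

For this noiseless series the peak ordinate equals T/2, since the amplitude is ρ_3 = 1.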
There are many impressive examples where the estimation of the periodogram has revealed the presence of regular harmonic components in a data series which might otherwise have passed undetected. One of the best-known examples concerns the analysis of the brightness or magnitude of the star T. Ursa Major. It was shown by Whittaker and Robinson in 1924 that this series could be described almost completely in terms of two trigonometrical functions with periods of 24 and 29 days.
[Figure 3. The periodogram of Wolfer’s Sunspot Numbers 1749–1924.]

The attempts to discover underlying components in economic time series
have been less successful. One application of periodogram analysis which was a notorious failure was its use by William Beveridge in 1921 and 1922 to analyse a long series of European wheat prices. The periodogram had so many peaks
that at least twenty possible hidden periodicities could be picked out, and this
seemed to be many more than could be accounted for by plausible explanations
within the realms of economic history.
Such ﬁndings seem to diminish the importance of periodogram analysis
in econometrics. However, the fundamental importance of the periodogram is
established once it is recognised that it represents nothing less than the Fourier
transform of the sequence of empirical autocovariances.
The Empirical Autocovariances
A natural way of representing the serial dependence of the elements of a data sequence is to estimate their autocovariances. The empirical autocovariance of lag τ is defined by the formula

(12)    c_τ = (1/T) Σ_{t=τ}^{T−1} (y_t − ȳ)(y_{t−τ} − ȳ).

The empirical autocorrelation of lag τ is defined by r_τ = c_τ/c_0, where c_0, which is formally the autocovariance of lag 0, is the variance of the sequence. The autocorrelation provides a measure of the relatedness of data points separated by τ periods which is independent of the units of measurement.
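Formula (12) can be transcribed directly; the following sketch also forms the lag-1 autocorrelation (the random test data are an arbitrary choice):

```python
import numpy as np

def autocovariance(y, tau):
    """Empirical autocovariance c_tau of equation (12): note the divisor T,
    not T - tau, regardless of the lag."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    d = y - y.mean()
    return (d[tau:] * d[:T - tau]).sum() / T

rng = np.random.default_rng(1)
y = rng.normal(size=100)

c0 = autocovariance(y, 0)
assert np.isclose(c0, y.var())      # lag 0 gives the sample variance
r1 = autocovariance(y, 1) / c0      # autocorrelation of lag 1
assert -1.0 <= r1 <= 1.0
```

The divisor T, rather than T − τ, ensures that the sequence {c_τ} forms a positive semi-definite set, which matters for the spectral calculations that follow.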
It is straightforward to establish the relationship between the periodogram
and the sequence of autocovariances.
The periodogram may be written as

(13)    I(ω_j) = (2/T) { [Σ_{t=0}^{T−1} cos(ω_j t)(y_t − ȳ)]² + [Σ_{t=0}^{T−1} sin(ω_j t)(y_t − ȳ)]² }.

The identity Σ_t cos(ω_j t)(y_t − ȳ) = Σ_t cos(ω_j t) y_t follows from the fact that, by construction, Σ_t cos(ω_j t) = 0 for all j. Expanding the expression in (13) gives

(14)    I(ω_j) = (2/T) Σ_t Σ_s cos(ω_j t) cos(ω_j s)(y_t − ȳ)(y_s − ȳ)
               + (2/T) Σ_t Σ_s sin(ω_j t) sin(ω_j s)(y_t − ȳ)(y_s − ȳ),

and, by using the identity cos(A) cos(B) + sin(A) sin(B) = cos(A − B), we can rewrite this as

(15)    I(ω_j) = (2/T) Σ_t Σ_s cos(ω_j [t − s])(y_t − ȳ)(y_s − ȳ).

Next, on defining τ = t − s and writing c_τ = Σ_t (y_t − ȳ)(y_{t−τ} − ȳ)/T, we can reduce the latter expression to

(16)    I(ω_j) = 2 Σ_{τ=1−T}^{T−1} cos(ω_j τ) c_τ,

which is a Fourier transform of the sequence of empirical autocovariances.
An Appendix on Harmonic Cycles
Lemma 1. Let ω_j = 2πj/T, where j ∈ {1, …, T/2} if T is even and j ∈ {1, …, (T − 1)/2} if T is odd. Then

Σ_{t=0}^{T−1} cos(ω_j t) = Σ_{t=0}^{T−1} sin(ω_j t) = 0.

(The case j = 0 is excluded, since then the cosines sum to T, in accordance with (5).)
t=0 Proof. By Euler’s equations, we have
T −1
t=0 1
cos(ωj t) =
2 T −1
t=0 1
exp(i2πjt/T ) +
2
5 T −1 exp(−i2πjt/T ).
t=0 FOURIER DECOMPOSITION
By using the formula 1 + λ + · · · + λT −1 = (1 − λT )/(1 − λ), we ﬁnd that
T −1 exp(i2πjt/T ) =
t=0 1 − exp(i2πj )
.
1 − exp(i2πj/T ) But exp(i2πj ) = cos(2πj ) + i sin(2πj ) = 1, so the numerator in the expression
above is zero, and hence t exp(i2πj/T ) = 0. By similar means, we can show
that t exp(−i2πj/T ) = 0; and, therefore, it follows that t cos(ωj t) = 0. An
analogous proof shows that t sin(ωj t) = 0.
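Lemma 1 is easily checked numerically; in this sketch the sample sizes 15 and 16 are arbitrary odd and even choices:

```python
import numpy as np

# Verify that the cosine and sine sums vanish at every admissible
# Fourier frequency, for one odd and one even sample size.
for T in (15, 16):
    t = np.arange(T)
    for j in range(1, T // 2 + 1):
        w = 2 * np.pi * j / T
        assert abs(np.cos(w * t).sum()) < 1e-9
        assert abs(np.sin(w * t).sum()) < 1e-9
```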
Lemma 2. Let ω_j = 2πj/T, where j ∈ {0, 1, …, T/2} if T is even and j ∈ {0, 1, …, (T − 1)/2} if T is odd. Then

(a)    Σ_{t=0}^{T−1} cos(ω_j t) cos(ω_k t) = 0 if j ≠ k;  T/2 if j = k;

(b)    Σ_{t=0}^{T−1} sin(ω_j t) sin(ω_k t) = 0 if j ≠ k;  T/2 if j = k;

(c)    Σ_{t=0}^{T−1} cos(ω_j t) sin(ω_k t) = 0 for all j, k.

(In the cases j = k = 0 and, with T even, j = k = T/2, the sums in (a) yield the value T, in accordance with (5).)

Proof. From the formula cos A cos B = (1/2){cos(A + B) + cos(A − B)}, we have

Σ_{t=0}^{T−1} cos(ω_j t) cos(ω_k t) = (1/2) Σ_{t=0}^{T−1} {cos([ω_j + ω_k]t) + cos([ω_j − ω_k]t)}

                                    = (1/2) Σ_{t=0}^{T−1} {cos(2π[j + k]t/T) + cos(2π[j − k]t/T)}.
t=0 We ﬁnd, in consequence of Lemma 1, that if j = k , then both terms on the RHS
vanish, and thus we have the ﬁrst part of (a). If j = k , then cos(2π [j − k ]t/T ) =
cos 0 = 1 and so, whilst the ﬁrst term vanishes, the second terms yields the
value of T under summation. This gives the second part of (a).
The proofs of (b) and (c) follow along similar lines.
References

Beveridge, Sir W.H., (1921), “Weather and Harvest Cycles.” Economic Journal, 31, 429–452.

Beveridge, Sir W.H., (1922), “Wheat Prices and Rainfall in Western Europe.” Journal of the Royal Statistical Society, 85, 412–478.

Moore, H.L., (1914), Economic Cycles: Their Laws and Cause. Macmillan: New York.

Slutsky, E., (1937), “The Summation of Random Causes as the Source of Cyclical Processes.” Econometrica, 5, 105–146.

Yule, G.U., (1927), “On a Method of Investigating Periodicities in Disturbed Series with Special Reference to Wolfer’s Sunspot Numbers.” Philosophical Transactions of the Royal Society, 89, 1–64.
This note is from the course EC 7087, taught by Professor D.S.G. Pollock, Fall 2011, Queen Mary, University of London.