Res Ec 797A: Forecasting
Order of Topics and Readings for Lecture 5 (September 19, 2011)

Topic                                                      Source Page(s)
Decomposition (material distributed last class)            BM 17-23
Component estimation and forecasting                       NB 172-177
  (material distributed today; I will distribute
  these pages during class)
Simple exponential smoothing                               BM 24-29
Assignment 3: Simple Moving Averages and Seasonality       Due September 26, 2011

University of Massachusetts
Department of Resource Economics
Res Ec 797A                                                Fall 2011
Assignment 3: Due September 26, 2011

SIMPLE MOVING AVERAGES AND SEASONALITY

1. For Variable 2 (which is nonseasonal), compute three-period moving averages for the within-sample period. Next, make a series of one-step-ahead ex post forecasts for Variable 2 using the procedure outlined on page 22 of the BM notes. Do you need to calculate seasonal indices? Why or why not? Compute the RMSE for the post-sample period only.

2. Provide a comparison of the post-sample forecast accuracy when using your simple moving average approach to provide predictions (from Part 1, above) relative to the naive no-change method. Use Theil's U statistic. If your U statistic is greater than 1, explain what happened in Part 1 to make things go wrong. That is, why did the procedure fail? Provide a suitable graph as part of your answer.

3. Switch your attention to Variable 1. Provide a deseasonalized transformation of Variable 1. (Use the within-sample data set.) That is, compute either a 2 × 4 moving average if you have quarterly data or a 2 × 12 moving average if you have monthly data. Explain what you are doing. What is the centering step doing? What are the weights on each observation that make up the average? Provide a graph of the original and deseasonalized series superimposed on each other. Briefly state what the deseasonalized series tells you about this variable.

4. Compute the seasonal factors for Variable 1 using the method described in Newbold and Bos, page 164, or on pages 17-20 of the BM notes. Recall that we start by dividing each actual value by the value of the transformation that you computed in Part 3. Next, we did some kind of averaging. Finally, we applied some kind of correction factor. Newbold and Bos, pages 164-165, suggest that you use the median to calculate the index for each quarter or each month. I suggest that you follow the more common practice of computing the mean for each quarter or month, as we did in class. This is certainly easier to do in a spreadsheet, as you can write the formula in a cell, find the average of every 4th or 12th entry in a column, and then copy the appropriate formula into other cells that use these averages. Again, as a reminder, what value should the seasonal factors sum to? Do they? If not, remember to do the correction.

5. Do a series of one-step-ahead forecasts for Variable 1 for both the within-sample and post-sample periods using the decomposition procedure. How does what you are doing here differ from what you did in Part 1? Compute the RMSE and Theil's U for both the within-sample and post-sample periods. What do you conclude about the decomposition approach to forecasting? What leads you to this conclusion?

SIMPLE EXPONENTIAL SMOOTHING

Overview

Exponential smoothing (ES) is a form of filtering or averaging. It uses all of the observations in a
series. In contrast to the method of simple (arithmetic) moving averages, exponential smoothing
puts the greatest weight on the most recent observations. This topic is covered nicely in Newbold and Bos, Chapter 6, pages 183-257.

Exponential smoothing is a method. Another name for method is "algorithm." It pays attention to
three key characteristics of a data series. The first is its level. All series start somewhere. This is
its initial level. As the series progresses through time, it may or may not return to this initial level.
The second and third characteristics of a data series are its trend and seasonality, respectively. A
series may or may not exhibit one or both of these characteristics.

There are different exponential smoothing algorithms to accommodate (a) level only, (b) level and
trend, and (c) level, trend, and seasonality. The table below presents several of the common
algorithms. We will limit our applications to single exponential smoothing for (a), Holt’s method
for (b), and Winters’ method for (c). All are very ad hoc in nature. A method is chosen on the basis
of viewing the series and observing which characteristics are present. Even though no thought is
given to the process that generates the series being analyzed, these methods can work quite well.

Exponential smoothing, types of algorithm
(Page references are to Newbold and Bos.)

ONE-PARAMETER METHODS
- Single exponential smoothing, pp. 185-192
- Double exponential smoothing (Brown's) and discounted least squares, pp. 217-221
- Triple exponential (Brown's), p. 222

MULTI-PARAMETER BASIC METHODS
- Holt's (two-parameter) for trend, pp. 193-199
- Winters' (three-parameter) for trend and seasonal (multiplicative model is commonest), pp. 199-210

MODIFIED MULTI-PARAMETER METHODS
- Exponential trend, pp. 210-213
- Damped trend, pp. 213-217
pp. 213217 Box and Jenkins came along later and put serious effort into analyzing the underlying structure of
any time series. Their efforts culminated in a true modeling procedure (i.e., ARIMA modeling)
designed to capture any data generating process. The procedure came to be known as the
BoxJenkins (BJ) methodology. As time went on, it was discovered that several of the commonly
used exponential smoothing algorithms were special cases of a class of ARIMA models. It was also discovered that, with fewer decisions to make, exponential can be more robust than BJ.
Chapter 9 in Newbold and Bos deals with ARIMA models. 24 The simple exponential smoothing algorithm Consider the following single parameter forecasting equation:
Ŷ_{t+1} = α Y_t + α(1−α) Y_{t−1} + α(1−α)² Y_{t−2} + ...        (1)

As long as |1 − α| < 1, each successive term will have a smaller (absolute) value to multiply by. Actually, for series with only non-negative values, the smoothing parameter (sometimes called the smoothing constant) is 0 < α < 1. This convergent infinite series can also be written as

Ŷ_{t+1} = α Σ_{i=0}^{∞} (1−α)^i Y_{t−i}
Economists use something called the Koyck transformation to convert the above infinite series to a finite one. To do so, begin by subtracting 1 from each time subscript in Equation 1:

Ŷ_t = α Y_{t−1} + α(1−α) Y_{t−2} + ...

Multiply both sides by (1 − α):

(1−α) Ŷ_t = α(1−α) Y_{t−1} + α(1−α)² Y_{t−2} + ...

Substitute the left-hand side of the equation directly above in place of everything after the first term on the right-hand side of Equation 1:

Ŷ_{t+1} = α Y_t + (1−α) Ŷ_t

This equation is called the recurrence form of the simple exponential smoothing algorithm. (See Newbold and Bos, page 187.) Notice that the right-hand side of the recurrence form can be reworked and written as:

Ŷ_{t+1} = Ŷ_t + α(Y_t − Ŷ_t)

(Economists like Cagan and Nerlove used this in their partial adjustment models long ago.)

Notice that the term (Y_t − Ŷ_t) in the equation directly above is defined as e_t. So, this equation
can be written as:

Ŷ_{t+1} = Ŷ_t + α e_t

This equation is called the error correction form of the simple exponential smoothing algorithm. (See Newbold and Bos, page 192.) It shows that exponential smoothing has a very simple recursion formula. Some fraction of the present error is added to the present forecast to produce the revised forecast. The error correction form can also be written as:

Ŷ_{t+1} − Ŷ_t = α e_t        (2)

Items to note:

1. There is one parameter to estimate, α. If α = 1, Ŷ_{t+1} = Y_t, the naive no-change method. If α → 0, little weight is given to the most recent terms and almost as much is given to distant terms.
This becomes equivalent to a long simple average.

2. In most books, the smoothing equation for exponential smoothing is usually written using "L" notation as:

Ŷ_{t+1} ≡ L_t = α Y_t + (1−α) L_{t−1}

where L_t is the level of the series that is calculated at time t. The level is the smoothed (or averaged) value of the series, and it is unobservable. Try not to get confused by the notation L_t. It is a forecast for Y in time period t+1 that is made in time period t. That is, it is identical to Ŷ_{t+1}.

3. For this model, Ŷ_{t+h} = L_t. No matter how far ahead you forecast, the forecast is the same value. So, if we are truly at the end of our data set, the best prediction that we can achieve is the very last forecast that we made. To appreciate this, return to the error correction form. Surely, into the future we have no useful information about the size or direction of any irregularities that take place. So, the best that we can do is to assume that their expected value will be zero. Under these circumstances, the error correction form shows us what the best forecast will be.
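Notes 1-3 above can be captured in a few lines of code. Here is a minimal sketch (Python; the function name and the sample call are my own choices, with the first five sales values taken from the example table later in this handout) of the error correction form and the flat h-step-ahead forecast:

```python
def ses_forecasts(y, alpha, h=3):
    """Simple exponential smoothing via the error correction form:
    next forecast = present forecast + alpha * (present error)."""
    f = [y[0]]  # initialize the first forecast with the first observation
    for obs in y:
        e = obs - f[-1]              # present one-step-ahead error
        f.append(f[-1] + alpha * e)  # revised forecast
    # f[:-1] pairs with the observed periods; f[-1] is the level L_t,
    # which is the forecast for every horizon h (a flat forecast)
    return f[:-1], [f[-1]] * h

within, ahead = ses_forecasts([103, 86, 84, 84, 92], alpha=0.3)
# within is approximately 103, 103, 97.90, 93.73, 90.81 -- the same
# one-step-ahead forecasts as in the worked example later in this handout
```

Because the h-step-ahead forecast never changes, `ahead` is simply the last level repeated h times, which is exactly the point of note 3.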
VAR model that is usually called the error correction model. They are not related.

Connection with time-series models

Substitute Y_{t+1} − ε_{t+1} for Ŷ_{t+1} in Equation 1 and you have a data generating process. By following the same algebraic steps (lag one time period, multiply through by (1−α), and make substitutions), you can produce a simpler form, which turns out to be an ARIMA(0, 1, 1) model (autoregressive integrated moving average). You don't know this yet, but we will see this when we address the Box-Jenkins methodology. The end result is:

Y_t − Y_{t−1} = ε_t − (1−α) ε_{t−1}

or, using the lag operator L,

(1 − L) Y_t = [1 − (1−α) L] ε_t        (3)
Section 9.2 of Newbold and Bos, pp. 424-427, shows the algebra. Single exponential smoothing is a special case of the ARIMA models estimated using the Box-Jenkins procedure.

Single exponential smoothing is optimal when the data are generated by the above process.
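To make this concrete, here is a small simulation sketch (Python; the generator, seed, and parameter values are my own illustration, not from Newbold and Bos): it builds a series from the ARIMA(0, 1, 1) process in equation (3) and checks that smoothing with the matching α gives smaller one-step-ahead errors than a poorly chosen α.

```python
import random

def simulate_arima_0_1_1(n, alpha, seed=1):
    """Generate Y following (1 - L)Y_t = [1 - (1 - alpha)L] eps_t,
    i.e., Y_t = Y_{t-1} + eps_t - (1 - alpha) * eps_{t-1}."""
    rng = random.Random(seed)
    y, eps_prev = [100.0], 0.0
    for _ in range(n - 1):
        eps = rng.gauss(0, 1)
        y.append(y[-1] + eps - (1 - alpha) * eps_prev)
        eps_prev = eps
    return y

def ses_mse(y, alpha):
    """Mean squared one-step-ahead error of simple exponential smoothing."""
    f, sq = y[0], []
    for obs in y[1:]:
        sq.append((obs - f) ** 2)
        f += alpha * (obs - f)  # error correction update
    return sum(sq) / len(sq)

y = simulate_arima_0_1_1(500, alpha=0.3)
# smoothing with the matching alpha should beat an alpha that chases noise
print(ses_mse(y, 0.3) < ses_mse(y, 0.95))
```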
Newbold and Bos, page 190 (reproduced on the next page), show several synthetic series produced using equation (3) with various α values.

[Figure 6.2: Generated time series for which simple exponential smoothing yields optimal forecasts. (a) α = 0, (b) α = 0.5, (c) α = 0.9. Page 190, from Newbold and Bos.]

Estimation

There are two estimation procedures. One approach, sometimes followed by software programs, is
to convert the exponential smoothing problem into an equivalent ARIMA model and estimate the model by maximum likelihood. Alternatively, an iterative search for α can be conducted. A value for α is selected that gives the smallest sum of squared errors (SSE). A way to proceed is as follows. One could do a coarse search to get an idea of the optimal α, say from 0.05 to 0.95 in steps of 0.05, followed by a finer search, say in steps of 0.01 in the interval containing the optimum value. We will use this search method when we demonstrate how to use a spreadsheet to do the procedure and also in Assignment 4.

Problems when doing the estimation
1. You need a starting value for the level in the recurrence relation.
2. The smoothed values are unobservable.

Steps to deal with the problems above

1. Calculate the initial smoothed estimate, L_0, by averaging the first few observations. (Using L_0 = Y_1 and starting the search with the second observation works about as well.) [Note: this is not the method used in most commercial forecasting software, which uses backcasting to get the initial smoothed value. See Newbold and Bos, pp. 227-229.]

2. Using the initial estimate calculated in Step 1 and a value for α of 0.01 (or an educated guess), smooth the entire within-sample set of observations. Since the smoothed values are the same as the forecasts, calculate the sum of the one-step-ahead squared forecast errors, SSE = Σ(Y_t − L_{t−1})².

Repeat Step 2 using a different α until the "best" α is found, that is, the one that generates the smallest SSE.

Forecasting

The one-step-ahead forecast for period t+1 is L_t. The h-steps-ahead forecast is also L_t (because
your estimate of the error is its expected value of zero). In contrast with the naive no-change method, the exponential smoothing forecast is an average of past values. Single exponential smoothing can be used on any series, but it works best on a stationary series. If the series has trend, you can expect it to work terribly. In fact, if a series has trend and you permit the search for α to go beyond the 0.95 limit suggested in the Estimation section above, you would typically find the optimal value for α to be 1 (i.e., a unit root) or even a value greater than 1. There is nothing wrong with the algorithm if this should occur. It is following your instructions by trying to capture the behavior of the series with one parameter. Thus, if a series looks like it is not returning to its level over all of the observations, i.e., if it is trending or has seasonality, another algorithm should be used that can address these issues. These other algorithms, e.g., Holt's method or Winters' method, have additional parameters that assist with a solution.

Example
We take an example using data from Table 6.1 on page 187 of Newbold and Bos. The table contains 30 years of annual sales data (possibly artificial). I used the first 22 years as my within-sample data set and reserved the last 8 years for out-of-sample testing (shown in bold). To initialize the exponential smoothing algorithm, I used the first actual value as my first forecast, as Newbold and Bos suggest. I then did a grid search using Excel® (instructions will appear in Assignment 4) and found that α = 0.3 gave me the lowest sum of squared errors (SSE). I forecast one-step ahead using α = 0.3 through the last 8 years.

Table: Comparison of one-step-ahead forecast accuracy, within-sample, of single exponential
smoothing with smoothing parameter α = 0.3 and the naive no-change method. (The Alpha and SSE columns report the within-sample grid search for the smoothing parameter.)

Time  Actual  Forecast    Error  Upper p.i.  Lower p.i.  Naive  N.Error  Alpha      SSE
  1     103    103.00                                                            2248.41
  2      86    103.00   -17.00     121.24      84.76      103     -17     0.05   2371.79
  3      84     97.90   -13.90     116.14      79.66       86      -2     0.10   2338.46
  4      84     93.73    -9.73     111.97      75.49       84       0     0.15   2298.62
  5      92     90.81     1.19     109.06      72.57       84       8     0.20   2267.37
  6      92     91.17     0.83     109.41      72.92       92       0     0.25   2250.40
  7      94     91.42     2.58     109.66      73.17       92       2     0.30   2248.41
  8     102     92.19     9.81     110.44      73.95       94       8     0.35   2260.65
  9     103     95.13     7.87     113.38      76.89      102       1     0.40   2286.04
 10     103     97.49     5.51     115.74      79.25      103       0     0.45   2323.53
 11      82     99.15   -17.15     117.39      80.90      103     -21     0.50   2372.19
 12     123     94.00    29.00     112.25      75.76       82      41     0.55   2431.35
 13     106    102.70     3.30     120.95      84.46      123     -17     0.60   2500.58
 14     103    103.69    -0.69     121.94      85.45      106      -3     0.65   2579.73
 15      99    103.48    -4.48     121.73      85.24      103      -4     0.70   2668.91
 16     106    102.14     3.86     120.38      83.89       99       7     0.75   2768.54
 17     107    103.30     3.70     121.54      85.05      106       1     0.80   2879.33
 18     115    104.41    10.59     122.65      86.16      107       8     0.85   3002.36
 19      95    107.59   -12.59     125.83      89.34      115     -20     0.90   3139.10
 20     104    103.81     0.19     122.05      85.57       95       9     0.95   3291.47
 21     102    103.87    -1.87     122.11      85.62      104      -2     1.00   3462.00
 22     101    103.31    -2.31     121.55      85.06      102      -1
 23     112    102.61     9.39     120.86      84.37      101      11
 24     104    105.43    -1.43     123.67      87.19      112      -8
 25     105    105.00     0.00     123.25      86.76      104       1
 26     116    105.00    11.00     123.25      86.76      105      11
 27     100    108.30    -8.30     126.55      90.06      116     -16
 28     128    105.81    22.19     124.05      87.57      100      28
 29     109    112.47    -3.47     130.71      94.22      128     -19
 30     122    111.43    10.57     129.67      93.18      109      13

          RMSE (ES)   Theil's U   RMSE (naive)
Within      10.35        0.81        12.84
Ex post     10.58        0.69        15.32
Stdev       10.60
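The coarse-then-fine grid search from the Estimation section, applied to the 22 within-sample observations in the table above, can be sketched as follows (Python; the function names are my own):

```python
def ses_sse(y, alpha):
    """One-step-ahead sum of squared errors for single exponential smoothing,
    using the first actual value as the first forecast."""
    f, sse = y[0], 0.0
    for obs in y[1:]:
        e = obs - f       # one-step-ahead forecast error
        sse += e * e
        f += alpha * e    # error correction update
    return sse

def grid_search(y):
    """Coarse search over alpha = 0.05, ..., 0.95 in steps of 0.05, then a
    finer 0.01-step search in the neighborhood of the coarse winner."""
    coarse = [round(0.05 * k, 2) for k in range(1, 20)]
    best = min(coarse, key=lambda a: ses_sse(y, a))
    fine = [round(best + 0.01 * k, 2) for k in range(-4, 5)]
    return min(fine, key=lambda a: ses_sse(y, a))

# the 22 within-sample sales values from the table above
y = [103, 86, 84, 84, 92, 92, 94, 102, 103, 103, 82,
     123, 106, 103, 99, 106, 107, 115, 95, 104, 102, 101]
# ses_sse(y, 0.30) is about 2248.4, matching the SSE column above
```

The coarse pass lands on α = 0.30 (SSE ≈ 2248.4, the minimum of the Alpha/SSE columns above), and the fine pass then settles on a value near 0.3.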
Fall '11, Bernard Morzuch