[Google's cache of http://onlinecourses.science.psu.edu/stat510/node/27, as it appeared on 7 Aug 2010 06:05:04 GMT.]
STAT 510: Applied Time Series Analysis
Department of Statistics, Eberly College of Science
Section 2: Time Domain Models
State Space Models
Submitted by gfj100 on Sun, 03/28/2010 15:43
Regression in Time Series with Dependent Errors
We now turn to regression in time series: regression with dependent errors. Our basic time series model is

x_t = β z_t + y_t

where

z_t : the exogenous variable, information that is given (the independent variable in regression)
x_t : the observation (the dependent variable in regression)
y_t : assumed to be a zero-mean, stationary (Gaussian) time series
In particular, we want to find β. For now, we do not really care what is going on with y_t. If the errors are not iid and we plug the data into R's lm function, it will still return an estimate of β. However, that does not mean the estimate is any good: R simply returns something, which might be wrong. Certainly the p-values and standard errors cannot be trusted when the errors are correlated.

We saw earlier that if the dependence is not outrageously strong, the estimate of β is probably decent. The standard errors, however, could be bad, and so could the prediction intervals: we cannot trust anything that depends on the distribution. Geometrically, least squares should still give a reasonable estimate of β, but that does not mean other aspects of the fit are good.
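This can be checked with a small simulation (a sketch, not part of the course notes; the AR(1) errors with φ = 0.8, the time-trend covariate, and the use of NumPy in place of R's lm are illustrative choices). Over repeated samples, the ordinary least squares slope is roughly unbiased, but the naive standard error that lm-style output would report is far smaller than the actual spread of the estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta, phi = 200, 2.0, 0.8
n_reps = 500

beta_hats, naive_ses = [], []
for _ in range(n_reps):
    z = np.arange(n) / n                      # exogenous variable (here: a time trend)
    e = rng.normal(size=n)
    y = np.zeros(n)                           # AR(1) errors: strongly dependent
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    x = beta * z + y                          # observations

    # OLS fit of x on z with an intercept, as lm(x ~ z) would do
    Z = np.column_stack([np.ones(n), z])
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ coef
    sigma2 = resid @ resid / (n - 2)          # naive residual variance
    cov = sigma2 * np.linalg.inv(Z.T @ Z)     # naive (iid-errors) covariance
    beta_hats.append(coef[1])
    naive_ses.append(np.sqrt(cov[1, 1]))

print("mean OLS beta-hat      :", np.mean(beta_hats))   # close to the true beta = 2
print("average naive SE       :", np.mean(naive_ses))
print("true spread of beta-hat:", np.std(beta_hats))    # much larger than the naive SE
```

The slope estimate is fine on average, but the iid-based standard error understates the true sampling variability by a factor of roughly (1 + φ)/(1 - φ) in variance, which is exactly why the p-values cannot be trusted.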
We would like to be in a situation where everything is correct, i.e., we properly adjust for the dependence.
The formula above is the univariate case. In general we can have more than one covariate, and the model becomes the multivariate

x_t = β_1 z_{t1} + β_2 z_{t2} + ... + β_r z_{tr} + y_t = β' z_t + y_t,

where z_t and β are the vectors of exogenous variables and parameters:

z_t = (z_{t1}, z_{t2}, ..., z_{tr})'  and  β = (β_1, β_2, ..., β_r)'.
One thing we could do is simply regress on time, as above. We might also look at something like unemployment (x_t) regressed on GDP (z_t).
For our regression, what we want is for y_t to be not a general stationary process, but white noise. Assume for a minute that y_t is an AR(2), and assume also that we know the AR coefficients φ_1 and φ_2:

y_t = φ_1 y_{t-1} + φ_2 y_{t-2} + w_t,

where w_t is white noise.
How can we convert y_t into white noise? Can we multiply our observations y_1, y_2, y_3, ... by something? What if we multiply them by the AR polynomial φ(B) = 1 - φ_1 B - φ_2 B^2, where B is the backshift operator? We get the following transformation:

φ(B) y_t = y_t - φ_1 y_{t-1} - φ_2 y_{t-2},

which, by the AR(2) equation above, is equal to w_t.
This is how we can 'whiten' our time series.
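The whitening step can be sketched numerically (again a hand-rolled illustration, not from the notes; the coefficients φ_1 = 0.5, φ_2 = 0.3 are arbitrary stationary choices). Applying φ(B) to a simulated AR(2) series removes its autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
phi1, phi2 = 0.5, 0.3                 # AR(2) coefficients, assumed known

# simulate y_t = phi1*y_{t-1} + phi2*y_{t-2} + w_t
w = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + w[t]

# apply phi(B): phi(B) y_t = y_t - phi1*y_{t-1} - phi2*y_{t-2}
whitened = y[2:] - phi1 * y[1:-1] - phi2 * y[:-2]

def lag1_autocorr(s):
    s = s - s.mean()
    return (s[:-1] @ s[1:]) / (s @ s)

print("lag-1 autocorrelation before whitening:", lag1_autocorr(y))        # well away from 0
print("lag-1 autocorrelation after whitening :", lag1_autocorr(whitened)) # near 0
```

The whitened series is exactly the w_t that generated the simulation, so its sample autocorrelation is indistinguishable from that of white noise.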
Now, let's go back to our regression. We want to apply this transform so that the error term is no longer y_t but w_t; if it is w_t, we are in a regular regression situation with iid errors. We can multiply everything by φ(B):

φ(B) x_t = β φ(B) z_t + φ(B) y_t.

Now, φ(B) y_t is w_t, correct?
Are we still doing the same regression? We started out looking for the dependence between x_t and z_t, and multiplying both sides by φ(B) leaves the coefficient β unchanged: regressing the transformed observations φ(B) x_t on the transformed covariate φ(B) z_t estimates the same β, now with white-noise errors.
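The whole procedure can be put together in one sketch (illustrative only; it assumes the AR(2) error coefficients are known, which in practice would themselves be estimated): simulate x_t = β z_t + y_t with AR(2) errors, transform both sides with φ(B), and run ordinary least squares on the transformed variables.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 500, 2.0
phi1, phi2 = 0.5, 0.3                     # AR(2) error coefficients, assumed known

z = rng.normal(size=n)                    # exogenous variable
w = rng.normal(size=n)
y = np.zeros(n)                           # AR(2) errors
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + w[t]
x = beta * z + y                          # observed series

# multiply everything by phi(B); the transformed errors are the white noise w_t
x_star = x[2:] - phi1 * x[1:-1] - phi2 * x[:-2]
z_star = z[2:] - phi1 * z[1:-1] - phi2 * z[:-2]

# ordinary least squares on the transformed (whitened) variables
beta_hat = (z_star @ x_star) / (z_star @ z_star)
print("beta-hat from whitened regression:", beta_hat)   # close to the true beta = 2
```

Because the transformed regression has iid errors, the usual OLS standard errors and p-values for β are now trustworthy, which was the whole point of whitening.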
This note was uploaded on 09/10/2010 for the course STAT 510 at Pennsylvania State University, University Park.