Now, we show (2.9). It is easy to see that
\[
\tilde\theta(B)W_t^{*} = \tilde\theta(B)\,\tilde\theta^{-1}(B)\,\tilde\phi(B)X_t = \tilde\phi(B)X_t.
\]
It suffices to show that $\{W_t^{*}\}$ is a white noise. This is left as a HW problem.
2.6 Properties of $\bar X_n$, $\hat\gamma_X(h)$ and $\hat\rho_X(h)$

2.6.1 For $\bar X_n$
Recall that, for observations $x_1, \dots, x_n$ of a time series, the sample mean is
\[
\bar x = \frac{1}{n}\sum_{t=1}^{n} x_t.
\]
The sample autocovariance function is
\[
\hat\gamma_X(h) = \frac{1}{n}\sum_{t=1}^{n-|h|}\big(x_{t+|h|} - \bar x\big)\big(x_t - \bar x\big), \quad \text{for } -n < h < n.
\]
The sample autocorrelation function (sample ACF) is
\[
\hat\rho_X(h) = \frac{\hat\gamma_X(h)}{\hat\gamma_X(0)}.
\]
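These definitions translate directly into code. The following is a minimal sketch (the function names `sample_acvf` and `sample_acf` are mine, not from the notes); note that, as in the definition above, the divisor is $n$ rather than $n - |h|$:

```python
import numpy as np

def sample_acvf(x, h):
    """Sample autocovariance at lag h, with divisor n as in the notes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    h = abs(h)                      # gamma-hat is symmetric in h
    xbar = x.mean()
    return np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n

def sample_acf(x, h):
    """Sample autocorrelation: acvf(h) / acvf(0)."""
    return sample_acvf(x, h) / sample_acvf(x, 0)
```

By construction, `sample_acf(x, 0)` is always 1, and `sample_acvf(x, 0)` agrees with the (divisor-$n$) sample variance.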
Estimation of $\mu_X$: The moment estimator of the mean $\mu_X$ of a stationary process $\{X_t\}$ is the sample mean
\[
\bar X_n = n^{-1}\sum_{t=1}^{n} X_t. \tag{2.10}
\]
Obviously, it is unbiased; i.e., $E(\bar X_n) = \mu_X$. Its mean squared error is
\begin{align*}
\mathrm{Var}(\bar X_n) = E(\bar X_n - \mu_X)^2
&= n^{-2}\sum_{i=1}^{n}\sum_{j=1}^{n}\mathrm{Cov}(X_i, X_j)
 = n^{-2}\sum_{i=1}^{n}\sum_{j=1}^{n}\gamma_X(i-j)\\
&= n^{-2}\sum_{i-j=-n}^{n}\big(n - |i-j|\big)\,\gamma_X(i-j)
 = n^{-1}\sum_{h=-n}^{n}\Big(1 - \frac{|h|}{n}\Big)\gamma_X(h)\\
&= \underbrace{\frac{\gamma_X(0)}{n}}_{\text{is } \mathrm{Var}(\bar X_n) \text{ when } \{X_t\} \text{ are iid}}
 + \frac{2}{n}\sum_{h=1}^{n-1}\Big(1 - \frac{h}{n}\Big)\gamma_X(h).
\end{align*}
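The re-indexing step from the double sum over $(i, j)$ to a single sum over $h = i - j$ can be verified numerically. A quick check (the helper names and the choice of an MA(1)-type autocovariance with $\sigma^2 = 1$ are mine, for illustration only):

```python
def var_mean_double(gamma, n):
    # n^{-2} * sum_{i=1}^n sum_{j=1}^n gamma(i - j)
    return sum(gamma(i - j) for i in range(n) for j in range(n)) / n**2

def var_mean_single(gamma, n):
    # n^{-1} * sum_h (1 - |h|/n) gamma(h); terms at |h| = n have weight 0,
    # so summing over h = -(n-1), ..., n-1 suffices.
    return sum((1 - abs(h) / n) * gamma(h) for h in range(-n + 1, n)) / n

def gamma_ma1(h):
    # Autocovariance of X_t = mu + W_t - 0.8 W_{t-1} with sigma^2 = 1:
    # gamma(0) = 1.64, gamma(+-1) = -0.8, and 0 otherwise.
    if h == 0:
        return 1.64
    if abs(h) == 1:
        return -0.8
    return 0.0
```

Both routines return the same value for every $n$, and they agree with the closed form $1.64/n - 1.6(n-1)/n^2$ derived below.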
• Depending on the correlation structure, the standard error of $\bar X_n$ may be smaller or larger than in the white noise case.
– Consider $X_t = \mu + W_t - 0.8 W_{t-1}$, where $\{W_t\} \sim \mathrm{WN}(0, \sigma^2)$. Then
\[
\mathrm{Var}(\bar X_n) = \frac{\gamma_X(0)}{n} + \frac{2}{n}\sum_{h=1}^{n-1}\Big(1 - \frac{h}{n}\Big)\gamma_X(h)
= \frac{1.64\sigma^2}{n} - \frac{1.6(n-1)\sigma^2}{n^2} < \frac{1.64\sigma^2}{n}.
\]
– And if $X_t = \mu + W_t + 0.8 W_{t-1}$, where $\{W_t\} \sim \mathrm{WN}(0, \sigma^2)$, then
\[
\mathrm{Var}(\bar X_n) = \frac{\gamma_X(0)}{n} + \frac{2}{n}\sum_{h=1}^{n-1}\Big(1 - \frac{h}{n}\Big)\gamma_X(h)
= \frac{1.64\sigma^2}{n} + \frac{1.6(n-1)\sigma^2}{n^2} > \frac{1.64\sigma^2}{n}.
\]
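These two variance formulas are easy to confirm by Monte Carlo. A sketch (simulation sizes and seed are my choices): simulate both MA(1) models many times and compare the empirical variance of $\bar X_n$ with the iid benchmark $1.64\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, sigma2 = 100, 20_000, 1.0

def var_of_mean(theta):
    """Empirical Var(X-bar_n) for X_t = W_t + theta * W_{t-1}, Gaussian WN."""
    w = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n + 1))
    x = w[:, 1:] + theta * w[:, :-1]     # each row is one simulated series
    return x.mean(axis=1).var()

v_minus = var_of_mean(-0.8)   # theta = -0.8: should fall below 1.64*sigma2/n
v_plus = var_of_mean(+0.8)    # theta = +0.8: should exceed it
iid_value = 1.64 * sigma2 / n
```

With $n = 100$, the theoretical values are $1.64/n \mp 1.6(n-1)/n^2$, i.e., about $0.00076$ and $0.03224$, which the simulation reproduces closely.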
• If $\gamma_X(h) \to 0$ as $h \to \infty$, we have
\[
\big|\mathrm{Var}(\bar X_n)\big| \le \frac{\gamma_X(0)}{n} + \frac{2\sum_{h=1}^{n}|\gamma_X(h)|}{n} \to 0 \quad \text{as } n \to \infty,
\]
since the Cesàro average of a sequence tending to zero also tends to zero. Thus, $\bar X_n$ converges in mean square to $\mu$.
• If $\sum_{h=-\infty}^{\infty}|\gamma_X(h)| < \infty$, then
\begin{align*}
n\,\mathrm{Var}(\bar X_n) &= \sum_{h=-n}^{n}\Big(1 - \frac{|h|}{n}\Big)\gamma_X(h)
= \gamma_X(0) + \frac{2\sum_{h=1}^{n}(n-h)\gamma_X(h)}{n}
= \gamma_X(0) + \frac{2\sum_{h=1}^{n-1}\sum_{i=1}^{h}\gamma_X(i)}{n}\\
&\to \gamma_X(0) + 2\sum_{i=1}^{\infty}\gamma_X(i)
= \sum_{h=-\infty}^{\infty}\gamma_X(h)
= \gamma_X(0)\sum_{h=-\infty}^{\infty}\rho_X(h),
\end{align*}
where the limit holds because the Cesàro means of the convergent partial sums $\sum_{i=1}^{h}\gamma_X(i)$ converge to the same limit.
One interpretation could be that, instead of $\mathrm{Var}(\bar X_n) \approx \gamma_X(0)/n$, we have $\mathrm{Var}(\bar X_n) \approx \gamma_X(0)/(n/\tau)$ with $\tau = \sum_{h=-\infty}^{\infty}\rho_X(h)$. The effect of the correlation is a reduction of the sample size from $n$ to $n/\tau$.
Example 2.10. For linear processes, i.e., if $X_t = \mu + \sum_{j=-\infty}^{\infty}\psi_j W_{t-j}$ with $\sum_{j=-\infty}^{\infty}|\psi_j| < \infty$, then
\begin{align*}
\sum_{h=-\infty}^{\infty}|\gamma_X(h)|
&= \sum_{h=-\infty}^{\infty}\Big|\sigma^2\sum_{j=-\infty}^{\infty}\psi_j\psi_{j+h}\Big|
\le \sum_{h=-\infty}^{\infty}\sigma^2\sum_{j=-\infty}^{\infty}|\psi_j|\cdot|\psi_{j+h}|\\
&= \sigma^2\sum_{j=-\infty}^{\infty}|\psi_j|\sum_{h=-\infty}^{\infty}|\psi_{j+h}|
= \sigma^2\Big(\sum_{j=-\infty}^{\infty}|\psi_j|\Big)^2 < \infty.
\end{align*}
To make inference about $\mu_X$ (e.g., is $\mu_X = 0$?) using the sample mean $\bar X_n$, it is necessary to know the asymptotic distribution of $\bar X_n$:

If $\{X_t\}$ is a Gaussian stationary time series, then, for any $n$,
\[
\sqrt{n}\,(\bar X_n - \mu_X) \sim N\bigg(0, \sum_{h=-n}^{n}\Big(1 - \frac{|h|}{n}\Big)\gamma_X(h)\bigg).
\]
Then one can obtain exact confidence intervals for $\mu_X$, or approximate confidence intervals if it is necessary to estimate $\gamma_X(\cdot)$.
For the linear process $X_t = \mu + \sum_{j=-\infty}^{\infty}\psi_j W_{t-j}$ with $\{W_t\} \sim \mathrm{IID}(0, \sigma^2)$, $\sum_{j=-\infty}^{\infty}|\psi_j| < \infty$ and $\sum_{j=-\infty}^{\infty}\psi_j \neq 0$, we have
\[
\sqrt{n}\,(\bar X_n - \mu_X) \sim \mathrm{AN}(0, \nu), \tag{2.11}
\]
where $\nu = \sum_{h=-\infty}^{\infty}\gamma_X(h) = \sigma^2\big(\sum_{j=-\infty}^{\infty}\psi_j\big)^2$.
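Result (2.11) suggests an approximate confidence interval $\bar X_n \pm z_{\alpha/2}\sqrt{\hat\nu/n}$, where $\hat\nu$ estimates $\sum_h \gamma_X(h)$. One common plug-in (a sketch, not the notes' prescription: the truncation rule $r = \lfloor\sqrt{n}\rfloor$ and the clamping of a negative estimate are heuristic choices of mine):

```python
import numpy as np

def ci_mean(x, r=None, z=1.96):
    """Approximate 95% CI for mu based on (2.11):
    Xbar +/- z * sqrt(nu_hat / n), with nu_hat a truncated sum of
    sample autocovariances."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = int(np.sqrt(n))          # heuristic truncation lag (assumption)
    xbar = x.mean()
    acvf = lambda h: np.sum((x[h:] - xbar) * (x[:n - h] - xbar)) / n
    nu = acvf(0) + 2 * sum(acvf(h) for h in range(1, r + 1))
    nu = max(nu, 0.0)                # guard against a negative plug-in estimate
    half = z * np.sqrt(nu / n)
    return xbar - half, xbar + half
```

For iid data, $\hat\nu \approx \hat\gamma_X(0)$ and the interval reduces to the familiar $\bar X_n \pm z_{\alpha/2}\,\hat\sigma/\sqrt{n}$.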
Spring '15, Dewei Wang.