ECE 6010
Lecture 9 – Linear Minimum Mean-Square Error Filtering
Background
Recall that for random variables $X$ and $Y$ with finite variance, the MSE $E[(X - h(Y))^2]$ is minimized by $h(Y) = E[X \mid Y]$. That is, the best estimate of $X$ using a measured value of $Y$ is the conditional average of $X$ given $Y$. One aspect of this estimate is that:
The error is orthogonal to the data.
More precisely, the error $X - E[X \mid Y]$ is orthogonal to $Y$ and to every function of $Y$:
$$E\big[(X - E[X \mid Y])\, g(Y)\big] = 0$$
for all measurable functions $g$. We will assume that $E[g^2(Y)] < \infty$.
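The orthogonality property is easy to check numerically. The sketch below uses an assumed toy model ($X = Y^2 + W$ with independent noise $W$, so that $E[X \mid Y] = Y^2$ in closed form) and verifies that the error is empirically uncorrelated with several functions of $Y$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Assumed toy model for illustration: X = Y^2 + noise, so E[X | Y] = Y^2.
y = rng.normal(size=n)
x = y**2 + rng.normal(size=n)

err = x - y**2  # the error X - E[X | Y]

# The error should be (empirically) orthogonal to Y and to functions of Y.
for g in (y, np.sin(y), y**3):
    print(abs(np.mean(err * g)))  # each sample average is close to 0
```

Each printed value is a Monte Carlo estimate of $E[(X - E[X \mid Y])\, g(Y)]$ and shrinks toward zero as $n$ grows.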
We want to show that $h$ minimizes $E[(X - h(Y))^2]$ if and only if
$$E\big[(X - h(Y))\, g(Y)\big] = 0 \quad \text{(orthogonality)}$$
for all measurable $g$ such that $E[g^2(Y)] < \infty$.
First, since $g(Y)$ is a function of $Y$, conditioning on $Y$ (iterated expectation) gives
$$E\big[(X - E[X \mid Y])\, g(Y)\big] = E\Big[E\big[(X - E[X \mid Y]) \mid Y\big]\, g(Y)\Big] = E\big[(E[X \mid Y] - E[X \mid Y])\, g(Y)\big] = 0.$$
Conversely, suppose for some $g$, $E[(X - h(Y))\, g(Y)] \neq 0$. Consider the estimate
$$\hat{h}(Y) = h(Y) + \alpha g(Y), \qquad \text{where } \alpha = \frac{E[(X - h(Y))\, g(Y)]}{E[g^2(Y)]}.$$
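The resulting MSE comes from expanding the square:

```latex
E[(X - \hat{h}(Y))^2]
  = E[(X - h(Y))^2] - 2\alpha\, E[(X - h(Y))\, g(Y)] + \alpha^2 E[g^2(Y)],
```

and substituting $\alpha = E[(X - h(Y))\, g(Y)] / E[g^2(Y)]$ collapses the last two terms into $-\,\big(E[(X - h(Y))\, g(Y)]\big)^2 / E[g^2(Y)]$.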
Then
$$E[(X - \hat{h}(Y))^2] = E[(X - h(Y))^2] - \frac{\big(E[(X - h(Y))\, g(Y)]\big)^2}{E[g^2(Y)]} < E[(X - h(Y))^2],$$
so $h$ does not minimize the MSE.
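This improvement step can be checked numerically. The sketch below reuses the assumed toy model $X = Y^2 + W$, takes a deliberately poor estimator $h(Y) = 0$, and shows that the corrected estimator $h(Y) + \alpha g(Y)$ has strictly smaller empirical MSE:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Same assumed toy model as before: X = Y^2 + noise.
y = rng.normal(size=n)
x = y**2 + rng.normal(size=n)

h = np.zeros(n)      # deliberately poor estimator h(Y) = 0
g = y**2 - 1         # a function of Y correlated with the error

corr = np.mean((x - h) * g)     # empirical E[(X - h(Y)) g(Y)], nonzero here
alpha = corr / np.mean(g**2)    # the correction coefficient from the proof

mse_h = np.mean((x - h)**2)
mse_hat = np.mean((x - h - alpha * g)**2)
print(mse_h, mse_hat)           # the corrected estimator has smaller MSE
```

Because $\alpha$ is computed from the same sample averages, the empirical MSE reduction equals $(\text{corr})^2 / \overline{g^2}$ exactly, mirroring the identity in the proof.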
Suppose now we are given two random processes $\{X_t\}$ and $\{Y_t\}$ that are statistically related (that is, not independent). Suppose, to begin, that $T = \mathbb{R}$. Suppose we observe $Y$ over the interval $[a, b]$, and based on the information gained we want to estimate $X_t$ for some fixed $t$ as a function of $\{Y_\tau,\ a \le \tau \le b\}$. That is, we form
$$\hat{X}_t = f(\{Y_\tau,\ a \le \tau \le b\})$$
for some functional $f$ mapping functions to real numbers.
If $t < b$: we say that the operation of the function is smoothing.
If $t = b$: we say that the operation of the function is filtering.
If $t > b$: we say that the operation of the function is prediction.
The error in the estimate is $X_t - \hat{X}_t$. The mean-squared error is $E[(X_t - \hat{X}_t)^2]$.
Fact (built on our previous intuition): The MSE $E[(X_t - \hat{X}_t)^2]$ is minimized by the conditional expectation
$$\hat{X}_t = E[X_t \mid Y_\tau,\ a \le \tau \le b].$$
Furthermore, the orthogonality principle applies: $X_t - E[X_t \mid Y_\tau,\ a \le \tau \le b]$ is orthogonal to every function of $\{Y_\tau,\ a \le \tau \le b\}$.
While we know the theoretical result, it is difficult in general to compute the desired
conditional expectation.
Definition 1  Suppose $\{Y_t\}$ is second order. Let $\mathcal{H}_y$ be the set of all random variables of the form $\sum_{i=1}^{n} a_i Y_{t_i} + c$ for $n \in \mathbb{Z}^+$ and $a_i, c \in \mathbb{R}$ and $t_i \in [a, b]$.
Note that $\mathcal{H}_y$ may include infinite sequences, so we assume mean-square limits. The set $\mathcal{H}_y$ contains mean-square derivatives, mean-square integrals, and other linear transformations of $\{Y_t,\ t \in [a, b]\}$.
Spring '08 — Stites, M — Xt, Wiener filter, Fredholm integral equation