using the result that the eigenvectors and eigenvalues of $\Sigma$ occur in pairs, i.e., $(v_c', v_s')'$ and $(v_s', -v_c')'$, where $v_c - iv_s$ denotes the eigenvector of $f_{xx}$. Show that
$$\tfrac{1}{2}(Z - \mu)'\,\Sigma^{-1}(Z - \mu) = (X - M)^*\, f^{-1}(X - M),$$
so $p(X) = p(Z)$ and we can identify the density of the complex multivariate normal variable $X$ with that of the real multivariate normal $Z$.
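A quick numerical sketch of the quadratic-form identity above, assuming the real covariance has the usual block form $\Sigma = \tfrac{1}{2}\begin{pmatrix} C & -Q \\ Q & C\end{pmatrix}$ for $f = C - iQ$ (the specific matrices below are arbitrary hypothetical values, and the means are taken as zero for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3

# Hermitian positive-definite "spectral matrix" f = C - iQ:
# C is symmetric, Q is skew-symmetric (random values for the check).
A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
f = A @ A.conj().T + p * np.eye(p)      # Hermitian PD
C, Q = f.real, -f.imag                  # so f = C - iQ

# Real 2p x 2p covariance of z = (x_c', x_s')'
Sigma = 0.5 * np.block([[C, -Q], [Q, C]])

# Arbitrary point; x = x_c - i x_s, z = (x_c', x_s')'
xc, xs = rng.standard_normal(p), rng.standard_normal(p)
z = np.concatenate([xc, xs])
x = xc - 1j * xs

lhs = 0.5 * z @ np.linalg.solve(Sigma, z)
rhs = (x.conj() @ np.linalg.solve(f, x)).real
print(np.isclose(lhs, rhs))
```

The two quadratic forms agree because the map $C - iQ \mapsto \begin{pmatrix} C & -Q \\ Q & C\end{pmatrix}$ preserves products and inverses, so the skew part contributes nothing to the real quadratic form.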
7.2 Prove $\hat{f}$ in (7.6) maximizes the log likelihood (7.5) by minimizing the negative of the log likelihood
$$L \ln|f| + L\,\mathrm{tr}\{\hat{f} f^{-1}\}$$
in the form
$$L \sum_i (\lambda_i - \ln \lambda_i - 1) + Lp + L \ln|\hat{f}|,$$
where the $\lambda_i$ values correspond to the eigenvalues in a simultaneous diagonalization of the matrices $f$ and $\hat{f}$; i.e., there exists a matrix $P$ such that $P^* f P = I$ and $P^* \hat{f} P = \mathrm{diag}(\lambda_1, \ldots, \lambda_p) = \Lambda$. Note, $\lambda_i - \ln \lambda_i - 1 \geq 0$ with equality if and only if $\lambda_i = 1$, implying $\Lambda = I$ maximizes the log likelihood and $f = \hat{f}$ is the maximizing value.
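The simultaneous diagonalization and the equality of the two likelihood expressions can be checked numerically. One standard construction (an assumption here, not taken from the text): with $f = RR^*$ (Cholesky) and $R^{-1}\hat{f}R^{-*} = U\Lambda U^*$, take $P = R^{-*}U$. A sketch with hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
p, L = 3, 50

def herm_pd(p):
    # Random Hermitian positive-definite matrix (stand-in for a spectral matrix)
    A = rng.standard_normal((p, p)) + 1j * rng.standard_normal((p, p))
    return A @ A.conj().T + p * np.eye(p)

f, fhat = herm_pd(p), herm_pd(p)

# Simultaneous diagonalization: P* f P = I and P* fhat P = diag(lam)
R = np.linalg.cholesky(f)
Rinv = np.linalg.inv(R)
lam, U = np.linalg.eigh(Rinv @ fhat @ Rinv.conj().T)
P = Rinv.conj().T @ U

assert np.allclose(P.conj().T @ f @ P, np.eye(p))
assert np.allclose(P.conj().T @ fhat @ P, np.diag(lam))

# Two forms of the negative log likelihood agree, and each term
# lam_i - ln(lam_i) - 1 is nonnegative (zero only at lam_i = 1).
neg_ll = L * np.linalg.slogdet(f)[1] + L * np.trace(fhat @ np.linalg.inv(f)).real
alt = L * np.sum(lam - np.log(lam) - 1) + L * p + L * np.linalg.slogdet(fhat)[1]
print(np.isclose(neg_ll, alt), np.all(lam - np.log(lam) - 1 >= 0))
```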
Section 7.3
7.3 Verify (7.18) and (7.19) for the mean-squared prediction error MSE in (7.11). Use the orthogonality principle, which implies
$$\mathrm{MSE} = E\Big[\Big(y_t - \sum_{r=-\infty}^{\infty} \beta_r' x_{t-r}\Big)\, y_t\Big]$$
and gives a set of equations involving the autocovariance functions. Then, use the spectral representations and Fourier transform results to get the final result. Next, consider the predicted series
$$\hat{y}_t = \sum_{r=-\infty}^{\infty} \beta_r' x_{t-r},$$
where $\beta_r$ satisfies (7.13). Show the ordinary coherence between $y_t$ and $\hat{y}_t$ is exactly the multiple coherence (7.20).
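The orthogonality principle behind the MSE expression has a simple finite-dimensional analogue in ordinary least squares: the residual is orthogonal to the regressors, so $E[(y - \beta'x)^2] = E[(y - \beta'x)\,y]$ at the optimal $\beta$. A sample-based sketch (the coefficients and sizes below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 10000, 3
x = rng.standard_normal((n, q))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)

# Least-squares coefficients; residuals are orthogonal to the columns of x
beta, *_ = np.linalg.lstsq(x, y, rcond=None)
resid = y - x @ beta

mse_direct = np.mean(resid**2)   # E[(y - b'x)^2]
mse_orth = np.mean(resid * y)    # E[(y - b'x) y], same value by orthogonality
print(np.isclose(mse_direct, mse_orth))
```

In the problem, the same argument is carried out with infinite filter sums and autocovariances in place of finite sample moments.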
7.4 Consider the complex regression model (7.28) in the form
$$Y = XB + V,$$
where $Y = (Y_1, Y_2, \ldots, Y_L)'$ denotes the observed DFTs after they have been reindexed and $X = (X_1, X_2, \ldots, X_L)'$ is a matrix containing the reindexed input vectors. The model is a complex regression model with $Y = Y_c - iY_s$, $X = X_c - iX_s$, $B = B_c - iB_s$, and $V = V_c - iV_s$ denoting the representation in terms of the usual cosine and sine transforms. Show the partitioned real regression model involving the $2L \times 1$ vector of cosine and sine transforms, say,
$$\begin{pmatrix} Y_c \\ Y_s \end{pmatrix} = \begin{pmatrix} X_c & -X_s \\ X_s & X_c \end{pmatrix} \begin{pmatrix} B_c \\ B_s \end{pmatrix} + \begin{pmatrix} V_c \\ V_s \end{pmatrix},$$
is isomorphic to the complex regression model in the sense that the real and imaginary parts of the complex model appear as components of the vectors in the real regression model. Use the usual regression theory to verify (7.27) holds. For example, writing the real regression model as $y = xb + v$, the isomorphism would imply
$$L\big(\hat{f}_{yy} - \hat{f}_{xy}^*\, \hat{f}_{xx}^{-1}\, \hat{f}_{xy}\big) = Y^*Y - Y^*X(X^*X)^{-1}X^*Y = y'y - y'x(x'x)^{-1}x'y.$$
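The equality of the complex and real residual sums of squares can be checked directly. A sketch with arbitrary hypothetical cosine and sine transforms (any $L$ and any number of inputs work):

```python
import numpy as np

rng = np.random.default_rng(3)
nL, q = 20, 2
Xc, Xs = rng.standard_normal((nL, q)), rng.standard_normal((nL, q))
Yc, Ys = rng.standard_normal(nL), rng.standard_normal(nL)

# Complex model quantities: Y = Yc - i Ys, X = Xc - i Xs
X, Y = Xc - 1j * Xs, Yc - 1j * Ys
ssr_complex = (Y.conj() @ Y
               - Y.conj() @ X @ np.linalg.solve(X.conj().T @ X, X.conj().T @ Y)).real

# Partitioned real model: y = (Yc', Ys')', x = [[Xc, -Xs], [Xs, Xc]]
x = np.block([[Xc, -Xs], [Xs, Xc]])
y = np.concatenate([Yc, Ys])
ssr_real = y @ y - y @ x @ np.linalg.solve(x.T @ x, x.T @ y)

print(np.isclose(ssr_complex, ssr_real))
```

The key fact is that $x'x$ is exactly the real block representation of $X^*X$, so the complex normal equations and the real ones produce the same fitted values.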
Section 7.4
7.5 Consider estimating the function
$$\psi_t = \sum_{r=-\infty}^{\infty} a_r' \beta_{t-r}$$
by a linear filter estimator of the form
$$\hat{\psi}_t = \sum_{r=-\infty}^{\infty} a_r' \hat{\beta}_{t-r},$$
where $\hat{\beta}_t$ is defined by (7.42). Show a sufficient condition for $\hat{\psi}_t$ to be an unbiased estimator, i.e., $E\,\hat{\psi}_t = \psi_t$, is
$$H(\omega) Z(\omega) = I$$
for all $\omega$. Similarly, show any other unbiased estimator satisfying the above condition has minimum variance (see Shumway and Dean, 1968), so the estimator given is a best linear unbiased (BLUE) estimator.
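The condition $H(\omega)Z(\omega) = I$ mirrors the finite-dimensional Gauss-Markov setup: a linear estimator $\hat{\beta} = Hy$ of $\beta$ in $y = Z\beta + v$ is unbiased exactly when $HZ = I$, and the least-squares choice has minimum variance among such estimators. A sketch of that analogue (dimensions and matrices here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 8, 3
Z = rng.standard_normal((n, q))

# Least-squares filter: H Z = I, so beta-hat = H y is unbiased
H = np.linalg.solve(Z.T @ Z, Z.T)
assert np.allclose(H @ Z, np.eye(q))

# Any other linear unbiased estimator is H2 = H + D with D Z = 0;
# its covariance (for white noise v) is H2 H2' = H H' + D D'.
M = np.eye(n) - Z @ H                    # projector onto the null space of Z'
D = rng.standard_normal((q, n)) @ M
H2 = H + D
assert np.allclose(H2 @ Z, np.eye(q))    # still unbiased

extra = H2 @ H2.T - H @ H.T              # equals D D', positive semidefinite
print(np.all(np.linalg.eigvalsh(extra) >= -1e-8))
```

In the problem, the frequency-domain filter $H(\omega)$ plays the role of $H$ at each $\omega$, with the spectral matrix replacing the white-noise covariance.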