EE 562a
Homework Solutions 3
February 16, 2011

This is just a mathematical way of saying:
• The first component of $\mathrm{Re}\{x(u,t)\}$ is a Gaussian random variable with mean $m$ and variance $\sigma^2$.
• The second component of $\mathrm{Re}\{x(u,t)\}$ is zero with probability one.
• The first component of $\mathrm{Im}\{x(u,t)\}$ is zero with probability one.
• The second component of $\mathrm{Im}\{x(u,t)\}$ is equal to $2m$ minus the first component of $\mathrm{Re}\{x(u,t)\}$, with probability one.
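The last bullet can be sanity-checked numerically. The sketch below uses assumed example values $m = 1$, $\sigma = 2$ (not from the problem): if the first component of $\mathrm{Re}\{x(u,t)\}$ is $N(m, \sigma^2)$, then $2m$ minus it is again Gaussian with the same mean $m$ and variance $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 1.0, 2.0                 # assumed example values, not from the problem

re_1 = rng.normal(m, sigma, size=200_000)   # first component of Re{x(u,t)}
im_2 = 2 * m - re_1                         # last bullet: Im second component

# 2m - N(m, sigma^2) is again N(m, sigma^2): mean 2m - m = m, variance unchanged
print(im_2.mean(), im_2.std())              # both close to m = 1 and sigma = 2
```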
(c) Our solution for part (a) was
\[
x_o(u) = \sigma a(u,1) \begin{pmatrix} 1 \\ i \end{pmatrix},
\]
so the pseudocovariance (unconjugated covariance) is
\[
\widetilde{K}_x = \sigma^2 \begin{pmatrix} 1 & i \\ i & -1 \end{pmatrix},
\]
which is not the all-zeros matrix $O$.
This suggests that we need more than one real random variable to generate $x(u)$ with $\widetilde{K}_x = O$. Necessary and sufficient conditions for $\widetilde{K}_x = O$ are
\[
K_r = K_y = \tfrac{1}{2}\mathrm{Re}\{K_x\} = \frac{\sigma^2}{2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
K_{yr} = \tfrac{1}{2}\mathrm{Im}\{K_x\} = \frac{\sigma^2}{2} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
The requirement on the correlation matrices means that the components of both $r_o(u)$ and $y_o(u)$ are uncorrelated, while the cross-correlation matrix implies
\[
E\{y_o(u,1)\, r_o(u,1)\} = E\{y_o(u,2)\, r_o(u,2)\} = 0,
\]
\[
E\{y_o(u,1)\, r_o(u,2)\} = \frac{\sigma^2}{2} \implies y_o(u,1) = r_o(u,2) \text{ with probability } 1,
\]
\[
E\{y_o(u,2)\, r_o(u,1)\} = -\frac{\sigma^2}{2} \implies y_o(u,2) = -r_o(u,1) \text{ with probability } 1,
\]
where the equalities with probability $1$ follow because each component has variance $\sigma^2/2$, so these cross-correlations meet the Cauchy-Schwarz bound with equality.
It is clear that two real random variables are required:
\[
r_o(u) = \frac{\sigma}{\sqrt{2}} \begin{pmatrix} a(u,1) \\ a(u,2) \end{pmatrix},
\qquad
y_o(u) = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} r_o(u) = \frac{\sigma}{\sqrt{2}} \begin{pmatrix} a(u,2) \\ -a(u,1) \end{pmatrix}.
\]
Adding in the mean value gives
\[
r(u) = r_o(u) + \begin{pmatrix} m \\ 0 \end{pmatrix},
\qquad
y(u) = y_o(u) + \begin{pmatrix} 0 \\ m \end{pmatrix}.
\]
It is now clear that $x(u) = r(u) + i y(u)$ has the desired second-order description. The number of real random variables required to simulate the vector $x(u)$ in this case is two.
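The construction for part (c) can be verified by simulation. A sketch with assumed example values $m = 1$, $\sigma = 2$: draw $a(u,1)$, $a(u,2)$ as i.i.d. $N(0,1)$, form $x(u) = r(u) + i\,y(u)$ as above, and estimate the mean, covariance, and pseudocovariance.

```python
import numpy as np

rng = np.random.default_rng(1)
m, sigma, N = 1.0, 2.0, 500_000       # assumed example values
a1 = rng.standard_normal(N)
a2 = rng.standard_normal(N)

# r_o(u) and y_o(u) as in part (c), then add the means [m, 0] and [0, m]
r = np.stack([sigma / np.sqrt(2) * a1 + m, sigma / np.sqrt(2) * a2])
y = np.stack([sigma / np.sqrt(2) * a2, -sigma / np.sqrt(2) * a1 + m])
x = r + 1j * y                        # shape (2, N): N draws of the 2-vector

xo = x - x.mean(axis=1, keepdims=True)
K = xo @ xo.conj().T / N              # covariance: approx sigma^2 [[1, i], [-i, 1]]
Kt = xo @ xo.T / N                    # pseudocovariance: approx the zero matrix O
print(np.round(K, 1))
print(np.round(Kt, 1))
```

The estimated pseudocovariance comes out near $O$, as the conditions in part (c) require.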
(d) Let me call the complex random vector $x(u)$ for this part of the problem as well, and again $x(u) = r(u) + i y(u)$.
For this part of the problem $K_x$ is nonsingular and the pseudocovariance is $O$, so it will require four real random variables to generate $x(u)$. The design follows the standard method.
Let $x_o(u) = H w(u)$, where $w(u)$ is a 2-dimensional complex random vector with $m_w = 0$, $K_w = I$ and $\widetilde{K}_w = O$. Such a $w(u)$ can be constructed as follows:
\[
w(u) = \begin{pmatrix} w(u,1) \\ w(u,2) \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} a(u,1) + i\,a(u,3) \\ a(u,2) + i\,a(u,4) \end{pmatrix}.
\]
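As a quick numerical check (a sketch, with $a(u,1),\dots,a(u,4)$ taken as i.i.d. standard normal samples), this $w(u)$ indeed has $K_w = I$ and pseudocovariance $O$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 400_000
a = rng.standard_normal((4, N))      # a(u,1)..a(u,4), i.i.d. N(0,1)

# w(u) = (1/sqrt(2)) [a(u,1) + i a(u,3), a(u,2) + i a(u,4)]^T
w = (a[0:2] + 1j * a[2:4]) / np.sqrt(2)

K_w = w @ w.conj().T / N             # approx the identity I
Kt_w = w @ w.T / N                   # approx the all-zeros matrix O
print(np.round(K_w, 1))
print(np.round(Kt_w, 1))
```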
We solve for $H$ from the relation $K_x = HH^\dagger$. You can perform this using either the direct, eigen, or $LDL^\dagger$ methods. Here is the sequence of eliminations for the $LDL^\dagger$ method:
\[
\sigma^2 \begin{pmatrix} 3 & i \\ -i & 3 \end{pmatrix} \rightarrow \sigma^2 \begin{pmatrix} 3 & i \\ 0 & 8/3 \end{pmatrix} = DL^\dagger.
\]
We take $H = LD^{1/2}$ to get
\[
H = \sigma \begin{pmatrix} \sqrt{3} & 0 \\ -i/\sqrt{3} & \sqrt{8/3} \end{pmatrix}.
\]
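Since $K_x = \sigma^2 \begin{pmatrix} 3 & i \\ -i & 3 \end{pmatrix}$ is Hermitian positive definite, $H = LD^{1/2}$ is exactly its lower-triangular Cholesky factor, which gives an easy numerical double-check (a sketch with assumed $\sigma = 1$):

```python
import numpy as np

sigma = 1.0                                    # assumed value for the check
K_x = sigma**2 * np.array([[3, 1j], [-1j, 3]])

# Lower-triangular factor with K_x = H H^dagger; matches LD^{1/2} above
H = np.linalg.cholesky(K_x)
print(np.round(H, 3))   # approx [[1.732, 0], [-0.577j, 1.633]]
```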
In order to find the probability density function, you need to have read the notes. In particular, equation (4.254) on page 88 of the notes yields
\[
p_{x(u)}(z) = \frac{1}{\pi^2 |K_x|} \exp\!\left[-(z - m_x)^\dagger K_x^{-1} (z - m_x)\right],
\]
where $z \in \mathbb{C}^2$. This holds only when $\widetilde{K}_x = O$.
The inverse is simple:
\[
K_x^{-1} = \frac{1}{8\sigma^2} \begin{pmatrix} 3 & -i \\ i & 3 \end{pmatrix}.
\]
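Both the inverse and the determinant $|K_x| = \sigma^4(9 - 1) = 8\sigma^4$ that appears in the density can be confirmed numerically (a sketch with assumed $\sigma = 1$):

```python
import numpy as np

sigma = 1.0                                    # assumed value for the check
K_x = sigma**2 * np.array([[3, 1j], [-1j, 3]])

# The claimed inverse (1 / (8 sigma^2)) [[3, -i], [i, 3]]
K_inv = np.array([[3, -1j], [1j, 3]]) / (8 * sigma**2)

print(np.allclose(K_x @ K_inv, np.eye(2)))                # True: really the inverse
print(np.isclose(np.linalg.det(K_x).real, 8 * sigma**4))  # True: |K_x| = 8 sigma^4
```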
4. (a) This question requires you to interpret random variables as elements in an inner product space. Using the given inner product definition and the metric that it induces, it follows that
i.
\[
\|x(1)\| = (x(1), x(1))^{1/2} = \left[E\{x^2(u,1)\}\right]^{1/2} = \left(\tfrac{1}{2}\right)^{1/2} = \frac{1}{\sqrt{2}}
\]
ii. The mean-squared length of $x(u)$ is
\[
E\{\|x(u)\|^2\} = \mathrm{Tr}(K_x) = \frac{21}{8}.
\]
(b)
\[
o_1 = x_1, \qquad w_1 = \frac{1}{(o_1, o_1)^{1/2}} \cdot o_1
\]
\[
o_2 = x_2 - (x_2, w_1)\, w_1 = x_2 - \frac{(x_2, o_1)}{(o_1, o_1)} \cdot o_1, \qquad w_2 = \frac{1}{(o_2, o_2)^{1/2}} \cdot o_2
\]
\[
o_3 = x_3 - (x_3, w_1)\, w_1 - (x_3, w_2)\, w_2 = x_3 - \frac{(x_3, o_1)}{(o_1, o_1)} \cdot o_1 - \frac{(x_3, o_2)}{(o_2, o_2)} \cdot o_2, \qquad w_3 = \frac{1}{(o_3, o_3)^{1/2}} \cdot o_3
\]
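The steps above are ordinary Gram-Schmidt orthonormalization with $(\cdot,\cdot)$ as the inner product. A finite-dimensional sketch, using the Euclidean dot product as a stand-in for $(x, y) = E\{x(u)\,y(u)\}$ and three hypothetical vectors in place of $x_1, x_2, x_3$:

```python
import numpy as np

def gram_schmidt(vectors, inner=np.dot):
    """Orthonormalize a list of vectors exactly as in part (b)."""
    w = []
    for x in vectors:
        # o_k = x_k minus its projections onto the earlier w's
        o = x - sum(inner(x, wk) * wk for wk in w)
        w.append(o / inner(o, o) ** 0.5)   # w_k = o_k / (o_k, o_k)^(1/2)
    return w

# Hypothetical linearly independent inputs, for illustration only
x1, x2, x3 = np.array([1., 1, 0]), np.array([1., 0, 1]), np.array([0., 1, 1])
w1, w2, w3 = gram_schmidt([x1, x2, x3])

# Gram matrix of the outputs: identity means the w's are orthonormal
G = np.array([[np.dot(a, b) for b in (w1, w2, w3)] for a in (w1, w2, w3)])
print(np.round(G, 6))
```

With the inner product $(x, y) = E\{x(u)\,y(u)\}$, the same recursion yields random variables $w_1, w_2, w_3$ that are uncorrelated with unit second moment.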