ECE 5510: Random Processes
Lecture Notes
Fall 2009
Lecture 12
Today: (1) Joint r.v. Expectation Review (2) Transformations
of Joint r.v.s, Y&G 4.6 (3) Random Vectors (R.V.s), Y&G 5.2
• HW 5 due Tue, Oct 20 at 5pm; Appl. Assignment 3 due same day (at midnight). I have OH today 1-3.
• By the end of the lecture today, we will have covered all of Chapter 4 except for 4.11 (Bivariate Gaussian) (and the Matlab section, which we won't cover).
• Required: Watch three videos on youtube (16 min total) prior to Tue Oct 20 (posted on web and WebCT). We will spend at least 30 min in class starting HW 6 and answering other questions.
0.1  Expectation Review
Short "quiz". Given r.v.s X_1 and X_2:

1. What is Var[X_1 + X_2]?
2. What is the definition of Cov(X_1, X_2)?
3. What do we call two r.v.s with zero covariance?
4. What is the definition of the correlation coefficient?
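As a quick numerical check of the quiz answers, the sketch below (my own illustration, not from the notes) simulates a correlated pair and verifies that Var[X_1 + X_2] = Var[X_1] + Var[X_2] + 2 Cov(X_1, X_2), and computes the correlation coefficient rho = Cov(X_1, X_2) / (sigma_1 sigma_2):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Build a correlated pair: X2 depends partly on X1
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)

var_sum = np.var(x1 + x2)
cov = np.mean((x1 - x1.mean()) * (x2 - x2.mean()))   # sample Cov(X1, X2)
identity = np.var(x1) + np.var(x2) + 2 * cov         # Var X1 + Var X2 + 2 Cov
rho = cov / (np.std(x1) * np.std(x2))                # correlation coefficient

print(var_sum, identity)   # these two agree (the identity holds exactly)
print(rho)
```

The first two printed values agree to floating-point precision because the variance identity is an algebraic fact about sample moments, not just an approximation.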
Note: We often define several random variables to be independent and to have identical distributions (CDF or pdf or pmf). We use the abbreviation "i.i.d." for "independent and identically distributed".
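A minimal sketch of what i.i.d. means in simulation (my own example): each draw below is independent of the others and comes from one identical distribution, so the sample statistics converge to that distribution's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# n i.i.d. draws X_i ~ N(2, 9): same distribution, independent draws
x = rng.normal(loc=2.0, scale=3.0, size=n)

print(x.mean())   # close to 2.0, the common mean
print(x.var())    # close to 9.0, the common variance
```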
1  Transformations of Joint r.v.s
Random variables are often a function of multiple other random variables. The example the book uses is a good one: a multiple-antenna receiver. How do you choose from the antenna signals?

1. Just choose the best one: this uses the max(X_1, X_2) function.
2. Add them together: X_1 + X_2. 'Combining'.
Figure 1: A function Y of two random variables, Y = g(X_1, X_2), might be viewed as a 3D map of what value Y takes for any given input coordinate (X_1, X_2), like this topology map of Black Mountain, Utah. Contour lines give Y = y for many values of y, which is useful to find the preimage. One preimage of importance is the set of coordinates (X_1, X_2) for which Y ≤ y.
3. Add them in some ratio: X_1/σ_1 + X_2/σ_2. 'Maximal Ratio Combining'.
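To see why the ratio weighting in item 3 helps, here is a sketch (my own illustration; the signal value and noise levels are assumed for the example) comparing simple combining (item 2) against the σ-weighted combining of item 3 when the two antennas have unequal noise. The noisier antenna gets less weight, so the weighted estimate has lower variance.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
s = 1.0                              # transmitted value (assumed, for illustration)
sigma1, sigma2 = 0.5, 2.0            # per-antenna noise std devs (assumed)
x1 = s + sigma1 * rng.normal(size=n) # antenna 1 observations
x2 = s + sigma2 * rng.normal(size=n) # antenna 2 observations

# Item 2: simple combining, normalized to be an unbiased estimate of s
est_sum = (x1 + x2) / 2
# Item 3: ratio weighting X1/sigma1 + X2/sigma2, normalized the same way
est_mrc = (x1 / sigma1 + x2 / sigma2) / (1 / sigma1 + 1 / sigma2)

print(np.var(est_sum))   # (sigma1^2 + sigma2^2)/4 ≈ 1.06
print(np.var(est_mrc))   # 2/(1/sigma1 + 1/sigma2)^2 = 0.32
```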
We may have more than one output: we'll have Y_1 = aX_1 + bX_2 and Y_2 = cX_1 + dX_2, where a, b, c, d are constants. If we choose wisely to match to the losses in the channel, we won't lose any of the information that is contained in X_1 and X_2. Ideas that exploit
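One way to see the "no information lost" claim: the pair (Y_1, Y_2) is a linear map of (X_1, X_2), and whenever ad - bc ≠ 0 that map is invertible, so (X_1, X_2) can be recovered exactly from (Y_1, Y_2). A sketch with assumed constants:

```python
import numpy as np

a, b, c, d = 1.0, 2.0, 3.0, 1.0     # assumed constants with a*d - b*c != 0
A = np.array([[a, b], [c, d]])

rng = np.random.default_rng(4)
x = rng.normal(size=(2, 1000))      # columns are samples of (X1, X2)
y = A @ x                           # Y1 = a*X1 + b*X2, Y2 = c*X1 + d*X2

# Since det(A) = a*d - b*c != 0, the transformation is invertible:
x_back = np.linalg.solve(A, y)      # recover (X1, X2) from (Y1, Y2)
print(np.allclose(x_back, x))       # True: nothing was lost
```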