
# lect03 - Lecture Notes 3: Two Random Variables


Topics:

- Joint, Marginal, and Conditional PMFs
- Joint, Marginal, and Conditional CDFs, PDFs
- One Discrete and One Continuous Random Variable
- Signal Detection: MAP Rule
- Functions of Two Random Variables

## Joint, Marginal, and Conditional PMFs

- Let $X$ and $Y$ be discrete random variables on the same probability space.
- They are completely specified by their joint pmf:
  $$p_{X,Y}(x, y) = P\{X = x, Y = y\}, \quad x \in \mathcal{X},\; y \in \mathcal{Y}$$
  By the axioms of probability,
  $$\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p_{X,Y}(x, y) = 1$$
- Example: consider the pmf $p(x, y)$ described by the following table:

  | $p(x, y)$ | $x = 1$ | $x = 2.5$ |
  | --------- | ------- | --------- |
  | $y = -3$  | $1/4$   | $1/8$     |
  | $y = -1$  | $1/8$   | $1/4$     |
  | $y = 2$   | $1/8$   | $1/8$     |

- To find $p_X(x)$, the marginal pmf of $X$, we use the law of total probability:
  $$p_X(x) = \sum_{y \in \mathcal{Y}} p(x, y), \quad x \in \mathcal{X}$$
- The conditional pmf of $X$ given $Y = y$ is defined as
  $$p_{X|Y}(x \mid y) = \frac{p_{X,Y}(x, y)}{p_Y(y)}, \quad p_Y(y) \neq 0,\; x \in \mathcal{X}$$
  Check that if $p_Y(y) \neq 0$, then $p_{X|Y}(\cdot \mid y)$ is a pmf for $X$.
- Chain rule: $p_{X,Y}(x, y) = p_X(x)\, p_{Y|X}(y \mid x) = p_Y(y)\, p_{X|Y}(x \mid y)$
- $X$ and $Y$ are said to be independent if $p_{X,Y}(x, y) = p_X(x)\, p_Y(y)$ for every $(x, y) \in \mathcal{X} \times \mathcal{Y}$, which is equivalent to
  $$p_{X|Y}(x \mid y) = p_X(x), \quad p_Y(y) \neq 0,\; x \in \mathcal{X}$$

## Bayes Rule for PMFs

Given $p_X(x)$ and $p_{Y|X}(y \mid x)$ for every $(x, y) \in \mathcal{X} \times \mathcal{Y}$, we can find $p_{X|Y}(x \mid y)$:

$$p_{X|Y}(x \mid y) = \frac{p_{X,Y}(x, y)}{p_Y(y)} = \frac{p_X(x)\, p_{Y|X}(y \mid x)}{p_Y(y)} = \frac{p_{Y|X}(y \mid x)}{\sum_{x' \in \mathcal{X}} p_{X,Y}(x', y)}\, p_X(x) = \frac{p_{Y|X}(y \mid x)}{\sum_{x' \in \mathcal{X}} p_{Y|X}(y \mid x')\, p_X(x')}\, p_X(x)$$

The final formula is entirely in terms of the known quantities $p_X(x)$ and $p_{Y|X}(y \mid x)$.

## Example: Binary Symmetric Channel

Consider a binary communication channel with input $X \in \{0, 1\}$, noise $Z \in \{0, 1\}$, and output $Y \in \{0, 1\}$. The bit sent is $X \sim \mathrm{Bern}(p)$, $0 \le p \le 1$; the noise is $Z \sim \mathrm{Bern}(\varepsilon)$, $0 \le \varepsilon \le 1/2$; the bit received is $Y = (X + Z) \bmod 2 = X \oplus Z$; and $X$ and $Z$ are independent. Find:

1. $p_{X|Y}(x \mid y)$
2. $p_Y(y)$
3. $P\{X \neq Y\}$, the probability of error

To find $p_{X|Y}(x \mid y)$ we use Bayes rule:

$$p_{X|Y}(x \mid y) = \frac{p_{Y|X}(y \mid x)}{\sum_{x' \in \mathcal{X}} p_{Y|X}(y \mid x')\, p_X(x')}\, p_X(x)$$

We know $p_X(x)$, but we need to find $p_{Y|X}(y \mid x)$:

$$\begin{aligned}
p_{Y|X}(y \mid x) &= P\{Y = y \mid X = x\} = P\{X \oplus Z = y \mid X = x\} \\
&= P\{x \oplus Z = y \mid X = x\} = P\{Z = y \oplus x \mid X = x\} \\
&= P\{Z = y \oplus x\} \quad \text{since $Z$ and $X$ are independent} \\
&= p_Z(y \oplus x)
\end{aligned}$$

Therefore

$$\begin{aligned}
p_{Y|X}(0 \mid 0) &= p_Z(0 \oplus 0) = p_Z(0) = 1 - \varepsilon \\
p_{Y|X}(0 \mid 1) &= p_Z(0 \oplus 1) = p_Z(1) = \varepsilon \\
p_{Y|X}(1 \mid 0) &= p_Z(1 \oplus 0) = p_Z(1) = \varepsilon \\
p_{Y|X}(1 \mid 1) &= p_Z(1 \oplus 1) = p_Z(0) = 1 - \varepsilon
\end{aligned}$$

Plugging into the Bayes rule equation, we obtain

$$p_{X|Y}(0 \mid 0) = \frac{p_{Y|X}(0 \mid 0)}{p_{Y|X}(0 \mid 0)\, p_X(0) + p_{Y|X}(0 \mid 1)\, p_X(1)}\, p_X(0) = \frac{(1 - \varepsilon)(1 - p)}{(1 - \varepsilon)(1 - p) + \varepsilon p}$$
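The pmf table and the marginal/conditional definitions above can be checked numerically. A minimal sketch (the Python code and the `Fraction`-based representation are illustrative choices, not part of the original notes):

```python
from fractions import Fraction as F

# Joint pmf from the table example: x in {1, 2.5}, y in {-3, -1, 2}.
pmf = {
    (1, -3): F(1, 4), (2.5, -3): F(1, 8),
    (1, -1): F(1, 8), (2.5, -1): F(1, 4),
    (1,  2): F(1, 8), (2.5,  2): F(1, 8),
}

# By the axioms of probability, the joint pmf sums to 1 over X x Y.
assert sum(pmf.values()) == 1

def marginal_x(x):
    """Law of total probability: p_X(x) = sum over y of p(x, y)."""
    return sum(p for (xx, y), p in pmf.items() if xx == x)

def marginal_y(y):
    """p_Y(y) = sum over x of p(x, y)."""
    return sum(p for (x, yy), p in pmf.items() if yy == y)

def cond_x_given_y(x, y):
    """p_{X|Y}(x|y) = p(x, y) / p_Y(y), defined when p_Y(y) != 0."""
    return pmf[(x, y)] / marginal_y(y)

print(marginal_x(1))          # p_X(1) = 1/4 + 1/8 + 1/8 = 1/2
print(cond_x_given_y(1, -3))  # (1/4) / (3/8) = 2/3

# X and Y are independent iff p(x, y) = p_X(x) p_Y(y) for every (x, y).
independent = all(pmf[(x, y)] == marginal_x(x) * marginal_y(y)
                  for (x, y) in pmf)
print(independent)            # False: p(1, -3) = 1/4 but p_X(1) p_Y(-3) = 3/16
```

Using exact fractions avoids floating-point round-off when verifying that the probabilities sum to 1.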
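The Bayes rule posterior for the binary symmetric channel can likewise be verified by direct enumeration. A sketch assuming illustrative parameter values $p = 0.2$ and $\varepsilon = 0.1$ (these values and the helper names are mine, not from the notes):

```python
# Binary symmetric channel: Y = X xor Z, X ~ Bern(p), Z ~ Bern(eps), X and Z independent.
# Illustrative parameters (assumed, not from the notes): p = 0.2, eps = 0.1.
p, eps = 0.2, 0.1

p_x = {0: 1 - p, 1: p}
p_z = {0: 1 - eps, 1: eps}

def p_y_given_x(y, x):
    """p_{Y|X}(y|x) = p_Z(y xor x), as derived in the notes."""
    return p_z[y ^ x]

def p_y(y):
    """Law of total probability: p_Y(y) = sum over x of p_{Y|X}(y|x) p_X(x)."""
    return sum(p_y_given_x(y, x) * p_x[x] for x in (0, 1))

def p_x_given_y(x, y):
    """Bayes rule: p_{X|Y}(x|y) = p_{Y|X}(y|x) p_X(x) / p_Y(y)."""
    return p_y_given_x(y, x) * p_x[x] / p_y(y)

# Closed form from the notes: p_{X|Y}(0|0) = (1-eps)(1-p) / ((1-eps)(1-p) + eps p).
closed_form = (1 - eps) * (1 - p) / ((1 - eps) * (1 - p) + eps * p)
assert abs(p_x_given_y(0, 0) - closed_form) < 1e-12

# Probability of error: X != Y exactly when Z = 1, so P{X != Y} = eps.
p_error = sum(p_x[x] * p_z[z] for x in (0, 1) for z in (0, 1) if (x ^ z) != x)
assert abs(p_error - eps) < 1e-12

print(p_x_given_y(0, 0), p_y(1), p_error)
```

Enumerating the four $(x, z)$ outcomes makes the error-probability answer transparent: the received bit differs from the sent bit exactly when the noise bit is 1.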

This note was uploaded on 11/28/2009 for the course EE 278, taught by Professor Balaji Prabhakar during the Fall '09 term at Stanford.
