EE278: Review Session #5
Han-I Su
May 4, 2009

Outline
1. Basic Probability
2. Detection and Estimation
3. MMSE Linear Estimate
4. Functions of Random Variables
5. Inequalities
6. Homework Hints

Basic Probability
Some properties of events, discrete random variables (PMF), continuous random variables (PDF), and mixed random variables are similar.

Law of total probability
Events: P(B) = Σ_i P(A_i ∩ B) if the A_i's partition Ω
PMF: p_X(x) = Σ_y p_{X,Y}(x, y)
PDF: f_X(x) = ∫ f_{X,Y}(x, y) dy
Mixed: f_Y(y) = Σ_θ p_Θ(θ) f_{Y|Θ}(y|θ) and p_Θ(θ) = ∫ p_{Θ|Y}(θ|y) f_Y(y) dy

Bayes rule
Events: P(A_j|B) = P(B|A_j) P(A_j) / Σ_i P(B|A_i) P(A_i) if the A_i's partition Ω
Mixed: p_{Θ|Y}(θ|y) = f_{Y|Θ}(y|θ) p_Θ(θ) / Σ_θ' f_{Y|Θ}(y|θ') p_Θ(θ')
       f_{Y|Θ}(y|θ) = p_{Θ|Y}(θ|y) f_Y(y) / ∫ p_{Θ|Y}(θ|y') f_Y(y') dy'

The denominator is obtained from the law of total probability. An explanation: p_{Θ|Y}(θ|y) is a PMF over Θ, which is proportional to f_{Y|Θ}(y|θ) with normalization constant f_Y(y) after weighting by the prior p_Θ(θ).

Detection and Estimation
X → Channel (p_{Y|X} or f_{Y|X}) → Y → Decoder/Estimator → X̂(Y)

Detection
X is discrete. Given Y = y, find a decision rule D(y) to minimize the probability of error.

MSE Estimation

X can be discrete or continuous. Given Y = y, find an estimate X̂(y) to minimize the MSE.

Example (discrete X)
X and Z are independent, and Y = X + Z, where p_X(x) = 1/3 for x = x0, x1, x2 and f_Z(z) = (1/2)e^{-|z|}.
[Figure: the three curves f_{Y|X}(y|x) p_X(x) for x = x0, x1, x2, with the observed y marked.]

Plotting f_{Y|X}(y|x) p_X(x) helps in finding decision rules or computing estimates. In general the signal distribution p_X(x) and the channel f_{Y|X}(y|x) are given or easier to find. The observation Y = y gives us three values: f_{Y|X}(y|x0) p_X(x0), f_{Y|X}(y|x1) p_X(x1), and f_{Y|X}(y|x2) p_X(x2).
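The three values above can be computed directly. A minimal sketch using the Laplace noise density from the example; the signal points x0, x1, x2 and the observed y below are hypothetical stand-ins, not values from the slides:

```python
import math

# Hypothetical signal points and observation; the noise density
# f_Z(z) = (1/2) e^{-|z|} is the one from the example.
x_values = [0.0, 2.0, 4.0]   # stand-ins for x0, x1, x2
p_x = 1.0 / 3.0              # uniform prior p_X(x)
y = 3.5                      # the observed Y = y

def f_y_given_x(y, x):
    # Density of Y = X + Z given X = x, i.e. the Laplace density at z = y - x
    return 0.5 * math.exp(-abs(y - x))

# The three values f_{Y|X}(y|x_i) p_X(x_i), one per signal point
values = [f_y_given_x(y, x) * p_x for x in x_values]
```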
Detection
Given Y = y, the probability of having sent x_i is proportional to f_{Y|X}(y|x_i) p_X(x_i). Since f_{Y|X}(y|x2) p_X(x2) is greater than the other two, D(y) = x2.

MSE Estimation

The MMSE estimate is E(X|Y) and the MMSE is E(Var(X|Y)). How do we compute E(X|Y)?

1. By the definition of conditional expectation, we need to find p_{X|Y}(x|y), which is a PMF for X.
2. Since f_{Y|X}(y|x) is easier to find, we use Bayes rule. Recall that p_{X|Y}(x_i|y) is proportional to f_{Y|X}(y|x_i) p_X(x_i) with normalization.
3. Fix some Y = y at which we know f_{Y|X}(y|x_i) p_X(x_i) explicitly. In the example, z = y − x > 0 and z < 0 correspond to (1/2)e^{−(y−x)} and (1/2)e^{y−x}.
4. If f_{Y|X}(y|x0) p_X(x0) = 0.2, f_{Y|X}(y|x1) p_X(x1) = 0.1, and f_{Y|X}(y|x2) p_X(x2) = 0.5, then

   E(X|Y = y) = (0.2 x0 + 0.1 x1 + 0.5 x2) / (0.2 + 0.1 + 0.5)
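The steps above can be sketched end to end for the example's Laplace-noise channel. The signal points x0, x1, x2 are hypothetical placeholders; the MAP rule and the normalization in `mmse_estimate` follow the recipe in steps 1-4:

```python
import math

# Example setup: X uniform on {x0, x1, x2}, Z ~ (1/2) e^{-|z|}, Y = X + Z.
# The numeric signal points are illustrative, not from the slides.
xs = [0.0, 2.0, 4.0]          # stand-ins for x0, x1, x2
p_x = 1.0 / 3.0               # uniform prior

def f_y_given_x(y, x):
    # Laplace density of Z = y - x
    return 0.5 * math.exp(-abs(y - x))

def detect(y):
    # MAP decision rule: pick the x maximizing f_{Y|X}(y|x) p_X(x)
    return max(xs, key=lambda x: f_y_given_x(y, x) * p_x)

def mmse_estimate(y):
    # E(X|Y=y): normalize the weights f_{Y|X}(y|x) p_X(x) into p_{X|Y}(x|y),
    # then take the conditional mean
    weights = [f_y_given_x(y, x) * p_x for x in xs]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, xs)) / total
```

By symmetry of the Laplace noise, an observation exactly at the middle signal point gives a conditional mean equal to that point.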
MMSE Linear Estimate
The MMSE linear estimate of X given Y is

X̂ = (Cov(X, Y)/σ_Y²) (Y − E(Y)) + E(X)

and its MSE is

MSE = (1 − ρ_{XY}²) σ_X²

We only need to know the first and second moments of X and Y. In general, computing moments is easier than finding distributions.

Functions of Random Variables
The distribution of X is given and Y = g(X).

Discrete: p_Y(y) = Σ_{x: g(x)=y} p_X(x)
Continuous: f_Y(y) = Σ_{x: g(x)=y} f_X(x)/|g′(x)|
CDF: F_Y(y) = P{Y ≤ y} = P{x : g(x) ≤ y}

Note that p_Y(y) and f_Y(y) should be expressed in terms of y instead of x.
Application: Y = F_X(X) is U[0, 1].

Inequalities
Union of events: P(∪_{i=1}^n A_i) ≤ Σ_{i=1}^n P(A_i)
Cauchy-Schwarz: (E(XY))² ≤ E(X²) E(Y²); equality iff X = aY
Jensen: if g(x) is convex, then E(g(X)) ≥ g(E(X))
Markov: if X ≥ 0 and a > 1, then P{X ≥ a E(X)} ≤ 1/a
Chebyshev: P{|X − E(X)| ≥ a σ_X} ≤ 1/a²
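The Markov and Chebyshev bounds can be sanity-checked numerically. A sketch on an Exponential(1) sample; the sample size and the choice a = 2 are arbitrary assumptions for illustration:

```python
import random

# Empirical check of Markov and Chebyshev on Exponential(1) draws.
random.seed(2)
xs = [random.expovariate(1.0) for _ in range(100_000)]

mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
std = var ** 0.5

a = 2.0
# Empirical P{X >= a E(X)}, which Markov bounds by 1/a
p_markov = sum(x >= a * mean for x in xs) / len(xs)
# Empirical P{|X - E(X)| >= a sigma}, which Chebyshev bounds by 1/a^2
p_cheby = sum(abs(x - mean) >= a * std for x in xs) / len(xs)
```

For the exponential distribution both bounds are loose: the true tail probabilities (e^{-2} and roughly e^{-3}) sit well below 1/a = 0.5 and 1/a² = 0.25.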
Homework Hints
Problem 1: Uncorrelated: E(XZ) = E(X) E(Z)

Problem 2: Since we know the distribution of Z|X = x, compute the expectations associated with Z via iterated expectation. For example, E(XZ) = E_X[E(XZ|X)] = E_X[X E(Z|X)].

Problem 3: The linear MMSE is an upper bound on the MMSE. The bound is tight if the MMSE estimate is the same as the MMSE linear estimate.

Problem 4: Let A_i = {i < X ≤ i + 1} for i = −k, ..., k − 1. Then

E(X X̃) = Σ_{i=−k}^{k−1} E(X X̃ | A_i) P(A_i) = Σ_{i=−k}^{k−1} E[X (i + 1/2) | A_i] P(A_i)

You will also need Σ_{i=1}^{k} i² = k(k + 1)(2k + 1)/6.

Problem 5: E(X|Y) is Gaussian, so its distribution is determined by the mean and variance. E(X|Y) is also the MMSE linear estimate aY + b, and thus we can find the mean and variance. For part b, E(Y²|X = x) = (E(Y|X = x))² + Var(Y|X = x).
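The closed-form sum needed in Problem 4 is easy to verify against direct summation, as a quick sanity check:

```python
# Closed form for sum_{i=1}^{k} i^2, checked against brute-force summation.
def sum_of_squares(k):
    return k * (k + 1) * (2 * k + 1) // 6

# Compare the formula with the direct sum for several values of k
checks = [sum_of_squares(k) == sum(i * i for i in range(1, k + 1))
          for k in (1, 5, 10, 100)]
```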
This note was uploaded on 04/07/2010 for the course EE 278 taught by Professor Balaji Prabhakar during the Spring '09 term at Stanford.