Lecture Notes 4

Expectation
• Definition and Properties
• Mean and Variance
• Markov and Chebyshev Inequalities
• Covariance and Correlation
• Conditional Expectation

EE 278: Expectation 4 – 1

Expectation
• Let X ∈ 𝒳 be a discrete r.v. with pmf p_X(x) and let g(x) be a function of x. The expectation (or expected value or mean) of g(X) can be defined as

    E(g(X)) = \sum_{x \in \mathcal{X}} g(x) \, p_X(x)

• For a continuous r.v. X ∼ f_X(x), the expected value of g(X) can be defined as

    E(g(X)) = \int_{-\infty}^{\infty} g(x) \, f_X(x) \, dx

• Properties of expectation:
  ◦ If c is a constant, then E(c) = c
  ◦ Expectation is linear, i.e., for any constant a,

        E[a g_1(X) + g_2(X)] = a \, E(g_1(X)) + E(g_2(X))

  ◦ Fundamental Theorem of Expectation: If Y = g(X) ∼ p_Y(y), then

        E(Y) = \sum_{y \in \mathcal{Y}} y \, p_Y(y) = \sum_{x \in \mathcal{X}} g(x) \, p_X(x) = E(g(X))

    The corresponding formula for f_Y(y) uses integrals instead of sums:

        E(Y) = \int_{-\infty}^{\infty} y \, f_Y(y) \, dy

    Proof: We prove the theorem for discrete r.v.s. Consider

        E(Y) = \sum_y y \, p_Y(y)
             = \sum_y y \sum_{\{x : g(x) = y\}} p_X(x)
             = \sum_y \sum_{\{x : g(x) = y\}} y \, p_X(x)
             = \sum_y \sum_{\{x : g(x) = y\}} g(x) \, p_X(x)
             = \sum_x g(x) \, p_X(x)

    Thus E(Y) = E(g(X)) can be found using either f_X(x) or f_Y(y). It is often much easier to use f_X(x) than to first find f_Y(y) and then find E(Y).

• Remark: We know that a r.v. is completely specified by its cdf (pdf, pmf), so why do we need expectation?
  ◦ Expectation provides a summary or an estimate of the r.v. (a single number) instead of specifying the entire distribution.
  ◦ It is far easier to estimate the expectation of a r.v. from data than to estimate its distribution.
  ◦ Expectation can be used to bound or estimate probabilities of interesting events (as we shall see).

Mean and Variance
• The first moment (or mean) of X ∼ f_X(x) is the expectation

    E(X) = \int_{-\infty}^{\infty} x \, f_X(x) \, dx

• The second moment (or mean square or average power) of X is

    E(X^2) = \int_{-\infty}^{\infty} x^2 \, f_X(x) \, dx

• The variance of X is

    Var(X) = E[(X - E(X))^2]
           = E[X^2 - 2 X \, E(X) + (E(X))^2]
           = E(X^2) - 2 (E(X))^2 + (E(X))^2
           = E(X^2) - (E(X))^2

• The standard deviation of X is defined as σ_X = \sqrt{Var(X)}, i.e., Var(X) = σ_X^2

Mean and Variance for Famous R.V.s

    Random Variable   Mean         Variance
    Bern(p)           p            p(1 - p)
    Geom(p)           1/p          (1 - p)/p^2
    Binom(n, p)       np           np(1 - p)
    Poisson(λ)        λ            λ
    U[a, b]           (a + b)/2    (b - a)^2 / 12
    Exp(λ)            1/λ          1/λ^2
    N(μ, σ^2)         μ            σ^2

Expectation Can Be Infinite or May Not Exist
• Expectation can be infinite. For example,

    f_X(x) = 1/x^2 for 1 ≤ x < ∞ (and 0 otherwise)  ⇒  E(X) = \int_1^{\infty} (x / x^2) \, dx = \int_1^{\infty} (1/x) \, dx = ∞

• Expectation may not exist. To find conditions for expectation to exist, consider

    E(X) = \int_{-\infty}^{\infty} x \, f_X(x) \, dx = -\int_{-\infty}^{0} |x| \, f_X(x) \, dx + \int_{0}^{\infty} |x| \, f…
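The two routes to E(g(X)) in the Fundamental Theorem can be checked numerically. A minimal sketch, using a made-up pmf (the values below are illustrative, not from the notes):

```python
# Sketch: computing E(g(X)) two ways for a discrete r.v.
# The pmf p_X here is a hypothetical example.
from collections import defaultdict

p_X = {-1: 0.25, 0: 0.25, 1: 0.25, 2: 0.25}  # pmf of X
g = lambda x: x * x                          # Y = g(X) = X^2

# Route 1: E(g(X)) directly from p_X, without ever finding p_Y
E_gX = sum(g(x) * p for x, p in p_X.items())

# Route 2: first derive p_Y, then E(Y) = sum_y y * p_Y(y)
p_Y = defaultdict(float)
for x, p in p_X.items():
    p_Y[g(x)] += p                           # collapse all x with g(x) = y
E_Y = sum(y * p for y, p in p_Y.items())

assert abs(E_gX - E_Y) < 1e-12               # Fundamental Theorem of Expectation
print(E_gX)  # 1.5
```

Route 1 is the practical one: it never requires deriving the distribution of Y.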
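The variance shortcut Var(X) = E(X²) − (E(X))² derived above can likewise be verified against the definition Var(X) = E[(X − E(X))²]. A small sketch with a hypothetical pmf:

```python
# Sketch: the two variance formulas agree.
# Hypothetical pmf for illustration (not from the notes).
p_X = {0: 0.5, 1: 0.3, 2: 0.2}

EX  = sum(x * p for x, p in p_X.items())        # first moment E(X)
EX2 = sum(x * x * p for x, p in p_X.items())    # second moment E(X^2)

var_def      = sum((x - EX) ** 2 * p for x, p in p_X.items())  # E[(X - E(X))^2]
var_shortcut = EX2 - EX ** 2                                   # E(X^2) - (E(X))^2

assert abs(var_def - var_shortcut) < 1e-9
assert abs(var_shortcut - 0.61) < 1e-9          # here: 1.1 - 0.49 = 0.61
```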
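Rows of the mean/variance table can be spot-checked by summing the pmf directly. A sketch for the Geom(p) row, truncating the infinite sums at a point where the tail is negligible (the truncation level K is my choice):

```python
# Sketch: checking the Geom(p) row of the table numerically.
# pmf (first-success convention): p_X(k) = (1-p)^(k-1) * p, k = 1, 2, ...
p = 0.3
K = 500  # truncation point; the tail mass beyond K is negligible here

pmf = [(1 - p) ** (k - 1) * p for k in range(1, K + 1)]
mean = sum(k * q for k, q in zip(range(1, K + 1), pmf))          # E(X)
second = sum(k * k * q for k, q in zip(range(1, K + 1), pmf))    # E(X^2)
var = second - mean ** 2                                         # shortcut formula

assert abs(mean - 1 / p) < 1e-9             # table: E(X) = 1/p
assert abs(var - (1 - p) / p ** 2) < 1e-9   # table: Var(X) = (1-p)/p^2
```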
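The infinite-expectation example (f_X(x) = 1/x² on [1, ∞)) can also be seen numerically: truncating the integral at an upper limit b gives roughly ln(b), which grows without bound as b increases. A rough sketch using midpoint-rule quadrature (the quadrature rule and step count are my choices, not from the notes):

```python
# Sketch: E(X) = ∫_1^∞ x * (1/x^2) dx = ∫_1^∞ dx/x grows like ln(b)
# when truncated at an upper limit b, so the expectation is infinite.
import math

def truncated_mean(b, n=100000):
    # Midpoint rule for ∫_1^b x * f_X(x) dx with f_X(x) = 1/x^2
    h = (b - 1) / n
    return sum(
        (1 + (i + 0.5) * h) * (1 / (1 + (i + 0.5) * h) ** 2)
        for i in range(n)
    ) * h

for b in (10, 100, 1000):
    print(b, truncated_mean(b))  # grows like ln(b): roughly 2.30, 4.61, 6.91
```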
This note was uploaded on 04/07/2010 for the course EE 278 taught by Professor Balaji Prabhakar during the Spring '09 term at Stanford.