
Often $E[g(Y)]$ is found using "tricks" tailored for a specific distribution. The word "kernel" means "essential part." Notice that if $f_Y(y)$ is a pdf, then
$$E[g(Y)] = \int_{-\infty}^{\infty} g(y) f(y|\theta)\, dy = \int_{\mathcal{Y}} g(y) f(y|\theta)\, dy.$$
Suppose that after algebra, it is found that
$$E[g(Y)] = a\, c(\theta) \int_{-\infty}^{\infty} k(y|\tau)\, dy$$
for some constant $a$, where $\tau \in \Theta$ and $\Theta$ is the parameter space. Then the kernel method says that
$$E[g(Y)] = a\, c(\theta) \int_{-\infty}^{\infty} \frac{c(\tau)}{c(\tau)}\, k(y|\tau)\, dy
= \frac{a\, c(\theta)}{c(\tau)} \underbrace{\int_{-\infty}^{\infty} c(\tau)\, k(y|\tau)\, dy}_{1}
= \frac{a\, c(\theta)}{c(\tau)}.$$

Similarly, if $f_Y(y)$ is a pmf, then
$$E[g(Y)] = \sum_{y:\, f(y)>0} g(y) f(y|\theta) = \sum_{y \in \mathcal{Y}} g(y) f(y|\theta)$$
where $\mathcal{Y} = \{y : f_Y(y) > 0\}$ is the support of $Y$. Suppose that after algebra, it is found that
$$E[g(Y)] = a\, c(\theta) \sum_{y \in \mathcal{Y}} k(y|\tau)$$
for some constant $a$, where $\tau \in \Theta$. Then the kernel method says that
$$E[g(Y)] = a\, c(\theta) \sum_{y \in \mathcal{Y}} \frac{c(\tau)}{c(\tau)}\, k(y|\tau)
= \frac{a\, c(\theta)}{c(\tau)} \underbrace{\sum_{y \in \mathcal{Y}} c(\tau)\, k(y|\tau)}_{1}
= \frac{a\, c(\theta)}{c(\tau)}.$$

The kernel method is often useful for finding $E[g(Y)]$, especially if $g(y) = y$, $g(y) = y^2$, or $g(y) = e^{ty}$. The kernel method is often easier than memorizing a trick specific to a distribution because it uses the same trick for every distribution: $\sum_{y \in \mathcal{Y}} f(y) = 1$ and $\int_{y \in \mathcal{Y}} f(y)\, dy = 1$. Of course sometimes tricks are needed to obtain the kernel $f(y|\tau)$ from $g(y) f(y|\theta)$; for example, complete the square for the normal (Gaussian) kernel.
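A minimal numerical sketch of the kernel method, assuming scipy and numpy are available and using arbitrary parameter values: for a gamma$(\nu, \lambda)$ pdf, the integrand of $E(Y)$ contains an extra factor of $y$ and is therefore the kernel of a gamma$(\nu+1, \lambda)$ distribution, so the kernel method gives $E(Y) = \lambda^{\nu+1}\Gamma(\nu+1)/[\lambda^{\nu}\Gamma(\nu)] = \nu\lambda$, which the sketch compares against direct numerical integration.

```python
# Sketch (not from the text): kernel method vs. numerical integration for E(Y)
# of a gamma(nu, lam) distribution. nu = 2.5, lam = 1.7 are arbitrary choices.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

nu, lam = 2.5, 1.7

def gamma_pdf(y):
    # gamma(nu, lam) pdf as written in Example 1.10: y^(nu-1) e^(-y/lam) / (lam^nu Gamma(nu))
    return y**(nu - 1) * np.exp(-y / lam) / (lam**nu * Gamma(nu))

# Direct numerical evaluation of E(Y) = integral of y * f(y) dy
direct, _ = quad(lambda y: y * gamma_pdf(y), 0, np.inf)

# Kernel method: the integrand y * f(y) contains the gamma(nu+1, lam) kernel,
# whose integral is lam^(nu+1) Gamma(nu+1), giving E(Y) = nu * lam
kernel_method = (lam**(nu + 1) * Gamma(nu + 1)) / (lam**nu * Gamma(nu))

print(direct, kernel_method)  # both approximately nu * lam = 4.25
```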
Example 1.10. To use the kernel method to find the mgf of a gamma$(\nu, \lambda)$ distribution, refer to Chapter 10 and note that
$$m(t) = E(e^{tY}) = \int_0^{\infty} e^{ty}\, \frac{y^{\nu-1} e^{-y/\lambda}}{\lambda^{\nu} \Gamma(\nu)}\, dy
= \frac{1}{\lambda^{\nu} \Gamma(\nu)} \int_0^{\infty} y^{\nu-1} \exp\!\left[-y\left(\frac{1}{\lambda} - t\right)\right] dy.$$
The integrand is the kernel of a gamma$(\nu, \eta)$ distribution with
$$\frac{1}{\eta} = \frac{1}{\lambda} - t = \frac{1 - \lambda t}{\lambda}
\quad \text{so} \quad \eta = \frac{\lambda}{1 - \lambda t}.$$
Now
$$\int_0^{\infty} y^{\nu-1} e^{-y/\lambda}\, dy = \frac{1}{c(\nu, \lambda)} = \lambda^{\nu} \Gamma(\nu).$$
Hence
$$m(t) = \frac{1}{\lambda^{\nu} \Gamma(\nu)} \int_0^{\infty} y^{\nu-1} \exp[-y/\eta]\, dy
= c(\nu, \lambda)\, \frac{1}{c(\nu, \eta)}
= \frac{1}{\lambda^{\nu} \Gamma(\nu)}\, \eta^{\nu} \Gamma(\nu)
= \left(\frac{\eta}{\lambda}\right)^{\nu}
= \left(\frac{1}{1 - \lambda t}\right)^{\nu}$$
for $t < 1/\lambda$.
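A minimal numerical check of the closed form in Example 1.10, assuming scipy and numpy are available; the values $\nu = 3$, $\lambda = 0.5$, $t = 0.8$ are arbitrary choices satisfying $t < 1/\lambda$.

```python
# Sketch (not from the text): compare direct numerical integration of E(e^{tY})
# for a gamma(nu, lam) distribution with the kernel-method result (1/(1 - lam*t))^nu.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

nu, lam, t = 3.0, 0.5, 0.8   # arbitrary values with t < 1/lam = 2

def integrand(y):
    # e^{ty} times the gamma(nu, lam) pdf
    return np.exp(t * y) * y**(nu - 1) * np.exp(-y / lam) / (lam**nu * Gamma(nu))

numeric, _ = quad(integrand, 0, np.inf)
closed_form = (1.0 / (1.0 - lam * t))**nu

print(numeric, closed_form)  # both approximately 4.63
```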
Example 1.11. The zeta$(\nu)$ distribution has probability mass function
$$f(y) = P(Y = y) = \frac{1}{\zeta(\nu)\, y^{\nu}}$$
where $\nu > 1$ and $y = 1, 2, 3, \ldots$. Here the zeta function
$$\zeta(\nu) = \sum_{y=1}^{\infty} \frac{1}{y^{\nu}}$$
for $\nu > 1$. Hence
$$E(Y) = \sum_{y=1}^{\infty} y\, \frac{1}{\zeta(\nu)} \frac{1}{y^{\nu}}
= \frac{\zeta(\nu - 1)}{\zeta(\nu)} \underbrace{\sum_{y=1}^{\infty} \frac{1}{\zeta(\nu - 1)} \frac{1}{y^{\nu-1}}}_{1 \,=\, \text{sum of zeta}(\nu-1) \text{ pmf}}
= \frac{\zeta(\nu - 1)}{\zeta(\nu)}$$
if $\nu > 2$. Similarly
$$E(Y^k) = \sum_{y=1}^{\infty} y^k\, \frac{1}{\zeta(\nu)} \frac{1}{y^{\nu}}
= \frac{\zeta(\nu - k)}{\zeta(\nu)} \underbrace{\sum_{y=1}^{\infty} \frac{1}{\zeta(\nu - k)} \frac{1}{y^{\nu-k}}}_{1 \,=\, \text{sum of zeta}(\nu-k) \text{ pmf}}
= \frac{\zeta(\nu - k)}{\zeta(\nu)}$$
if $\nu - k > 1$, that is, $\nu > k + 1$. Thus if $\nu > 3$, then
$$V(Y) = E(Y^2) - [E(Y)]^2 = \frac{\zeta(\nu - 2)}{\zeta(\nu)} - \left[\frac{\zeta(\nu - 1)}{\zeta(\nu)}\right]^2.$$
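A minimal numerical check of the zeta$(\nu)$ moment formulas, assuming scipy and numpy are available; the value $\nu = 4.5$ and the truncation point of the pmf sum are arbitrary choices.

```python
# Sketch (not from the text): compare truncated sums of E(Y) and V(Y) for the zeta(nu)
# pmf with the kernel-method formulas zeta(nu-1)/zeta(nu) and
# zeta(nu-2)/zeta(nu) - [zeta(nu-1)/zeta(nu)]^2.
import numpy as np
from scipy.special import zeta

nu = 4.5                          # arbitrary value with nu > 3
y = np.arange(1, 200_000)         # truncated support; the omitted tail is negligible here
pmf = 1.0 / (zeta(nu) * y**nu)

EY_sum  = np.sum(y * pmf)
EY2_sum = np.sum(y**2 * pmf)
V_sum   = EY2_sum - EY_sum**2

EY_formula = zeta(nu - 1) / zeta(nu)
V_formula  = zeta(nu - 2) / zeta(nu) - (zeta(nu - 1) / zeta(nu))**2

print(EY_sum, EY_formula)   # both roughly 1.068
print(V_sum, V_formula)     # both roughly 0.131
```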
