
If we have two composite hypotheses $H_0: \theta \in \Omega_0$ and $H_1: \theta \in \Omega_1$, then a prior distribution for $\theta$ must be specified under each hypothesis. We denote these by $\pi_0(\theta \mid H_0)$ and $\pi_1(\theta \mid H_1)$. In this case the posterior odds are
\[
\frac{P(H_0 \mid x)}{P(H_1 \mid x)} = \frac{P(H_0)}{P(H_1)} \cdot B
\]
where $B$ is the Bayes factor given by
\[
B = \frac{\int_{\Omega_0} f(x; \theta)\, \pi_0(\theta \mid H_0)\, d\theta}{\int_{\Omega_1} f(x; \theta)\, \pi_1(\theta \mid H_1)\, d\theta}.
\]
For the hypotheses $H_0: \theta = \theta_0$ and $H_1: \theta \neq \theta_0$, the Bayes factor is
\[
B = \frac{f(x; \theta_0)}{\int_{\theta \neq \theta_0} f(x; \theta)\, \pi_1(\theta \mid H_1)\, d\theta}.
\]

4.6.1 Problem

Suppose $(X_1, \ldots, X_n)$ is a random sample from a POI($\theta$) distribution and we wish to test $H_0: \theta = \theta_0$ against $H_1: \theta \neq \theta_0$. Find the Bayes factor if under $H_1$ the prior distribution for $\theta$ is the conjugate prior.
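A sketch of the calculation for Problem 4.6.1, assuming the conjugate prior under $H_1$ is the Gamma density $\pi_1(\theta \mid H_1) = \beta^\alpha \theta^{\alpha - 1} e^{-\beta\theta} / \Gamma(\alpha)$ (the rate parameterization of GAM($\alpha, \beta$) is an assumption here). Since the prior is continuous, the point $\theta_0$ carries no prior mass, so the integral over $\theta \neq \theta_0$ equals the integral over $(0, \infty)$. Writing $t = \sum_{i=1}^n x_i$, the likelihood is $f(x; \theta) = e^{-n\theta} \theta^{t} / \prod_{i=1}^n x_i!$, so
\[
\int_0^\infty f(x; \theta)\, \pi_1(\theta \mid H_1)\, d\theta
= \frac{\beta^\alpha}{\Gamma(\alpha) \prod_{i=1}^n x_i!} \int_0^\infty \theta^{t + \alpha - 1} e^{-(n + \beta)\theta}\, d\theta
= \frac{\beta^\alpha\, \Gamma(t + \alpha)}{\Gamma(\alpha) \prod_{i=1}^n x_i!\, (n + \beta)^{t + \alpha}}
\]
and therefore
\[
B = \frac{f(x; \theta_0)}{\int_0^\infty f(x; \theta)\, \pi_1(\theta \mid H_1)\, d\theta}
= \frac{e^{-n\theta_0}\, \theta_0^{t}\, \Gamma(\alpha)\, (n + \beta)^{t + \alpha}}{\beta^\alpha\, \Gamma(t + \alpha)}.
\]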
Chapter 5

Appendix

5.1 Inequalities and Useful Results

5.1.1 Hölder's Inequality

Suppose $X$ and $Y$ are random variables and $p$ and $q$ are positive numbers satisfying
\[
\frac{1}{p} + \frac{1}{q} = 1.
\]
Then
\[
|E(XY)| \le E(|XY|) \le [E(|X|^p)]^{1/p}\, [E(|Y|^q)]^{1/q}.
\]
Letting $Y = 1$ we have
\[
E(|X|) \le [E(|X|^p)]^{1/p}, \quad p > 1.
\]

5.1.2 Covariance Inequality

If $X$ and $Y$ are random variables with variances $\sigma_1^2$ and $\sigma_2^2$ respectively, then
\[
[\operatorname{Cov}(X, Y)]^2 \le \sigma_1^2 \sigma_2^2.
\]

5.1.3 Chebyshev's Inequality

If $X$ is a random variable with $E(X) = \mu$ and $\operatorname{Var}(X) = \sigma^2 < \infty$, then
\[
P(|X - \mu| \ge k) \le \frac{\sigma^2}{k^2}
\]
for any $k > 0$.
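Two added remarks connecting these results (not in the original text): the covariance inequality is Hölder's inequality with $p = q = 2$ (the Cauchy–Schwarz case) applied to the centered variables $X - E(X)$ and $Y - E(Y)$, and taking $k = c\sigma$ in Chebyshev's inequality gives the familiar scale-free form
\[
P(|X - \mu| \ge c\sigma) \le \frac{1}{c^2}, \quad c > 0,
\]
so, for example, at most $1/4$ of the probability can lie more than two standard deviations from the mean.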
5.1.4 Jensen's Inequality

If $X$ is a random variable and $g(x)$ is a convex function, then
\[
E[g(X)] \ge g[E(X)].
\]

5.1.5 Corollary

If $X$ is a non-degenerate random variable and $g(x)$ is a strictly convex function, then
\[
E[g(X)] > g[E(X)].
\]

5.1.6 Stirling's Formula

For large $n$,
\[
\Gamma(n + 1) \approx \sqrt{2\pi}\, n^{n + 1/2} e^{-n}.
\]

5.1.7 Matrix Differentiation

Suppose $x = (x_1, \ldots, x_k)^T$, $b = (b_1, \ldots, b_k)^T$ and $A$ is a $k \times k$ symmetric matrix. Then
\[
\frac{\partial}{\partial x}\left(x^T b\right) = \left[ \frac{\partial}{\partial x_1}\left(x^T b\right), \ldots, \frac{\partial}{\partial x_k}\left(x^T b\right) \right]^T = b
\]
and
\[
\frac{\partial}{\partial x}\left(x^T A x\right) = \left[ \frac{\partial}{\partial x_1}\left(x^T A x\right), \ldots, \frac{\partial}{\partial x_k}\left(x^T A x\right) \right]^T = 2Ax.
\]

5.2 Distributional Results

5.2.1 Functions of Random Variables

Univariate One-to-One Transformation

Suppose $X$ is a continuous random variable with p.d.f. $f(x)$ and support set $A$. Let $Y = h(X)$ be a real-valued, one-to-one function from $A$ to $B$. Then the probability density function of $Y$ is
\[
g(y) = f\left(h^{-1}(y)\right) \cdot \left| \frac{d}{dy} h^{-1}(y) \right|, \quad y \in B.
\]
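A concrete check of the formula in 5.1.7, added for illustration: take $k = 2$ and a symmetric matrix $A = \begin{pmatrix} a & c \\ c & d \end{pmatrix}$, so that $x^T A x = a x_1^2 + 2c x_1 x_2 + d x_2^2$. Differentiating coordinate-wise,
\[
\frac{\partial}{\partial x}\left(x^T A x\right) = \begin{pmatrix} 2a x_1 + 2c x_2 \\ 2c x_1 + 2d x_2 \end{pmatrix} = 2Ax,
\]
as claimed. The symmetry of $A$ matters here: for a general (non-symmetric) $A$ the gradient is $(A + A^T)x$, which reduces to $2Ax$ when $A = A^T$.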
Multivariate One-to-One Transformation

Suppose $(X_1, \ldots, X_n)$ is a vector of random variables with joint p.d.f. $f(x_1, \ldots, x_n)$ and support set $R_X$. Suppose the transformation $S$ defined by
\[
U_i = h_i(X_1, \ldots, X_n), \quad i = 1, \ldots, n
\]
is a one-to-one, real-valued transformation with inverse transformation
\[
X_i = w_i(U_1, \ldots, U_n), \quad i = 1, \ldots, n.
\]
Suppose also that $S$ maps $R_X$ into $R_U$. Then $g(u_1, \ldots, u_n)$, the joint p.d.f. of $(U_1, \ldots, U_n)$, is given by
\[
g(u) = f(w_1(u), \ldots, w_n(u)) \left| \frac{\partial(x_1, \ldots, x_n)}{\partial(u_1, \ldots, u_n)} \right|, \quad (u_1, \ldots, u_n) \in R_U
\]
where
\[
\frac{\partial(x_1, \ldots, x_n)}{\partial(u_1, \ldots, u_n)} = \begin{vmatrix} \dfrac{\partial x_1}{\partial u_1} & \cdots & \dfrac{\partial x_1}{\partial u_n} \\ \vdots & & \vdots \\ \dfrac{\partial x_n}{\partial u_1} & \cdots & \dfrac{\partial x_n}{\partial u_n} \end{vmatrix}.
\]
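A short worked example of the multivariate formula, added here for illustration: let $X_1$ and $X_2$ be independent EXP(1) random variables and set $U_1 = X_1 + X_2$, $U_2 = X_1/(X_1 + X_2)$. The inverse transformation is $x_1 = u_1 u_2$, $x_2 = u_1(1 - u_2)$, with Jacobian
\[
\frac{\partial(x_1, x_2)}{\partial(u_1, u_2)} = \begin{vmatrix} u_2 & u_1 \\ 1 - u_2 & -u_1 \end{vmatrix} = -u_1,
\]
so
\[
g(u_1, u_2) = e^{-u_1 u_2}\, e^{-u_1(1 - u_2)} \left| -u_1 \right| = u_1 e^{-u_1}, \quad u_1 > 0, \; 0 < u_2 < 1.
\]
Hence $U_1$ has a Gamma(2, 1) distribution, $U_2$ is uniform on $(0, 1)$, and the two are independent since the joint density factors.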
