ece830_fall11_lecture10.pdf


This ratio compares the average model in $H_1$ (with respect to the prior $p(\mu)$) with the $H_0$ model. The integral in the numerator is easy to compute. Note that under $H_1$, with the prior $\mu \sim \mathcal{N}(0, \sigma^2)$, we have $X = \mu + W$ where $W \sim \mathcal{N}(0, 1)$; so $X$ is the sum of two independent Gaussian random variables and its distribution is $X \sim \mathcal{N}(0, 1 + \sigma^2)$. The Bayes Factor is therefore
\[
\Lambda_{BF}(x) \;=\; \frac{\tfrac{1}{\sqrt{2\pi(1+\sigma^2)}} \exp\!\big(-\tfrac{x^2}{2(1+\sigma^2)}\big)}{\tfrac{1}{\sqrt{2\pi}} \exp(-x^2/2)} .
\]
Taking the log and absorbing constant factors and terms into the threshold yields the test
\[
x^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma ,
\]
which again is equivalent to the Wald test.

1.5 GLRT and Bayes Factors

Consider a composite hypothesis test of the form
\[
H_0 : X \sim p_0(x \mid \theta_0), \quad \theta_0 \in \Theta_0
\]
\[
H_1 : X \sim p_1(x \mid \theta_1), \quad \theta_1 \in \Theta_1 .
\]
The general forms for the GLRT and Bayes Factor are as follows.

GLRT:
\[
\frac{\max_{\theta_1 \in \Theta_1} p_1(x \mid \theta_1)}{\max_{\theta_0 \in \Theta_0} p_0(x \mid \theta_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma .
\]

Bayes Factor: Assume $\theta_0 \sim p(\theta_0)$ and $\theta_1 \sim p(\theta_1)$, two different prior probability distributions. The Bayes Factor is
\[
\frac{\int_{\Theta_1} p_1(x \mid \theta_1)\, p(\theta_1)\, d\theta_1}{\int_{\Theta_0} p_0(x \mid \theta_0)\, p(\theta_0)\, d\theta_0} .
\]

The GLRT compares the best model in $H_1$ to the best model in $H_0$, and the Bayes Factor compares the average model in $H_1$ to the average model in $H_0$, with respect to the specified prior probability distributions.
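Before moving to a vector-valued example, here is a minimal numerical sketch in Python of the scalar Bayes factor test above; the prior variance sigma2 and the test points are illustrative assumptions, not values from the notes.

import numpy as np
from scipy.stats import norm

def bayes_factor(x, sigma2):
    # Bayes factor for H1: X = mu + W with prior mu ~ N(0, sigma2) and W ~ N(0, 1),
    # versus H0: X ~ N(0, 1).  Marginalizing over mu gives X ~ N(0, 1 + sigma2)
    # under H1, so Lambda_BF(x) is the ratio of the two marginal densities at x.
    return norm.pdf(x, scale=np.sqrt(1.0 + sigma2)) / norm.pdf(x, scale=1.0)

# log Lambda_BF(x) = x^2 * sigma2 / (2 * (1 + sigma2)) - 0.5 * log(1 + sigma2),
# an increasing function of x^2, so thresholding Lambda_BF(x) is equivalent to
# thresholding x^2, as in the Wald-type test above.
sigma2 = 4.0
for x in [0.5, 1.0, 2.0, 3.0]:
    print(f"x = {x:.1f}   x^2 = {x**2:5.2f}   Lambda_BF = {bayes_factor(x, sigma2):.3f}")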

Example 7
\[
H_0 : x \sim \mathcal{N}(0, \sigma^2 I), \quad \sigma^2 > 0 \text{ known}
\]
\[
H_1 : x \sim \mathcal{N}(H\theta, \sigma^2 I), \quad \theta \in \mathbb{R}^k \text{ unknown}, \quad H = [h_1, h_2, \cdots, h_k] \ \text{an } n \times k \text{ known matrix}
\]
For a fixed $\theta$, the log likelihood ratio is
\[
\log \Lambda(x) = -\frac{1}{2\sigma^2}\big( (x - H\theta)^T (x - H\theta) - x^T x \big)
= -\frac{1}{2\sigma^2}\big( -2\theta^T H^T x + \theta^T H^T H \theta \big)
\;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma' ,
\]
and absorbing the scale factor and the term that does not depend on $x$ into the threshold gives
\[
\theta^T H^T x \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma \quad \text{(not computable without knowledge of } \theta\text{)} .
\]
Recall that $H_1 : x \sim \mathcal{N}(H\theta, \sigma^2 I)$, $\theta \in \mathbb{R}^k$, or equivalently $x \sim p_1$ with $p_1 \in \{\mathcal{N}(H\theta, \sigma^2 I)\}_{\theta \in \mathbb{R}^k}$; we want to pick the $p_1$ in $\{\mathcal{N}(H\theta, \sigma^2 I)\}$ that matches $x$ best. The likelihood is
\[
p(x \mid \theta, H_1) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\Big( -\frac{1}{2\sigma^2} (x - H\theta)^T (x - H\theta) \Big) .
\]
Finding the $\theta$ that maximizes the probability of observing $x$ is a least-squares problem:
\[
\hat{\theta} = \arg\min_{\theta \in \mathbb{R}^k} \underbrace{(x - H\theta)^T (x - H\theta)}_{\|x - H\theta\|^2} = (H^T H)^{-1} H^T x .
\]
Plugging $\hat{\theta}$ into the test statistic $\theta^T H^T x$, we have
\[
\hat{\theta}^T H^T x = x^T H (H^T H)^{-1} H^T x \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma .
\]
This is the so-called Generalized LRT (GLRT), and under $H_0$ the statistic, normalized by $\sigma^2$, is chi-square distributed with $k$ degrees of freedom ($k$ being the dimension of the subspace spanned by the columns of $H$); this distribution is denoted $\chi^2_k$. In lecture we also showed that using the prior $\theta \sim \mathcal{N}(0, \alpha^2 I)$ and computing the Bayes Factor yields the same test. The test computes the energy in the signal subspace, and if the energy is large enough, then $H_1$ is accepted.
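A minimal numerical sketch of this subspace-energy GLRT in Python follows; the matrix H, the noise variance sigma2, the particular theta used to generate an H1 sample, and the false-alarm level alpha are illustrative assumptions, not values from the notes.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, k = 20, 3
sigma2 = 1.0
H = rng.standard_normal((n, k))        # known n-by-k matrix, full column rank

# Projection onto the column space (signal subspace) of H.
P = H @ np.linalg.solve(H.T @ H, H.T)

def glrt_statistic(x):
    # t(x) = x^T H (H^T H)^{-1} H^T x / sigma2; under H0 this is chi^2 with k d.o.f.
    return x @ (P @ x) / sigma2

alpha = 0.05
gamma = chi2.ppf(1 - alpha, df=k)      # threshold giving false-alarm probability alpha

x0 = np.sqrt(sigma2) * rng.standard_normal(n)                                   # sample from H0
x1 = H @ np.array([1.0, -0.5, 2.0]) + np.sqrt(sigma2) * rng.standard_normal(n)  # sample from H1

for name, x in [("H0 sample", x0), ("H1 sample", x1)]:
    t = glrt_statistic(x)
    print(f"{name}: t = {t:6.2f}  threshold = {gamma:.2f}  decide {'H1' if t > gamma else 'H0'}")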
