MIT OpenCourseWare
http://ocw.mit.edu

14.30 Introduction to Statistical Methods in Economics
Spring 2009

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

14.30 Introduction to Statistical Methods in Economics
Lecture Notes 22
Konrad Menzel
May 7, 2009

Proposition 1 (Neyman-Pearson Lemma). In testing $f_0$ against $f_A$ (where both $H_0$ and $H_A$ are simple hypotheses), the critical region

$$C(k) = \left\{ x : \frac{f_0(x)}{f_A(x)} < k \right\}$$

is most powerful for any choice of $k \geq 0$. Note that the choice of $k$ depends on the specified significance level $\alpha$ of the test.

This means that the most powerful test rejects if, for the sample $X_1, \dots, X_n$, the likelihood ratio

$$r(X_1, \dots, X_n) = \frac{f_0(X_1, \dots, X_n)}{f_A(X_1, \dots, X_n)}$$

is low, i.e. the data are much more likely to have been generated under $H_A$.

[Figure: the densities $f_0(x)$ and $f_A(x)$, with the rejection region $C(k)$ bounded by the point where $r(x) = k$. Image by MIT OpenCourseWare.]

The most powerful test given in the Neyman-Pearson Lemma explicitly solves the trade-off between size

$$\alpha = P(\text{reject} \mid H_0) = \int_{C(k)} f_0(x) \, dx$$

and power

$$1 - \beta = P(\text{reject} \mid H_A) = \int_{C(k)} f_A(x) \, dx$$
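The calibration of $k$ to a target size $\alpha$ can be sketched numerically. The example below is not from the notes: it tests $H_0: X \sim N(0,1)$ against $H_A: X \sim N(1,1)$ with a single observation, and finds $k$ by Monte Carlo as the $\alpha$-quantile of $r(X)$ under $H_0$.

```python
import math
import random

random.seed(0)

# Illustrative sketch (not from the notes): simple-vs-simple test
# H_0: X ~ N(0, 1)  against  H_A: X ~ N(1, 1), one observation.
def pdf_normal(x, mu):
    return math.exp(-(x - mu) ** 2 / 2) / math.sqrt(2 * math.pi)

def likelihood_ratio(x):
    # r(x) = f_0(x) / f_A(x); here this equals exp(1/2 - x),
    # which is decreasing in x.
    return pdf_normal(x, 0.0) / pdf_normal(x, 1.0)

# Calibrate k so that P(r(X) < k | H_0) = alpha, by simulating
# under H_0 and taking the alpha-quantile of the ratio.
alpha = 0.05
draws = sorted(likelihood_ratio(random.gauss(0, 1)) for _ in range(100_000))
k = draws[int(alpha * len(draws))]

# The Neyman-Pearson test rejects when r(x) < k. Since r is
# decreasing in x here, this is equivalent to x > c for some cutoff c,
# i.e. the familiar one-sided z-test (c ≈ 1.645 for alpha = 0.05).
print(f"k = {k:.3f}")
print("reject at x = 2.0:", likelihood_ratio(2.0) < k)
print("reject at x = 0.0:", likelihood_ratio(0.0) < k)
```

Because $r(x) = e^{1/2 - x}$ is monotone, the exact $k$ is $e^{0.5 - 1.645} \approx 0.32$; the Monte Carlo estimate should land close to that.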
at every point $x$ in the sample space (where the integrals are over many dimensions, e.g. typically $x \in \mathbb{R}^n$). From the expressions for $\alpha$ and $1 - \beta$ we can see that the likelihood ratio $\frac{f_0(x)}{f_A(x)}$ gives the "price" of including $x$ in the critical region: how much we "pay" in terms of size $\alpha$ relative to the gain in power from including the point in $C(k)$. Therefore, we should start constructing the critical region by including the "cheapest" points $x$, i.e. those with a small likelihood ratio. Then we can go down the list of $x$ ordered according to the likelihood ratio and continue including more points until the size $\alpha$ of the test reaches the desired level.

Example 1. A criminal defendant (D) is on trial for a purse snatching. In order to convict, the jury must believe that there is a 95% chance that the charge is true. There are three potential pieces of evidence the prosecutor may or may not have been able to produce, and in a given case the jury decides whether to convict based only on which of the three clues it is presented with. Below are the potential pieces of evidence, assumed to be mutually independent, together with the probability of observing each piece given that the defendant is guilty and given that he is not guilty:

| Evidence                              | guilty | not guilty | likelihood ratio |
|---------------------------------------|--------|------------|------------------|
| 1. D ran when he saw police coming    | 0.6    | 0.3        | 1/2              |
| 2. D has no alibi                     | 0.9    | 0.3        | 1/3              |
| 3. Empty purse found near D's home    | 0.4    | 0.1        | 1/4              |

In the notation of the Neyman-Pearson Lemma, $x$ can be any of the $2^3$ possible combinations of pieces of evidence. Using the assumption of independence, we can therefore list all possible combinations of clues with their respective likelihood under each hypothesis and the likelihood ratio. I already ordered the list by the likelihood ratios in the third column. In the last column, I added

$$\alpha(k) = \sum_{x : r(x) \leq k} f_0(x),$$

the cumulative sum over the ordered list of combinations $x$.
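The enumeration described above can be carried out directly. The following sketch builds all $2^3$ evidence patterns from the table's probabilities (taking $H_0$ to be "not guilty" and $H_A$ to be "guilty"), orders them by the likelihood ratio $f_0/f_A$, and accumulates $\alpha(k)$:

```python
from itertools import product

# P(clue observed | guilty) and P(clue observed | not guilty),
# from the table above; clues are assumed mutually independent.
p_guilty     = [0.6, 0.9, 0.4]
p_not_guilty = [0.3, 0.3, 0.1]

rows = []
for combo in product([1, 0], repeat=3):   # the 2^3 evidence patterns
    f_A = 1.0   # likelihood under H_A (guilty)
    f_0 = 1.0   # likelihood under H_0 (not guilty)
    for present, pg, pn in zip(combo, p_guilty, p_not_guilty):
        f_A *= pg if present else 1 - pg
        f_0 *= pn if present else 1 - pn
    rows.append((combo, f_0, f_A, f_0 / f_A))

# Order by likelihood ratio: the "cheapest" points (lowest f_0/f_A)
# enter the critical region first.
rows.sort(key=lambda r: r[3])

alpha = 0.0
for combo, f_0, f_A, ratio in rows:
    alpha += f_0   # cumulative size alpha(k) if we include this point
    print(combo, f"f0={f_0:.4f}", f"fA={f_A:.4f}",
          f"ratio={ratio:.4f}", f"alpha={alpha:.4f}")
```

The cheapest point is all three clues together, with ratio $(1/2)(1/3)(1/4) = 1/24$ and $f_0 = 0.009$, so the critical region is built up starting from that combination.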
