Lecture 24

24.1 Goodness-of-fit test.

Suppose that we observe an i.i.d. sample $X_1, \ldots, X_n$ of random variables that can take a finite number of values $B_1, \ldots, B_r$ with some probabilities $p_1, \ldots, p_r$ unknown to us. Suppose that we have a theory (or a guess) that these probabilities are equal to some particular $p_1^\circ, \ldots, p_r^\circ$, and we want to test it. This means that we want to test the hypotheses
$$
H_1 : p_i = p_i^\circ \ \text{for all } i = 1, \ldots, r, \qquad
H_2 : \text{otherwise, i.e. } p_i \neq p_i^\circ \ \text{for some } i.
$$
If the first hypothesis is true, then the main result from the previous lecture tells us that we have the following convergence in distribution:
$$
T = \sum_{i=1}^{r} \frac{(\nu_i - np_i^\circ)^2}{np_i^\circ} \to \chi^2_{r-1},
$$
where $\nu_i = \#\{X_j : X_j = B_i\}$. On the other hand, if $H_2$ holds, then $p_i \neq p_i^\circ$ for some index $i$, and the statistic $T$ will behave very differently. If $p_i$ is the true probability $\mathbb{P}(X_1 = B_i)$, then by the CLT (see the previous lecture)
$$
\frac{\nu_i - np_i}{\sqrt{np_i}} \to N(0, 1 - p_i).
$$
If we write
$$
\frac{\nu_i - np_i^\circ}{\sqrt{np_i^\circ}}
= \frac{\nu_i - np_i + n(p_i - p_i^\circ)}{\sqrt{np_i^\circ}}
= \sqrt{\frac{p_i}{p_i^\circ}}\,\frac{\nu_i - np_i}{\sqrt{np_i}} + \sqrt{n}\,\frac{p_i - p_i^\circ}{\sqrt{p_i^\circ}},
$$
then the first term converges in distribution (to a constant multiple of $N(0, 1 - p_i)$), but the second term converges to plus or minus infinity, since $p_i \neq p_i^\circ$. Therefore,
$$
\frac{(\nu_i - np_i^\circ)^2}{np_i^\circ} \to +\infty.
$$
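The statistic $T$ above is easy to compute directly. The following is a minimal sketch in Python; the categories, the hypothesized probabilities, and the sample size are hypothetical choices for illustration, not from the lecture. The 95% quantile of $\chi^2_2$ used as the rejection threshold, $\approx 5.991$, comes from a standard chi-squared table.

```python
import numpy as np

# Hypothetical experiment with r = 3 categories B_1, B_2, B_3
# (all specific numbers here are illustrative assumptions).
rng = np.random.default_rng(0)
p0 = np.array([0.3, 0.3, 0.4])   # hypothesized probabilities p_i^o
n = 1000

# Draw a sample for which H_1 is actually true (true p = p0).
sample = rng.choice(3, size=n, p=p0)

# Counts nu_i = #{X_j : X_j = B_i}.
nu = np.bincount(sample, minlength=3)

# Goodness-of-fit statistic T = sum_i (nu_i - n p_i^o)^2 / (n p_i^o).
T = np.sum((nu - n * p0) ** 2 / (n * p0))

# Under H_1, T is approximately chi^2 with r - 1 = 2 degrees of freedom;
# the 95% quantile of chi^2_2 is about 5.991, so we reject H_1 when T
# exceeds that threshold.
reject = T > 5.991
print(T, reject)
```

Under $H_2$, the argument in the notes shows $T \to +\infty$, so for large $n$ the test rejects with probability approaching one; rerunning the sketch with the true probabilities different from `p0` would make `T` grow roughly linearly in `n`.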
Spring '09, Dmitry Panchenko