Lecture 23

23.1 Pearson's theorem.

Today we will prove one result from probability that will be useful in several statistical tests. Let us consider $r$ boxes $B_1, \dots, B_r$ as in Figure 23.1.

[Figure 23.1: boxes $B_1, B_2, \dots, B_r$.]

Assume that we throw $n$ balls $X_1, \dots, X_n$ into these boxes randomly and independently of each other with probabilities
\[
\mathbb{P}(X_i \in B_1) = p_1, \ \dots, \ \mathbb{P}(X_i \in B_r) = p_r,
\]
where the probabilities add up to one, $p_1 + \dots + p_r = 1$. Let $\nu_j$ be the number of balls in the $j$th box:
\[
\nu_j = \#\{\text{balls } X_1, \dots, X_n \text{ in the box } B_j\} = \sum_{l=1}^{n} I(X_l \in B_j).
\]
On average, the number of balls in the $j$th box is $np_j$, so the random variable $\nu_j$ should be close to $np_j$. One can also use the Central Limit Theorem to describe how close $\nu_j$ is to $np_j$. The next result tells us how to describe, in some sense, the closeness of $\nu_j$ to $np_j$ simultaneously for all $j \le r$. The main difficulty in this theorem comes from the fact that the random variables $\nu_j$ for $j \le r$ are not independent; for example, because the total number of balls is equal to $n$,
\[
\nu_1 + \dots + \nu_r = n,
\]
i.e. if we know the counts in $r - 1$ of the boxes we will automatically know the count in the last box.

Theorem. The random variable
\[
\sum_{j=1}^{r} \frac{(\nu_j - np_j)^2}{np_j} \ \to\ \chi^2_{r-1}
\]
converges in distribution to the $\chi^2_{r-1}$ distribution with $r - 1$ degrees of freedom.
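The theorem is easy to check by simulation. The sketch below (a minimal illustration assuming NumPy; the values of r, p, n, and reps are arbitrary choices, not from the lecture) draws many multinomial samples of box counts, computes Pearson's statistic for each, and compares its sample moments with those of a chi-squared distribution with r - 1 degrees of freedom, which has mean r - 1 and variance 2(r - 1).

```python
import numpy as np

rng = np.random.default_rng(0)

r = 4                                 # number of boxes
p = np.array([0.1, 0.2, 0.3, 0.4])    # box probabilities, summing to one
n = 5_000                             # balls thrown per experiment
reps = 2_000                          # number of independent experiments

# Each row of nu is one multinomial sample (nu_1, ..., nu_r): the counts
# obtained by throwing n balls into the r boxes with probabilities p.
nu = rng.multinomial(n, p, size=reps)

# Pearson's statistic for each experiment:
#   T = sum_j (nu_j - n p_j)^2 / (n p_j)
T = ((nu - n * p) ** 2 / (n * p)).sum(axis=1)

# By the theorem, T is approximately chi^2 with r - 1 = 3 degrees of
# freedom, so the sample mean of T should be near 3 and the sample
# variance near 2(r - 1) = 6.
print(T.mean())
print(T.var())
```

Note that the mean actually matches exactly for every finite $n$: since $\mathbb{E}(\nu_j - np_j)^2 = np_j(1 - p_j)$, each term of $T$ has expectation $1 - p_j$, and these sum to $r - 1$. The convergence in distribution is what the theorem adds.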
This note was uploaded on 10/11/2009 for the course STATISTICS 18.443 taught by Professor Dmitrypanchenko during the Spring '09 term at MIT.