Chapter 20: The Normal Sampling Theorem and Its First Four Applications
A. Revision of Point Estimation

The three main areas of statistical inference are:
(i) Point estimation
(ii) Interval estimation
(iii) Tests of hypotheses

In Chapters 14 to 19 we dealt with some very basic problems of point estimation. Note that we only indicated the existence of the parameters α, β and ρ corresponding to a, b and r in the above list; we do not intend to go deeper into the matter, as that goes beyond the scope of this book.

From this chapter on, we shall look at interval estimation and tests of hypotheses, as well as problems such as the determination of sample sizes. For these we first need a basic theorem, on which useful concepts and methods can be developed.

B1. Normal Sampling Theorem
Recall from Chapter 9 Part D (the sum of two independent Normal variables):

Theorem (Part 1): For independent random variables
X ~ N(µ₁, σ₁²) and Y ~ N(µ₂, σ₂²),
we have
X + Y ~ N(µ₁ + µ₂, σ₁² + σ₂²).

Let us extend this idea.
See Assignment 3 Q5 from the textbook.

Problem: A lift has the following specifications: max load 850 kg; capacity 12 persons. The weights of passengers have a N(62.5, 5²) distribution. On a certain trip, 13 passengers squeeze into the lift. What is the probability that it will be overloaded?

Solution: Let Xᵢ = the weight of passenger i (in kg), so
Xᵢ ~ N(µᵢ = 62.5, σᵢ² = 5² = 25).

Are we interested in P(X ≥ 850)? No! We need P(sum of the 13 passengers' weights ≥ 850).

Let us define another random variable,
T₁₃ = X₁ + X₂ + ... + X₁₃.
We need to find P(T₁₃ ≥ 850); thus we need to know the distribution of T₁₃. By the Normal Sampling Theorem, Part 1:
T₁₃ ~ N(µ₁₃ = 13 × 62.5 = 812.5, σ₁₃² = 13 × 25 = 325).

Thus
P(T₁₃ ≥ 850) = P(Z ≥ (850 − 812.5)/√325) = P(Z ≥ 2.08) = 0.5 − 0.4812 = 0.0188.

(Using the calculator: Mode, 0, 5, FMLA, x = 2.08 gives 0.48124.)

Thus, the probability that the lift will be overloaded is 1.88%.
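As a quick numerical check of this calculation, a sketch in Python; the exact tail probability is computed with the complementary error function rather than a Normal table:

```python
from math import sqrt, erfc

# T13 = sum of 13 independent N(62.5, 5^2) passenger weights
mu_T = 13 * 62.5          # 812.5
var_T = 13 * 25           # 325

z = (850 - mu_T) / sqrt(var_T)         # standardize: about 2.08
p_overload = 0.5 * erfc(z / sqrt(2))   # P(Z >= z) for a standard Normal
print(round(p_overload, 4))            # 0.0188
```

This agrees with the table-based answer of 1.88%.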
To summarize:

Normal Sampling Theorem, Part 1
The sum of n independent observations Xᵢ ~ N(µ_X, σ_X²) taken from the population,
T_n = X₁ + X₂ + ... + X_n,
has a Normal distribution, namely
T_n ~ N(µ_n = nµ_X, σ_n² = nσ_X²).
It follows that
(T_n − nµ_X) / √(nσ_X²) = Z ~ N(0, 1).

B2. Normal Sampling Theorem (continued)
Suppose the population from which we sample has a Normal distribution with mean µ_X and variance σ_X²; in other words, Population ~ N(µ_X, σ_X²).

If we take a sample of size n, we can compute its mean, x̄. The value of x̄ is not fixed; it varies from sample to sample. Imagine that many samples of size n are taken (with replacement) from the population. The many sample means obtained, x̄_A, x̄_B, ..., x̄_K, ..., form a sample-mean space, which has a certain distribution, called the sampling distribution of the mean.

What, then, is the distribution of the sample mean X̄?

Normal Sampling Theorem (N.S.T.)
The sample means x̄ of samples of size n taken from a population with Xᵢ ~ N(µ_X, σ_X²) have, exactly, the distribution
X̄ ~ N(µ_X̄ = µ_X, σ_X̄² = σ_X²/n).
More commonly it is written as X̄ ~ N(µ, σ²/n).

What does this mean? The distribution of the sample mean X̄:
(1) follows a Normal model;
(2) has centre µ_X, i.e. its centre is identical to the population centre;
(3) has a variance that is only 1/n of the population variance.

Proof (the idea): By Part 1,
X₁ + X₂ + ... + X_n ~ N(nµ_X, nσ_X²),
so, dividing by n,
X̄ = (X₁ + ... + X_n)/n ~ N(nµ_X/n, nσ_X²/n²) = N(µ_X, σ_X²/n).

This idea can be extended to give:

Central Limit Theorem (C.L.T.)
The sample means x̄ of samples of size n taken from any population with mean µ_X and variance σ_X² have, approximately, the distribution
X̄ ~ N(µ_X̄ = µ_X, σ_X̄² = σ_X²/n).
More commonly it is written as X̄ ~ N(µ, σ²/n), approximately.
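The C.L.T. can be illustrated by simulation: draw many samples of size n from a clearly non-Normal population and observe that the sample means centre on µ with variance close to σ²/n. A sketch in Python (the exponential population, n = 30 and 20,000 repetitions are arbitrary illustrative choices, not from the text):

```python
import random

random.seed(1)  # reproducible illustration

n, trials = 30, 20_000
# A skewed, non-Normal population: exponential with mu = 1 and sigma^2 = 1
means = [sum(random.expovariate(1.0) for _ in range(n)) / n
         for _ in range(trials)]

avg = sum(means) / trials
var = sum((m - avg) ** 2 for m in means) / trials
print(round(avg, 3), round(var, 4))  # close to mu = 1 and sigma^2/n = 1/30
```

Despite the skewness of the population, a histogram of `means` would look close to the Normal curve predicted by the C.L.T.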
Example: Consider a population which has five members: 0, 4, 8, 12 and 16.

(a) Find the mean and variance of the population.

With N_X = 5 equally likely values,
µ_X = Σxᵢ / N_X = (0 + 4 + 8 + 12 + 16)/5 = 8
σ_X² = Σxᵢ²/N_X − µ_X² = (0² + 4² + 8² + 12² + 16²)/5 − 8² = 96 − 64 = 32

(b) List all the 25 possible samples of size 2 (i.e. n = 2).

(c) Calculate the sample mean x̄ for each of the samples in (b). {Hint: put the answers to (b) and (c) together in one table.}
sample    x̄ | sample    x̄ | sample    x̄ | sample     x̄ | sample     x̄
(0, 0)    0 | (4, 0)    2 | (8, 0)    4 | (12, 0)    6 | (16, 0)    8
(0, 4)    2 | (4, 4)    4 | (8, 4)    6 | (12, 4)    8 | (16, 4)   10
(0, 8)    4 | (4, 8)    6 | (8, 8)    8 | (12, 8)   10 | (16, 8)   12
(0, 12)   6 | (4, 12)   8 | (8, 12)  10 | (12, 12)  12 | (16, 12)  14
(0, 16)   8 | (4, 16)  10 | (8, 16)  12 | (12, 16)  14 | (16, 16)  16

(d) Construct the sampling distribution of X̄. Hence verify that µ_X̄ = µ_X, σ_X̄² = σ_X²/n and σ_X̄ = s.e.(X̄) = σ_X/√n.
x̄       0     2     4     6     8     10    12    14    16
P(x̄)   1/25  2/25  3/25  4/25  5/25  4/25  3/25  2/25  1/25

This is clearly a DISCRETE distribution, but from a diagram of the above distribution we can see that it is approximately Normal. Checking the centre and spread:

µ_X̄ = Σ x̄ᵢ fᵢ / 25 = 8 = µ_X ✓
σ_X̄² = Σ x̄ᵢ² fᵢ / 25 − µ_X̄² = 2000/25 − 8² = 80 − 64 = 16 = 32/2 = σ_X²/n ✓
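The enumeration in parts (b)–(d) can be reproduced in a few lines (a Python sketch; the pairs are ordered samples drawn with replacement, as in the table above):

```python
from itertools import product
from collections import Counter

population = [0, 4, 8, 12, 16]

# All 25 ordered samples of size 2 and their sample means
# (every pair sum here is divisible by 2, so the means are integers)
means = [(a + b) // 2 for a, b in product(population, repeat=2)]

dist = Counter(means)                      # sampling distribution of X-bar
mu = sum(means) / len(means)               # centre of the sample-mean space
var = sum(m ** 2 for m in means) / len(means) - mu ** 2

print(sorted(dist.items()))                # frequencies out of 25
print(mu, var)                             # 8.0 16.0, i.e. mu_X and 32/2
```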
Clearly,
X̄ ~ N(µ_X̄ = µ_X, σ_X̄² = σ_X²/n), approximately.
The quantity σ_X̄ = s.e.(X̄) = σ_X/√n is known as the standard error of X̄. (See formula sheet No. 14.)

C. Unbiased Estimate of µ
By the N.S.T., the centre of the sample-mean space is µ, i.e. µ_X̄ = µ_X. In the long run, X̄ estimates µ unbiasedly. Thus µ̂ = x̄. (This is a repetition of Chapter 14(B).)

Proof: Let Xᵢ ~ N(µ, σ²), so E(Xᵢ) = µ, and let
X̄ = (1/n)(X₁ + X₂ + ... + X_n).
Then
E(X̄) = (1/n) E(X₁ + X₂ + ... + X_n)
     = (1/n){E(X₁) + E(X₂) + ... + E(X_n)}
     = (1/n){µ + µ + ... + µ} = (1/n) × nµ = µ.
∴ E(X̄) = µ.
Example 1: The IQ's of five randomly selected HKU students are 112, 128, 106, 135, 141. Estimate unbiasedly the mean IQ, µ, of all HKU students.

Solution: µ̂ = x̄ = (112 + 128 + 106 + 135 + 141)/5 = 124.4

D. (Part 1) Estimation Error
When we use X̄ to estimate µ, we are aware that, for any particular sample, X̄ may not equal µ exactly. The difference X̄ − µ is called the estimation error. Its value, however, is uncertain and unknown, as µ is unknown.

D. (Part 2) Standard Error
Fortunately, we can grasp some information about the error. By the N.S.T., X̄ ~ N(µ, σ²/n), so
Var(X̄) = σ_X̄² = σ²/n and sd(X̄) = σ/√n.
This means that X̄ fluctuates about µ with an average deviation (i.e. error) equal to σ/√n. Thus σ/√n is also called the standard error of X̄, i.e.
σ_X̄ = s.e.(X̄) = σ_X/√n.
This implies that the larger the sample size, the smaller the standard error of the estimation, i.e. the more stable the performance of X̄, i.e. the more precise the estimation.

D. (Part 3) The Error Bound
Now for the standard-score values z = ±1.645, the diagram shows:
P(µ − 1.645 σ/√n < X̄ < µ + 1.645 σ/√n) = 0.90.
This means that, with 90% chance, the error X̄ − µ will be bounded, on either side, by the amount 1.645 × σ/√n. This is called the error bound of X̄ at the 0.90 chance level:
EB(X̄)₀.₉₀ = 1.645 × σ/√n ( = E, say ).
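The 0.90 figure behind z = ±1.645 can be checked directly from the Normal c.d.f.; a sketch using Python's built-in error function:

```python
from math import sqrt, erf

def phi(z):
    """Standard Normal c.d.f. via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

coverage = phi(1.645) - phi(-1.645)   # P(-1.645 < Z < 1.645)
print(round(coverage, 3))             # 0.9
```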
Example 2: The IQ's of HKU students have a N(µ, 64) distribution. A sample of size 16 is selected, and the average IQ of these 16 students is used to estimate the unknown µ.

What is the standard error of estimation?
σ_X̄ = s.e.(X̄) = σ_X/√n = 8/√16 = 2

What is the 90% chance error bound?
EB(X̄)₀.₉₀ = 1.645 × σ/√n = 1.645 × 8/√16 = 1.645 × 2 = 3.29

Other error bounds:
EB(X̄)₀.₉₀ = 1.645 × σ/√n
EB(X̄)₀.₉₅ = 1.960 × σ/√n
EB(X̄)₀.₉₉ = 2.576 × σ/√n

E. Determination of Sample Size, n
Example 3: The IQ's of HKU students have a N(µ, 64) distribution. Suppose one wishes to be 99% sure that the error (of estimation) will not exceed 3. Find the smallest sample size required to estimate µ.

Solution: We require
EB(X̄)₀.₉₉ = 2.576 × σ/√n = E, with E = 3,
so
n = (2.576σ/E)² = (2.576 × 8/3)² = 47.19, and we round up to n = 48.
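This sample-size rule works for any confidence level. A small helper (the name `sample_size` is a hypothetical choice, not from the text) that rounds up, since n must be an integer at least as large as the computed value:

```python
from math import ceil

def sample_size(z, sigma, max_error):
    """Smallest n with z * sigma / sqrt(n) <= max_error."""
    return ceil((z * sigma / max_error) ** 2)

print(sample_size(2.576, 8, 3))   # 48, as in Example 3
```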
In general, if we are interested in estimating µ and want to determine the sample size required, we use the formula
E = z_{α/2} × σ/√n, i.e. n = (z_{α/2} σ / E)².
It is necessary that we have some idea of the estimation error that we are prepared to accept, and of the standard deviation. The z-values come from the Normal tables.

F. Interval Estimation for µ
In (C) above, we presented a single value as an estimate for µ. This is rather risky, for it is unlikely that this single value will hit µ exactly. It would be safer to propose an interval to cover µ. This interval, if properly designed, will capture µ inside it with a good likelihood of, say, 90%, 95% or 99%.

Now by the N.S.T., we have
P(µ − 1.645 σ/√n < X̄ < µ + 1.645 σ/√n) = 0.90.
On rearranging, we have
P(x̄ − 1.645 σ/√n < µ < x̄ + 1.645 σ/√n) = 0.90.
The reason for this rearrangement is that the unknown value µ is isolated in the middle, and the two ends, x̄ − 1.645 σ/√n and x̄ + 1.645 σ/√n, are known values, since n is given, σ is assumed known, and x̄ can be calculated after the sample is taken.

So the interval
CI(µ)₀.₉₀ = x̄ ± 1.645 σ/√n = ( x̄ − 1.645 σ/√n , x̄ + 1.645 σ/√n ),
called the 90% confidence interval for µ, will have a 90% chance of catching µ.

Example 4: X ~ N(µ, 64), n = 16. Suppose it is found that x̄ = 120. Determine a 90% C.I. for µ.

Solution:
CI(µ)₀.₉₀ = 120 ± 1.645 × 8/√16 = 120 ± 3.29 = (116.71, 123.29)

The formula for a CI for µ when σ is KNOWN is:
CI(µ)₍₁₋α₎ = x̄ ± z_{α/2} σ/√n = ( x̄ − z_{α/2} σ/√n , x̄ + z_{α/2} σ/√n )

Example 5: Let X ~ N(µ, 36). At least how large a sample should be taken so that a 95% C.I. for µ has a width not exceeding 5 units?
Solution:
CI(µ)₀.₉₅ = x̄ ± 1.96 × 6/√n
Interval width = 2 × error = 2 × 1.96 × 6/√n ≤ 5, so the error 1.96 × 6/√n ≤ 2.5.
∴ n ≥ (zσ/E)² = (1.96 × 6/2.5)² = 22.13. Take n = 23.
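Both Example 4 and Example 5 can be checked with a small known-σ confidence-interval helper (the name `z_interval` is a hypothetical choice, not from the text):

```python
from math import sqrt

def z_interval(xbar, sigma, n, z):
    """CI for mu when sigma is known: xbar +/- z * sigma / sqrt(n)."""
    half_width = z * sigma / sqrt(n)
    return (xbar - half_width, xbar + half_width)

lo, hi = z_interval(120, 8, 16, 1.645)      # Example 4
print(round(lo, 2), round(hi, 2))           # 116.71 123.29

lo, hi = z_interval(120, 6, 23, 1.96)       # Example 5: width when n = 23
print(round(hi - lo, 2))                    # 4.9, within the 5-unit limit
```

(The x̄ = 120 in the second call is an arbitrary placeholder; the interval's width does not depend on x̄.)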