Inferences for Distributions

Inference for the Mean of a Population

Confidence intervals and tests of significance for the mean μ are based on the sample mean x̄. The sampling distribution of x̄ has μ as its mean. That is, x̄ is an unbiased estimator of the unknown μ. The spread of x̄ depends on the sample size and also on the population standard deviation σ. In the previous chapter we made the unrealistic assumption that we knew the value of σ. In practice, σ is unknown. We must estimate σ from the data even though we are primarily interested in μ. The need to estimate σ changes some details of tests and confidence intervals for μ, but not their interpretation.

Conditions for inference about a mean

Our data are a simple random sample (SRS) of size n from the population of interest. Observations from the population have a normal distribution with mean μ and standard deviation σ. In practice, it is enough that the distribution be symmetric and single-peaked unless the sample is very small.

Special Note

Both μ and σ are unknown parameters. We estimate σ with the sample standard deviation s. The sample mean x̄ has the normal distribution with mean μ and standard deviation σ/√n. We estimate σ/√n with s/√n. This quantity is called the standard error of the sample mean x̄. When the standard deviation of a statistic is estimated from the data, the result is called the standard error of the statistic.

The t distribution

When we know the value of σ, we base confidence intervals and tests for μ on the one-sample z statistic

    z = (x̄ − μ) / (σ/√n)

This z statistic has the standard normal distribution N(0, 1). When we do not know σ, we substitute the standard error s/√n of x̄ for its standard deviation σ/√n. The statistic that results does not have a normal distribution. It has a distribution that is new to us, called a t distribution. The spread of the t distribution is a bit greater than that of the standard normal distribution.
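The standard error and the resulting t statistic can be sketched in a few lines of Python. This is a minimal illustration, not part of the original notes; the function name and the sample data are hypothetical.

```python
import math
import statistics

def t_statistic(sample, mu0):
    """One-sample t statistic: t = (x̄ − mu0) / (s/√n)."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)   # sample standard deviation (n − 1 in the denominator)
    se = s / math.sqrt(n)          # standard error of the sample mean
    return (xbar - mu0) / se

# Hypothetical data: test whether the mean differs from 10
data = [9.8, 10.4, 10.1, 9.5, 10.9, 10.2]
t = t_statistic(data, 10)
```

Because s replaces the unknown σ, this t value is compared against the t(n − 1) distribution rather than N(0, 1).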
The t distribution has more probability in the tails and less in the center than does the standard normal. This is true because substituting the estimate s for the fixed parameter σ introduces more variation into the statistic. As the degrees of freedom k increase, the t(k) density curve approaches the N(0, 1) curve ever more closely. This happens because s estimates σ more accurately as the sample size increases. So using s in place of σ causes little extra variation when the sample is large.

The one-sample t procedures

Confidence Interval Procedure

Assuming the conditions are met, a level C confidence interval for μ is

    x̄ ± t* · s/√n

where t* is the upper (1 − C)/2 critical value for the t(n − 1) distribution.
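The confidence interval procedure can be sketched as follows. This is an illustrative example, not from the original notes; it assumes SciPy is available for the t critical value (`scipy.stats.t.ppf`), and the function name and data are hypothetical.

```python
import math
import statistics
from scipy import stats  # assumed available; supplies t-distribution critical values

def t_confidence_interval(sample, confidence=0.95):
    """Level-C confidence interval for μ: x̄ ± t* · s/√n, with t* from t(n − 1)."""
    n = len(sample)
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)   # standard error s/√n
    # t* is the upper (1 − C)/2 critical value of the t distribution with n − 1 df
    t_star = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    margin = t_star * se
    return xbar - margin, xbar + margin

data = [9.8, 10.4, 10.1, 9.5, 10.9, 10.2]
low, high = t_confidence_interval(data, 0.95)
```

Note that for the same data, the t interval is wider than the z interval would be, reflecting the extra uncertainty from estimating σ with s.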