STA 3024 Introduction to Statistics 2
Chapter 2: Statistical Inference

In our last review, we looked at sampling distributions. When working with those sampling distributions, however, we assumed that all the parameters were already known and given to us, which is a bit unrealistic (why?). In this chapter, we'll work under a more realistic condition: we assume that the parameters of the populations are unknown constants, which we try to estimate with statistics computed from random samples. This introduces statistical inference methods. Inference methods help us predict how close a sample statistic falls to the population parameter, so we can make decisions and predictions about a population even when we only have data for relatively few subjects from that population. This chapter introduces the two major types of statistical inference methods: confidence intervals and hypothesis tests.

PART I - CONFIDENCE INTERVALS

The idea is this: although we do not know what the population parameter is, we can give a set, a range, of reasonable estimates of the unknown parameter. For example, we do not know the mean/average amount of time American college students watch TV per week, yet we can say "It's somewhere around 2 hours to 5 hours." The interval (a time frame, in this case) of two to five hours is a confidence interval.

A confidence interval (CI) is an interval containing the most believable values for a parameter. The probability that this method produces an interval that contains the parameter is called the confidence level, usually denoted (1 - α)%. The significance level is α, which we'll become more familiar with later in this chapter. We may choose pretty much any confidence level; however, it is common sense to choose a high confidence level (in the 90% range). Note also that a 100% confidence level would result in a useless CI.
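As a concrete sketch of the definitions above, the snippet below computes a large-sample confidence interval for a mean using the familiar form x-bar ± z·s/√n, where z is the critical value matching the chosen confidence level (1 - α). The formula itself is developed later in the chapter; the sample of weekly TV hours here is entirely made up for illustration.

```python
import statistics

def mean_confidence_interval(data, confidence=0.95):
    """Large-sample CI for a population mean: x_bar +/- z * s / sqrt(n).

    `confidence` is the confidence level (1 - alpha), e.g. 0.95.
    """
    n = len(data)
    x_bar = statistics.mean(data)          # sample mean
    s = statistics.stdev(data)             # sample standard deviation
    # z critical value: e.g. ~1.96 for 95%, ~2.576 for 99%, ~1.645 for 90%
    z = statistics.NormalDist().inv_cdf((1 + confidence) / 2)
    margin = z * s / n ** 0.5
    return (x_bar - margin, x_bar + margin)

# Hypothetical sample of weekly TV-watching hours for 8 students
hours = [2.5, 3.0, 4.0, 1.5, 2.0, 3.5, 2.5, 3.0]
low, high = mean_confidence_interval(hours, confidence=0.95)
```

Note how the interval is always centered at the sample mean: raising the confidence level only widens the margin on each side.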
Take our TV-watching example: it is not very informative to say "I am 100% certain that the average amount of time American college students watch TV per week is between 0 hours and 168 hours." The three most common confidence levels are (from most to least common) 95%, 99%, and 90%.

So what does it mean to say "We are 95% confident that the average amount of time American college students watch TV per week falls between 1.70 hours and 3.17 hours"? It means that if we repeated our study a googol number of times and calculated a 95% confidence interval each time, then very nearly 95% of those googol confidence intervals would actually contain the true value of the parameter, the mean amount of time American college students watch TV per week. Fine, but how can we calculate a 95% confidence interval, or any confidence interval, for each sample taken?...
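We can watch the repeated-sampling interpretation happen in a quick simulation. A googol repetitions is out of reach, but a few thousand is enough: the sketch below (with made-up population values, purely for illustration) draws many samples from a known population, builds a 95% interval from each, and counts how often the interval captures the true mean. The observed coverage should land close to 95%.

```python
import random
import statistics

random.seed(1)                      # fixed seed so the run is reproducible
TRUE_MEAN, TRUE_SD = 2.4, 1.0       # hypothetical population mean and SD
z = statistics.NormalDist().inv_cdf(0.975)   # ~1.96 for a 95% interval
n, reps, hits = 30, 2000, 0

for _ in range(reps):
    # Draw one random sample of n weekly TV-hours observations
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(n)]
    x_bar = statistics.mean(sample)
    margin = z * statistics.stdev(sample) / n ** 0.5
    # Did this interval capture the true (normally unknown) mean?
    if x_bar - margin <= TRUE_MEAN <= x_bar + margin:
        hits += 1

coverage = hits / reps              # fraction of intervals containing TRUE_MEAN
```

In real inference we never get to check the interval against the truth, because the parameter is unknown; the simulation can only "grade" the intervals because we chose the population ourselves.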
Spring '08