Week 7 Sampling Distributions & the Central Limit Theorem (WMS Ch 7)

1 INTRODUCTION

Chapter 5 was an important turning point. Chapter 2 introduced the ideas of random events and of a mathematical "probability measure". Chapters 3 through 5 investigated a wide variety of "theoretical distributions" appropriate to different sorts of random experiment. Starting with Chapter 7, we now begin to focus more closely on "statistics". These are various functions of the observed values of random variables found in our sample data. Eventually, we use them to infer certain things about the "population" from which they were drawn. So, from here on, you should feel that there is a more obvious "practical" side to the theory than may have been apparent hitherto. There is also a good deal more opportunity to carry out simulations and analyses using R (or whatever other favourite software you may prefer). On the other hand, we now must be very careful to understand the differences between sample and population distributions.

Chapter 7 is also important because it introduces the Central Limit Theorem – an idea that is central to applied statistics and econometrics as you go forward. We have skipped Chapter 6 in the interest of time, but in the following notes you will find explanations of a few things from Chapter 6 that we really need.

2 SAMPLING AND POPULATION DISTRIBUTIONS

The basic situation is illustrated:

[Figure: the population, with mean µ and st dev σ, and the sample mean Ȳ, with mean µ_Ȳ and st dev σ_Ȳ.]

The basic idea:
- Draw a random sample of n observations from the population.
- Use those observations to calculate an estimator of the population parameter.
- Our concern is with the error of the estimate.
- If we can find the probability distribution of the estimator, we can calculate the probability of error (see the simulation sketch below).

Key provisos:
- Sampling is random.
- The population size N is large relative to the sample size n.
- We treat the observations as independent and identically distributed (iid) random variables.
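To make the repeated-sampling idea concrete, here is a minimal R simulation sketch. It is not part of the original notes: the exponential "population", the sample size n, and the number of replications are arbitrary illustrative choices. Each replication draws a sample of size n and records its sample mean; the collection of those means approximates the sampling distribution of Ȳ.

    ## Sketch: simulating the sampling distribution of the sample mean in R.
    ## All numerical values below are arbitrary choices for illustration.
    set.seed(123)                      # make the simulation reproducible
    n     <- 30                        # sample size
    reps  <- 10000                     # number of repeated samples
    rate  <- 0.5                       # rate of an exponential "population"
    mu    <- 1 / rate                  # population mean of Exp(rate)
    sigma <- 1 / rate                  # population st dev of Exp(rate)

    ## Draw 'reps' samples of size n and compute each sample mean
    ybar <- replicate(reps, mean(rexp(n, rate = rate)))

    ## The simulated means should centre on mu, have st dev close to
    ## sigma / sqrt(n), and look roughly normal even though the population is not.
    mean(ybar)
    sd(ybar)
    sigma / sqrt(n)
    hist(ybar, breaks = 50,
         main = "Simulated sampling distribution of the sample mean")

Increasing n tightens the histogram (its spread shrinks like σ/√n) and makes its shape ever closer to normal; that second fact is the Central Limit Theorem taken up later in the chapter.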
(Treating each observed value as a random variable seems like a shift in terminology. Consider the example of height. Hitherto, we would have said that "height" is an RV defined on a sample space of university students, for example. Now we want to say that each observed height y_i is itself an RV Y_i. I'm not completely convinced that this is consistent with what we have done up until now…but let's just accept it for now and proceed…)

3 APPLICATION TO A SAMPLING MEAN

Continue with the example of the height of university students. In principle, there is a "true" population mean µ that we could calculate with a 100% coverage survey. But instead, for reasons of time and cost, we decide just to examine a sample of n Toronto-area students. We therefore have a set of observations {y_1, y_2, …, y_n}.
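As a small numerical companion to this setup (not from the original notes; the heights below are made-up values), the following R sketch treats a vector of observed heights as the sample {y_1, …, y_n}, computes the sample mean Ȳ as the estimator of µ, and also reports the estimated standard error s/√n.

    ## Sketch: the sample mean as an estimator of mu, with hypothetical data.
    y <- c(172, 165, 180, 158, 175, 169, 183, 161)   # made-up heights in cm
    n <- length(y)

    ybar <- mean(y)        # sample mean: our estimate of the population mean mu
    s    <- sd(y)          # sample standard deviation
    se   <- s / sqrt(n)    # estimated standard error of the sample mean

    c(estimate = ybar, std.error = se)

Once we know the probability distribution of the estimator Ȳ, this standard error is what lets us attach probabilities to how far Ȳ is likely to be from µ.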