Section 16.2: Independent Random Samples

There are two populations. If you want to study them individually, use the methods of Chapter 15. In this section we learn how to compare the populations, using estimation and hypothesis testing. The first population has mean $\mu_X$, standard deviation $\sigma_X$ and variance $\sigma_X^2$. (Obviously, it is redundant to mention both the standard deviation and the variance, but both will appear frequently in this section.) The second population has mean $\mu_Y$, standard deviation $\sigma_Y$ and variance $\sigma_Y^2$. Of course, the researcher does not know these six numbers (really only four, given the redundancy), but Nature does.
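To make the setup concrete, here is a minimal sketch in Python that plays Nature's role: it fixes hypothetical values for $\mu_X, \sigma_X, \mu_Y, \sigma_Y$ (numbers the researcher would not know) and draws independent random samples of sizes $n_1$ and $n_2$. The parameter values, the sample sizes, and the normality of the populations are all illustrative choices, not anything taken from the text.

```python
# A sketch of Nature's side of the problem: hypothetical parameter values
# and independent random samples from two (assumed normal) populations.
import numpy as np

rng = np.random.default_rng(seed=0)

mu_X, sigma_X = 50.0, 10.0   # first population: mean and standard deviation
mu_Y, sigma_Y = 45.0, 12.0   # second population: mean and standard deviation
n1, n2 = 40, 35              # sample sizes (illustrative)

# The researcher sees only these two samples, not the parameters above.
x = rng.normal(mu_X, sigma_X, size=n1)
y = rng.normal(mu_Y, sigma_Y, size=n2)
```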
We begin with the problem of estimation. Our goal is to estimate $\mu_X - \mu_Y$. Our data consist of independent random samples from the two populations. Denote the data from the first population by $x_1, x_2, \ldots, x_{n_1}$, and denote the data from the second population by $y_1, y_2, \ldots, y_{n_2}$. It is, of course, important to look at the data and think about the purpose of the research. If it seems reasonable scientifically to compare the two populations by comparing their means, then we will proceed with the methods introduced in this section. We summarize our two sets of data by computing their means and standard deviations, which are denoted by:
$\bar X, S_X, \bar Y$ and $S_Y$ when we view them as random variables, with observed values $\bar x, s_X, \bar y$ and $s_Y$. Our point estimate of $\mu_X - \mu_Y$ is $\bar X - \bar Y$. There is a CLT for this problem too. First, it shows us how to standardize our estimator:

$$W = \frac{(\bar X - \bar Y) - (\mu_X - \mu_Y)}{\sqrt{(\sigma_X^2/n_1) + (\sigma_Y^2/n_2)}}.$$

Second, it states that we can approximate probabilities for $W$ by using the snc and that, in the limit as both sample sizes become larger and larger, the approximations are accurate. This all should sound familiar. It is very similar to what we had in Chapter 15, which was itself very similar to what we had in Chapters 6 and 7.
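Continuing the hypothetical simulation above, here is a short sketch of these computations: the sample summaries $\bar x, s_X, \bar y, s_Y$, the point estimate $\bar x - \bar y$, and the standardized value $W$. Computing $W$ exactly requires Nature's $\mu$'s and $\sigma$'s, so this is Nature's-eye-view arithmetic, shown only to make the formula concrete; the snc (taking it to mean the standard normal curve, via `scipy.stats.norm`) supplies the approximate probabilities the CLT promises.

```python
# Summaries of the two samples (x, y, and the parameters come from the
# sketch above; all values are hypothetical).
import math
import numpy as np
from scipy.stats import norm

xbar, s_X = np.mean(x), np.std(x, ddof=1)   # first-sample mean and sd
ybar, s_Y = np.mean(y), np.std(y, ddof=1)   # second-sample mean and sd

point_estimate = xbar - ybar                # estimates mu_X - mu_Y

# The standardized estimator W; the denominator uses Nature's variances.
se = math.sqrt(sigma_X**2 / n1 + sigma_Y**2 / n2)
W = ((xbar - ybar) - (mu_X - mu_Y)) / se

# The CLT says probabilities for W are approximately snc probabilities,
# e.g. P(W <= 1.0) is roughly:
print(norm.cdf(1.0))
```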
If we were all Statistics or Math majors we would spend a great deal of time going into more algebraic detail, but since we are not, we won't. The most obvious difference in this chapter is that $W$ is much messier than the $Z$ we had in Chapter 15. First, we need to eliminate the unknown parameters in the denominator of $W$. Because there are now two where in Chapter 15 there was one, this will require additional care. Second, we will need to decide what to use for our reference curve: the snc of the CLT (and Slutsky) or the t curves of Gosset. When all the smoke has cleared, statisticians suggest three methods, referred to in the text as Cases 1, 2 and 3. I personally think that Case 2 is scientifically worthless, so we won't cover it. (It is mathematically interesting, which is, IMHO, why books feature it. Me, I put it in the book b/c I wanted someone else to use my book too.)
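As a preview of where this is headed, here is a sketch of the first step: replace the unknown $\sigma_X$ and $\sigma_Y$ in the denominator of $W$ by the sample standard deviations $s_X$ and $s_Y$ (the Slutsky idea). If the snc is then used as the reference curve, an approximate confidence interval for $\mu_X - \mu_Y$ falls out. The 95% level and the z-based interval below are my illustrative reading of the large sample idea, not a substitute for the text's treatment of the three cases.

```python
# Estimated standard error: the sigma's replaced by the sample sd's, so the
# researcher can actually compute it (all quantities from the sketches above).
import math
from scipy.stats import norm

est_se = math.sqrt(s_X**2 / n1 + s_Y**2 / n2)

# Using the snc as the reference curve, an approximate 95% confidence
# interval for mu_X - mu_Y:
z = norm.ppf(0.975)          # snc critical value for 95%
lower = (xbar - ybar) - z * est_se
upper = (xbar - ybar) + z * est_se
print(lower, upper)
```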
We will begin with Case 3, which I call the large sample approximation in the text. (Yes, I succumbed to popular terminology.) Case 3 makes a lot of sense to the new stu-