CHAPTER 14  STATISTICAL INFERENCE: OTHER TWO-SAMPLE TEST STATISTICS

14.1 Introduction

Learning about the F statistic and the F sampling distribution, which are used to test hypotheses about two variances, and how to use a z statistic to test hypotheses about two population proportions.

14.2 Two-Sample F Test and Confidence Interval for Variances Using Independent Samples

F Test for Two Variances (Independent Samples)

Researchers might want to see whether two populations differ in dispersion, or they might want to test one of the assumptions of the t test for independent samples: that the two unknown population variances are equal.

The F statistic tests one of three pairs of hypotheses:

    $H_0\colon \sigma_1^2 = \sigma_2^2$ versus $H_1\colon \sigma_1^2 \neq \sigma_2^2$
    $H_0\colon \sigma_1^2 \geq \sigma_2^2$ versus $H_1\colon \sigma_1^2 < \sigma_2^2$
    $H_0\colon \sigma_1^2 \leq \sigma_2^2$ versus $H_1\colon \sigma_1^2 > \sigma_2^2$

and is

    $F = s_{\text{larger}}^2 / s_{\text{smaller}}^2$

where $s_{\text{larger}}^2$ and $s_{\text{smaller}}^2$ denote, respectively, the larger and smaller sample variance, and each sample variance is computed using

    $s^2 = \sum (X_i - \bar{X})^2 / (n - 1)$.

The degrees of freedom for the numerator and denominator are, respectively, $\nu_1 = n_{\text{larger}} - 1$ and $\nu_2 = n_{\text{smaller}} - 1$, where $n_{\text{larger}}$ and $n_{\text{smaller}}$ are the sizes of the samples with the larger and smaller variance.

The sampling distribution of F was derived by Ronald A. Fisher, and G. W. Snedecor named the statistic after him. Like the t distribution, the F distribution is a family of distributions whose shape depends on its degrees of freedom. Unlike the z and t distributions, which are symmetrical, the F distribution is positively skewed. The shape of the F distribution approaches the normal for large values of $\nu_1$ and $\nu_2$. F is a ratio of non-negative numbers, so it can take on values from 0 to $\infty$. Values around 1 are expected if the null hypothesis that $\sigma_1^2 = \sigma_2^2$ is true.

Assumptions for using F to test a null hypothesis:
1. Independent samples
2. Populations are normally distributed
3. Participants are random samples from the populations of interest, or the participants have been randomly assigned to the conditions in the experiment.

Unlike the t statistic, F is not robust to violation of the normality assumption, regardless of how large your sample is.
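The computation above can be sketched in a few lines of Python. This is a minimal illustration, not part of the chapter; the function name `f_test_variances` and the example data are my own. It places the larger sample variance in the numerator, exactly as the formula prescribes.

```python
# Sketch of the two-sample F statistic for variances described above.
# Uses only the standard library; names and data are illustrative.
from statistics import variance  # sample variance with n - 1 in the denominator


def f_test_variances(sample_a, sample_b):
    """Return (F, nu1, nu2) with the larger sample variance in the
    numerator, so the computed F is always >= 1."""
    var_a, var_b = variance(sample_a), variance(sample_b)
    if var_a >= var_b:
        larger, smaller = (var_a, len(sample_a)), (var_b, len(sample_b))
    else:
        larger, smaller = (var_b, len(sample_b)), (var_a, len(sample_a))
    f_stat = larger[0] / smaller[0]
    return f_stat, larger[1] - 1, smaller[1] - 1  # nu1, nu2


# Example: does group 1 show more dispersion than group 2?
group1 = [12, 15, 11, 18, 14, 16]   # s^2 = 6.67
group2 = [13, 14, 13, 15, 14]       # s^2 = 0.70
f_stat, nu1, nu2 = f_test_variances(group1, group2)
print(f"F = {f_stat:.2f} with {nu1} and {nu2} degrees of freedom")
# -> F = 9.52 with 5 and 4 degrees of freedom
```

The returned F would then be compared against the critical value for $\nu_1$ and $\nu_2$ degrees of freedom at the chosen $\alpha$.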
So, unless the normality assumption is fulfilled, the probability of making a Type I error will not equal the preselected value of $\alpha$. Do NOT use F unless you have good reason to believe that the two variables $X_1$ and $X_2$ are normally distributed.

$F_{\alpha;\,\nu_1,\nu_2}$ is the critical value of F that cuts off the upper $\alpha$ region of the sampling distribution for $\nu_1$ and $\nu_2$ degrees of freedom. The first subscript after $\alpha$ denotes the df for the numerator of F, and the second denotes the df for the denominator. $F_{1-\alpha;\,\nu_1,\nu_2}$ is the critical value of F that cuts off the lower $\alpha$ region of the sampling distribution for $\nu_1$ and $\nu_2$ degrees of freedom. By placing the larger sample variance in the numerator and the smaller in the denominator of F, we avoid the need to know the lower-tail critical values, because the computed F is then always at least 1 and only the upper tail of the distribution has to be consulted.