Two Sample Comparisons Part 1
Chapter 10: Two Sample Comparison

Comparison Problems
Take a sample from two separate populations and compare the statistics (???) of interest. Are the statistics different enough for us to say there is a difference in the populations?

Some Examples
1. Does a soft drink sell better on the end of an aisle than in the middle?
2. Are workaholics more often men or women?
3. How much do people save using an online auto insurance company?

What we could do:
1. Compare _______ to ________. Goal?
2. Compare _______ to ________. Goal?
3. Compare _______ to ________. Goal?

Organization of this topic
1. Comparing means, independent populations
2. Comparing means, related populations
3. Comparing proportions
4. Comparing variances
5. Comparing medians (Chapter 12)

Notation is a little complicated
We now have two means, two standard deviations, and two sample sizes. We will use subscripts to keep it all straight.
Population 1: μ1 and σ1. Sample 1: n1, Xbar1, and S1.
For population 2, use n2 etc.
Comparing population means

We might want to estimate the difference (μ1 - μ2) using the sample data. Or, it could be a test H0: μ1 = μ2. The test can be restated H0: μ1 - μ2 = 0, so it is almost the same problem.

Estimating the difference

Because we want to estimate (μ1 - μ2), we need to know something about the distribution of (Xbar1 - Xbar2). We will first look at the (unlikely?) case when we know σ1 and σ2.

Theory
From theory about functions of random variables, we just combine the two variances:

    Var(Xbar1 - Xbar2) = σ1²/n1 + σ2²/n2

Confidence interval

A confidence interval is generated by (Xbar1 - Xbar2) ± ME, where the ME is given by:

    ME = Z(α/2) · sqrt(σ1²/n1 + σ2²/n2)

Hypothesis test
If we just want to test whether there is a difference, we would look at:

    H0: μ1 - μ2 = D0
    H1: μ1 - μ2 ≠ D0

where D0 = 0 (no difference).

The test statistic
Compute:

    Z = [(Xbar1 - Xbar2) - D0] / sqrt(σ1²/n1 + σ2²/n2)

Decision rule: at α = .05, reject H0 if ZCALC > 1.96 or if ZCALC < -1.96.
Waiting times at O'Marios

At O'Marios Irish-Italian restaurant, the standard deviation in waiting time is 6 minutes. On Thursday night, a sample of 26 customer groups waited an average of 38.5 minutes before being seated. On Saturday night, a sample of 32 groups waited an average of 43.2 minutes.
1. Test to determine if there was a difference.
2. Estimate the average difference.

Hypothesis Test

Hypotheses:
Decision Rule:
Results:

Interval estimation

Interval:
Interpretation:

Population Variances Unknown

If you don't know the means, how would you know σ1 and σ2? The known-variance case does not happen very often in practice. We will cover three different methods for estimating them from the sample.
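Before moving on, the known-σ O'Marios example can be checked with a short script. This is a minimal sketch of the Z test and interval from the slides above, assuming the slide's numbers (σ = 6, Thursday: n = 26, Xbar = 38.5; Saturday: n = 32, Xbar = 43.2):

```python
from math import sqrt
from statistics import NormalDist

# O'Marios waiting times: known sigma = 6 minutes for both nights
sigma, n1, xbar1 = 6.0, 26, 38.5   # Thursday
n2, xbar2 = 32, 43.2               # Saturday

# Standard error of (Xbar1 - Xbar2) with known variances
se = sqrt(sigma**2 / n1 + sigma**2 / n2)

# Two-sided Z test of H0: mu1 - mu2 = 0
z = (xbar1 - xbar2) / se
p_value = 2 * NormalDist().cdf(-abs(z))

# 95% confidence interval for (mu1 - mu2)
me = 1.96 * se
lo, hi = (xbar1 - xbar2) - me, (xbar1 - xbar2) + me

print(f"Z = {z:.2f}, p = {p_value:.4f}")   # Z = -2.97: reject H0 at alpha = .05
print(f"95% CI: ({lo:.2f}, {hi:.2f})")     # interval excludes zero
```

So the Thursday/Saturday difference is significant, and the interval estimate of the average difference lies entirely below zero (Saturday waits are longer).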
10.1: Independent Samples

Two samples, drawn independently of each other. How should we go about estimating:

    Std Error = sqrt(σ1²/n1 + σ2²/n2)

The use of sample SDs puts us in a case where we will use a t-distribution, but the d.f. to use depends on an additional assumption.

An example
Do male QVSN customers spend different amounts than females? We want an interval estimate of the difference.

              Avg.     S     n
    Males    462.72  46.92  20
    Females  501.33  50.37  25

Population variances equal

Even though we don't know σ1 and σ2, let us suppose they are about equal. That looks reasonable here. If that is the case, we really only need to estimate their common value. We can combine the two samples for the purpose of doing this.

Pooling the samples
Under this procedure we combine or "pool" the samples together to estimate the "common" variance of X and Y:

    Sp² = [(n1 - 1)S1² + (n2 - 1)S2²] / (n1 + n2 - 2)

Standard error and d.f.

    Std Error = Sp · sqrt(1/n1 + 1/n2)

    df = n1 + n2 - 2
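As a check on the arithmetic, here is a sketch of the pooled procedure applied to the QVSN summary numbers from the example slides (Males: Xbar = 462.72, S = 46.92, n = 20; Females: Xbar = 501.33, S = 50.37, n = 25). The t critical value 2.017 for 43 d.f. is an approximation read from a t table:

```python
from math import sqrt

# QVSN summary statistics (from the example slides)
n1, xbar1, s1 = 20, 462.72, 46.92   # Males
n2, xbar2, s2 = 25, 501.33, 50.37   # Females

# Pooled estimate of the common variance
sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
se = sqrt(sp2) * sqrt(1 / n1 + 1 / n2)
df = n1 + n2 - 2                    # 43

diff = xbar1 - xbar2
t = diff / se

# 95% interval; t(.025, 43) is approximately 2.017 (from a t table)
me = 2.017 * se
lo, hi = diff - me, diff + me

print(f"t = {t:.3f} on {df} d.f.")
print(f"95% CI: ({lo:.2f}, {hi:.2f})")   # interval excludes zero
```

The interval lies entirely below zero, suggesting females spend more on average under the equal-variance assumption.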
Our QVSN customer example
              Avg.     S     n
    Males    462.72  46.92  20
    Females  501.33  50.37  25

What if you can't assume σ1 = σ2?

We will keep the sample variances "separate". No pooling:

    Std Error = sqrt(S1²/n1 + S2²/n2)

It is another T-distribution, but the d.f. "turn ugly".
Satterthwaite's formula
The degrees of freedom need to be calculated from:

    v = (S1²/n1 + S2²/n2)² / [ (S1²/n1)²/(n1 - 1) + (S2²/n2)²/(n2 - 1) ]

#$%@*&^!!!

I will not require the calculation of this by hand. PhStat has this built in to its computer routine.
Upper bound on df: ______________
Lower bound on df: ______________

Amalgamated Distributors
Samples of accounts receivable at their two offices. The AR managers have different collection philosophies. Can we conclude there is a difference, on average?

                 Avg.   S    n
    East Office   290   15   16
    West Office   250   50   11
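One way to carry out the separate-variance calculation on these summaries, including Satterthwaite's d.f. formula from the earlier slide, is the sketch below. The bound noted in the comment (between min(n1, n2) - 1 and n1 + n2 - 2) is the standard range for this d.f.:

```python
from math import sqrt

# Amalgamated Distributors summary statistics
n1, xbar1, s1 = 16, 290.0, 15.0   # East Office
n2, xbar2, s2 = 11, 250.0, 50.0   # West Office

v1, v2 = s1**2 / n1, s2**2 / n2   # per-sample variance contributions
se = sqrt(v1 + v2)
t = (xbar1 - xbar2) / se

# Satterthwaite's degrees of freedom
df = (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
# Always between min(n1, n2) - 1 = 10 and n1 + n2 - 2 = 25
print(f"t = {t:.3f}, Satterthwaite d.f. = {df:.2f}")  # about 11.2: use 11 d.f.
```

Note how the very different SDs (15 vs. 50) pull the d.f. down near the lower bound, because the smaller, noisier sample dominates the standard error.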
Test for differences

We will do a two-sided test since we did not know the "direction" ahead of time.

PHstat procedures (1)
Use one of these if you have the data in a file.

PHstat procedures (2)
Use one of these if you only have the summary statistics (means and SDs).

Workbook Alternatives
PooledVariance T.xls and SeparateVariance T.xls. The pooled procedure will do both tests and a confidence interval. Later we will look at workbooks for other two-sample problems.

Textbook example (page 340)

10 pizzas were ordered at various times from a local restaurant, and another 10 from a national chain. Data are in PizzaTime.xls. Can we conclude the local restaurant delivers faster?

Two Sample Comparisons Part 2
10.2 and 10.3

Review of 10.1: Comparing means

We looked at cases when the two σ1 and σ2 were not known, so we had to estimate them from the sample. One T-distribution procedure operated under the assumption that the two population variances were about equal. The second kept them "separate" but had the degrees of freedom computed by a formula.
10.2: Matched Pair Problems

Now suppose each observation in sample 1 is matched with a specific one in sample 2. Pizza delivery: at 10 different times, we order a pizza from the local restaurant. At the same time we order a similar pie from the national chain. We record how long it takes to deliver each pizza and compare it to its "match".
Matching controls for outside factors

Some types of pizza will naturally take longer to make. Compare ______ to ______.
At different times, both restaurants will be busier so service takes longer. Compare _____ to _____.
Sometimes traffic will be heavier so delivery takes longer. Compare _____ to _____.

Paired-T method
1. Look at the differences between pairs: di = (X1i - X2i). This gives you one set of measurements d1, d2, ..., dn.
2. Now compute Dbar and SD.
3. Do a one-sample confidence interval or T-test. There are n - 1 d.f.

Matched pair method
Hypothesis:

    H0: μ1 - μ2 = μD = D0
    H1: μ1 - μ2 = μD ≠ D0

Test statistic:

    T = (Dbar - D0) / (sD / sqrt(n))

Or an interval with Std Error = sD / sqrt(n).
So easy a gecko can do it?

10 potential auto insurance customers get quotes from a local agent and an online insurance company. Some "profiles" will have high rates regardless; others will be lower. Matching across customer type should help "block out" the risk factor and help us get a better estimate of the cost difference.
AutoInsurance.XLS

Auto insurance quotes for 10 potential clients. Local is the quote from a local agent of a national insurer; Online is a frequently-advertised internet company.

    Client  Risk  Local  Online
    1       Low     568     391
    2       Med     872     602
    3       Low     451     488
    4       High   1229     903
    5       Med     605     633
    6       High   1021    1027
    7       Med     783     634
    8       Med     844     689
    9       High    907     921
    10      Med     712     702

Two-Sample BoxPlot
[Box plot: Auto Insurance Quotes, Local vs. Online, scale 390 to 1290. The two groups don't appear very different?]

Computing the Differences
    Client  Risk  Local  Online  Diff
    1       Low     568     391   177
    2       Med     872     602   270
    3       Low     451     488   -37
    4       High   1229     903   326
    5       Med     605     633   -28
    6       High   1021    1027    -6
    7       Med     783     634   149
    8       Med     844     689   155
    9       High    907     921   -14
    10      Med     712     702    10

    Mean:  Local 799.20, Online 699.00, Diff (Dbar) 100.20
    SD:    Local 229.28, Online 198.88, Diff (sD)   132.84

Confidence interval
1. Compute the interval.
2. Does the interval include zero?
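A sketch of the paired-T interval for the AutoInsurance quotes above; t(.025, 9) = 2.262 is the usual table value for 9 d.f.:

```python
from math import sqrt
from statistics import mean, stdev

# AutoInsurance.XLS quotes (local agent vs. online company)
local  = [568, 872, 451, 1229, 605, 1021, 783, 844, 907, 712]
online = [391, 602, 488,  903, 633, 1027, 634, 689, 921, 702]

# Paired-T method: work with the differences d_i = local_i - online_i
d = [l - o for l, o in zip(local, online)]
n = len(d)
dbar, sd = mean(d), stdev(d)          # Dbar = 100.20, sD = 132.84

# 95% interval with n - 1 = 9 d.f.; t(.025, 9) = 2.262
se = sd / sqrt(n)
me = 2.262 * se
lo, hi = dbar - me, dbar + me

print(f"Dbar = {dbar:.2f}, sD = {sd:.2f}")
print(f"95% CI: ({lo:.2f}, {hi:.2f})")   # excludes zero: local quotes run higher
```

So the answer to the prompt above is no: the interval does not include zero, even though the box plots of the raw quotes overlapped heavily. Matching removed the client-to-client risk variation.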
Textbook's pizza example
Data are in PizzaTime.xls. We used this data earlier but did not know about the matching. This time we will take advantage of the matching, and will test:

    H0: μLocal - μNational = 0   (No difference)
    H1: μLocal - μNational < 0   (Local faster)

Results
Is Local faster?

    Hypothesized Mean Diff.    0
    Level of significance      0.05
    Sample Size                10
    DBar                       -2.1800
    Degrees of freedom         9
    SD                         2.2641
    Standard Error             0.7160
    t Test Statistic           -3.0448
    Lower-Tail Test:
    Lower Critical Value       -1.8331
    p-Value                    0.0070
    Reject the null hypothesis

10.3: Comparing two proportions

From each sample, we count the number of observations that have some attribute, then compute the proportion that do. We are interested in estimating the difference (π1 - π2) between the two population proportions. Other procedures exist, but we will look only at a method that requires both samples to be large.
Notation
Let x1 be the number in sample 1 that have the attribute, and n1 the size of the first sample. Then p1 = x1/n1 is our estimate of π1. (Similar quantities in the second sample.)
Interval estimate

The form is (p1 - p2) ± ME, where the margin of error is:

    ME = Z(α/2) · sqrt( p1(1 - p1)/n1 + p2(1 - p2)/n2 )
Example

People often choose a physician by word of mouth. Is the frequency the same in small towns as in large cities? Do a 95% interval:

         Towns  Cities
    X     350     390
    n     550     655
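A sketch of the 95% interval for these word-of-mouth counts:

```python
from math import sqrt

# Choosing a physician by word of mouth
x1, n1 = 350, 550   # small towns
x2, n2 = 390, 655   # large cities

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2

# Margin of error for a 95% interval
me = 1.96 * sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - me, diff + me

print(f"p1 - p2 = {diff:.4f}")
print(f"95% CI: ({lo:.4f}, {hi:.4f})")   # interval includes zero: not significant
```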
Interval Estimate

Estimate of difference:
Is the difference significant?

Testing proportions for equality
Suppose we want to test two population proportions for equality:

    H0: π1 - π2 = 0
    H1: π1 - π2 ≠ 0

If π1 and π2 are equal, we should "pool" p1 and p2 together to estimate the common value.
Pooled standard error

Compute the pooled proportion:

    pbar = (X1 + X2) / (n1 + n2)

Then use it in the standard error:

    Std Error = sqrt( pbar(1 - pbar) · (1/n1 + 1/n2) )
Do men workaholics like their job more?

USA Today surveyed almost 1600 workaholics (people who worked 60+ hours a week). One of the reasons cited for the long hours was they loved their job because it was stimulating or challenging. Was this reason cited equally by gender, or was it more a characteristic of males? Results (pages 357-358):

          Men  Women
    X     707    638
    n     786    778
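A sketch of the pooled-proportion Z test for these counts, run upper-tail since the question asks whether males cite this reason more often:

```python
from math import sqrt
from statistics import NormalDist

# USA Today workaholics survey: "love the job" counts
x1, n1 = 707, 786   # men
x2, n2 = 638, 778   # women

p1, p2 = x1 / n1, x2 / n2
pbar = (x1 + x2) / (n1 + n2)          # pooled proportion under H0

se = sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 1 - NormalDist().cdf(z)     # upper-tail test

print(f"z = {z:.2f}, p = {p_value:.6f}")   # strongly reject: more a male characteristic
```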
Results (from computer)

Two Sample Comparison Part 3 of 3
Sections 10.4 and 14.5

Perceived restaurant quality

Does spending more in a restaurant lead to greater customer satisfaction? Readers of a consumer magazine rated 29 chain restaurants for satisfaction (scale of 0 to 100) based on food quality. The restaurants were categorized as high-priced or low-priced depending on the average expenditure per person. Data are in FoodQuality.xls.
Do higher-priced restaurants have higher quality?

Customer satisfaction (0 to 100) about taste of food, by price of meal:

    LoPrice: 59 62 73 76 77 78 79 80 80 81 81 83 83
    HiPrice: 75 77 77 78 79 79 79 79 80 80 81 81 82 82 82 83 83 84

Data and statistics

              n   average  std dev
    LoPrice  13    76.31     7.57
    HiPrice  18    80.06     2.41

We will do one-sided tests under both T-test procedures.
T-Test Results

Pooled variance: Are High-Priced Restaurants Rated Better? (assumes equal population variances)

    Hypothesized Difference            0
    Level of Significance              0.05
    Population 1 Sample d.f.           12
    Population 2 Sample d.f.           17
    Total Degrees of Freedom           29
    Pooled Variance                    27.09358
    Standard Error                     1.8946
    Difference in Sample Means         -3.7479
    t Test Statistic                   -1.9782
    Lower-Tail Test:
    Lower Critical Value               -1.6991
    p-Value                            0.0287
    Reject the null hypothesis

Separate variance: Are High-Priced Restaurants Rated Higher? (assumes unequal population variances)

    Hypothesized Difference             0
    Level of Significance               0.05
    Numerator of Degrees of Freedom     22.3324
    Denominator of Degrees of Freedom   1.6212
    Degrees of Freedom                  13.7750, use 13
    Standard Error                      2.1739
    Difference in Sample Means          -3.7479
    Separate-Variance t Test Statistic  -1.7241
    Lower-Tail Test:
    Lower Critical Value                -1.7709
    p-Value                             0.0542
    Do not reject the null hypothesis

Equal Variances or Not?

For comparing population means from two independent samples, we have looked at two procedures. One assumes that the two population variances are about equal; the other does not. Which one we use can make a difference in our conclusion, particularly when samples are not large.
Box plots

[Box plot: Restaurant quality ratings, HiPrice vs. LoPrice, scale 50 to 90.]

10.4: Testing for equality of variances

For small and moderate samples, it is useful to have a way to test for equality instead of just blindly assuming one case or the other. We can thus consider the test:

    H0: σ1² = σ2²   (use pooled variance)
    H1: σ1² ≠ σ2²   (use separate variance)

Formation of this test

For tests comparing population means, we base the test statistic on the difference in sample means. For tests about variances, we instead form a ratio of one sample variance to the other. We are going to look at a variation of this test that always has the larger variance in the numerator.
Our test

Test statistic is:

    F = S²(larger) / S²(smaller)

S(larger) denotes the larger of our two sample standard deviations. Always put it in the numerator so we are looking at a ratio bigger than one. Let's call the sample size for this sample n(larger). The degrees of freedom are (n(larger) - 1) for the numerator and (n(smaller) - 1) for the denominator, and the test uses an F distribution.
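A sketch of this F ratio for the restaurant-quality SDs (7.57 and 2.41 from the data slide). With these rounded SDs the ratio comes out near 9.87; PHStat output shown later, which uses the unrounded sample variances, reports 9.8330:

```python
# F test for equality of variances: larger sample variance over smaller
s_lo, n_lo = 7.57, 13   # LoPrice sample SD and size (larger variance)
s_hi, n_hi = 2.41, 18   # HiPrice sample SD and size (smaller variance)

F = s_lo**2 / s_hi**2                 # always > 1 by construction
df_num, df_den = n_lo - 1, n_hi - 1   # 12 and 17

# Compare to the upper critical value F(.05, 12, 17) = 2.3807 (from the table)
print(f"F = {F:.2f} on ({df_num}, {df_den}) d.f.")  # far above 2.38: reject equality
```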
F tests, in general

This is the first of several hypothesis testing procedures we will see that use the F distribution. They all compare sources of variation for equality by looking at a ratio. F ratios have two degrees of freedom parameters, one for the numerator and one for the denominator.
F Table Setup
Appendix Table E.5, on page 742, is the table for α = .05. Columns give the numerator d.f., rows give the denominator d.f., and the body holds the value of F at a significance level of .05.

What is a significant F value?

Because an F is the ratio of two positive values, all F values are bigger than 0. If the two population variances are about equal, we should see the sample variances about the same, so the F ratio should be about 1. If the two population variances are not equal, the F value should be substantially larger than 1. A quick scan of the table shows values of 2 to 3 are typically significant at α = .05.
Implementing this test

Always make the sample with the larger SD the numerator, or sample 1. By hand: use F = 2.5 as an approximate critical value. In PHStat, ask for an upper-tail test; it will look up the correct critical value. Use that.

PHStat/Template output
Are variances different?

    Level of Significance             0.05
    Larger-Variance Sample:
      Sample Size                     13
      Sample Variance                 57.23077
    Smaller-Variance Sample:
      Sample Size                     18
      Sample Variance                 5.820261
    F Test Statistic                  9.8330
    Population 1 Sample d.f.          12
    Population 2 Sample d.f.          17
    Upper-Tail Test:
    Upper Critical Value              2.3807
    p-Value                           0.0000
    Reject the null hypothesis

Suppose the data are badly behaved?

Both versions of the T-test comparing means assume the samples come from normally-distributed populations. If this is not true, we need to be concerned about sample size. With small or medium-sized samples, an alternative procedure might be better.

Restaurant quality not normal
[Box plot: Restaurant quality ratings, HiPrice vs. LoPrice, scale 50 to 90.]

Procedures based on ranks

There are several statistical techniques discussed in Chapter 12 that are based on the ranks of the data. These techniques make much weaker assumptions about the data, mainly that the data represent a random sample from some distribution.

Ranking: a small example
    Sample 1: 22  24  31  32      Ranks of 1:
    Sample 2: 11  18  24          Ranks of 2:

    Rank Sum:
    Average Rank:

What would be the average rank if both samples came from the same distribution? ______

Our revised hypothesis test
H0: The two populations have the same distribution.
H1: The two populations have different distributions (or higher/lower).

Under H0, the average rank of the data in sample 1 would be about equal to the average rank of the data in sample 2.

12.5: Wilcoxon Rank Sum Procedure
1) Let n1 = size of the smaller of the samples.
2) Combine the data (n = n1 + n2).
3) Rank from smallest (1) to largest (n).
4) Add up the ranks from the smaller of the two samples: T1.

The test statistic compares T1 to its expected value via a Z-statistic.

PHStat or Workbook Output

Ratings higher?

    Level of Significance         0.05
    Population 1 Sample:
      Sample Size                 13
      Sum of Ranks                175.5
    Population 2 Sample:
      Sample Size                 18
      Sum of Ranks                320.5
    Total Sample Size n           31
    T1 Test Statistic             175.5
    T1 Mean                       208
    Standard Error of T1          24.97999
    Z Test Statistic              -1.301041
    Lower-Tail Test:
    Lower Critical Value          -1.644854
    p-Value                       0.096622
    Do not reject the null hypothesis
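The small ranking example from a few slides back (Sample 1: 22, 24, 31, 32; Sample 2: 11, 18, 24) can be used to sketch the procedure. Ties get the average (mid) rank, and the Z statistic uses the standard mean n1(n+1)/2 and standard error sqrt(n1·n2·(n+1)/12) for T1, ignoring the tie correction to the variance (as the slides do):

```python
from math import sqrt

sample1 = [22, 24, 31, 32]
sample2 = [11, 18, 24]          # the smaller sample: its ranks form T1

combined = sample1 + sample2
n = len(combined)

def midrank(v, values):
    """Rank of v among values, giving ties their average rank."""
    below = sum(1 for x in values if x < v)
    tied = sum(1 for x in values if x == v)
    return below + (tied + 1) / 2

# T1 = sum of ranks of the smaller sample (here: 1 + 2 + 4.5)
n1, n2 = len(sample2), len(sample1)
t1 = sum(midrank(v, combined) for v in sample2)

# Z statistic: compare T1 to its expected value under H0
mean_t1 = n1 * (n + 1) / 2
se_t1 = sqrt(n1 * n2 * (n + 1) / 12)
z = (t1 - mean_t1) / se_t1

print(f"T1 = {t1}, expected {mean_t1}, Z = {z:.3f}")
```

With samples this tiny the Z approximation is rough, but the mechanics are the same ones behind the PHStat output above.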
This note was uploaded on 02/14/2011 for the course QMB 3250 taught by Professor Thompson during the Spring '08 term at University of Florida.