Steps for Hypothesis Testing
1. H1 and H0
2. Determining the nature of the dependent variable
3. Choosing the appropriate test statistic
4. Setting Type I & Type II error rates
5. Determining sample size
6. Collecting data
7. Conducting appropriate statistical tests
8. Calculating observed effect sizes
9. Decision making

Step 5: Determining Sample Size
Why is this important?
1. Practical significance
2. More formulaic research design
3. Future comparisons

Effect size (post comparison): the extent to which the IV had an effect in separating the two populations.

[Figure: the H0 and H1 distributions, separated by the minimum effect of interest.]
Minimum effect of interest (MEI): the smallest effect size considered important, usually based on past research or a "medium" level.

Effect size: the "practicality" of your finding; its magnitude.
d-family: differences between groups or levels of the IV.
r-family: the correlation coefficient between two variables.

We have already dealt with Pearson's r (magnitude + direction!).
Power: the probability of detecting your minimum effect of interest.

- Too much or too little power is a problem.
- Power is directly related to your sample size.
- Generally, set power to 0.80 and then determine the sample size.
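The rule of thumb above (fix alpha, target power of 0.80, then solve for N) can be sketched with the standard large-sample normal approximation for comparing two group means. This helper is illustrative, not part of the notes; the function name and defaults are assumptions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(d, alpha=0.05, power=0.80):
    """Approximate N per group for a two-tailed, two-sample comparison
    of means with standardized minimum effect of interest d.
    Normal approximation: n = 2 * ((z_alpha + z_power) / d)^2."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for the Type I error rate
    z_power = z(power)          # quantile matching the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# A "medium" MEI (d = 0.5) needs far fewer subjects than a small one:
print(sample_size_per_group(0.5))  # -> 63
print(sample_size_per_group(0.2))  # -> 393
```

A t-based calculation (as dedicated power software performs) gives slightly larger N than this normal approximation.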
Effect Sizes

r-family (r, phi):
- Correlation / Pearson's r
- Chi-square test of association (correlation)

d-family (d, eta):
- Comparing 2 means = t-test
- 2+ means, non-linear relationships = ANOVA (non-linear correlation)
r = strength & directionality of a linear relationship (interval data).

|r| (the absolute value) refers to the strength of the relationship:
- |0.10| = weak relationship
- |0.25| = moderate relationship
- |0.40+| = strong relationship
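These benchmarks can be checked directly. A minimal, self-contained sketch; the data and function names are invented for illustration:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's r: strength and direction of a linear relationship."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strength(r):
    """Classify |r| with the rough benchmarks above."""
    r = abs(r)
    if r >= 0.40:
        return "strong"
    if r >= 0.25:
        return "moderate"
    if r >= 0.10:
        return "weak"
    return "negligible"

hours = [1, 2, 3, 4, 5]          # hypothetical study-time data
scores = [52, 55, 61, 60, 70]    # hypothetical exam scores
r = pearson_r(hours, scores)
print(round(r, 2), strength(r))  # -> 0.94 strong
```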
Power Distribution: Null & Sampling Distributions

[Figure: the null distribution (H0 = true) with alpha/2 rejection regions in each tail, overlaid with the sampling distribution under H1 and the critical values. Legend: red = per-comparison error rate (alpha) on the H0 distribution; dark green = 1 - alpha; blue stripes = 1 - power, or beta (on the sampling distribution); dotted green = power, or 1 - beta.]

More on power
Increases power:
- increase N
- increase MEI

Decreases power:
- decrease N
- decrease MEI
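Both levers, N and the MEI, show up in a quick power calculation. This uses the large-sample normal approximation for a two-sample comparison of means and is a sketch, not the notes' own method:

```python
from math import sqrt
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-tailed, two-sample test of means
    for standardized effect d and n subjects per group."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(d * sqrt(n_per_group / 2) - z_crit)

print(round(approx_power(0.5, 30), 2))  # smaller N  -> less power (0.49)
print(round(approx_power(0.5, 63), 2))  # larger N   -> ~0.80
print(round(approx_power(0.8, 30), 2))  # larger MEI -> more power (0.87)
```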
Step 6: Collect data.

Step 7: Conduct the appropriate statistical test.

Do we have statistical significance? To answer this, we compare our results against the possibility that NOTHING happened, i.e., against what we would expect given that the null hypothesis is true.

We assume the null hypothesis to be true! That is why we set up our distribution around the null hypothesis.

[Figure: the null distribution (H0 = true) with alpha/2 rejection regions in each tail.]

Step 7: Statistical Significance
We compare our observed values to our critical values.

If our observed value falls beyond the critical values, we reject the null hypothesis: the probability of finding that observed value or greater, given that the null hypothesis IS TRUE, is less than 0.05 (i.e., what we set alpha to be!).

If our observed value does not fall beyond the critical values, we retain the null hypothesis: the probability of finding that observed value or greater, given that the null hypothesis IS TRUE, is greater than 0.05.

Step 8: Practical Significance

Asking the question: do we have a result that is both statistically and practically significant?
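The reject/retain logic can be sketched for a two-tailed z-test (a simplification; the notes' tests may instead be t-tests, and the function name is illustrative):

```python
from statistics import NormalDist

def decide(z_observed, alpha=0.05):
    """Reject H0 if the observed statistic falls beyond the
    critical values that alpha places in the two tails."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)     # +/-1.96 for alpha = .05
    p = 2 * (1 - nd.cdf(abs(z_observed)))  # two-tailed p-value
    verdict = "reject H0" if abs(z_observed) > z_crit else "retain H0"
    return verdict, round(p, 4)

print(decide(2.31))  # beyond +/-1.96 -> ('reject H0', 0.0209)
print(decide(1.20))  # within +/-1.96 -> ('retain H0', 0.2301)
```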
"Significant" on its own means statistically significant: the difference, even if trivial, is not due to chance.
How to determine: if your effect size is greater than your MEI, then you have practical significance. Effect sizes go beyond statements about chance.
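As a concrete sketch of the effect-size-versus-MEI check using Cohen's d (the data and the MEI value here are invented for illustration):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(g1, g2):
    """Cohen's d: difference between two group means divided by
    the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    pooled = sqrt(((n1 - 1) * stdev(g1) ** 2 + (n2 - 1) * stdev(g2) ** 2)
                  / (n1 + n2 - 2))
    return (mean(g1) - mean(g2)) / pooled

MEI = 0.5  # hypothetical "medium" minimum effect of interest
treatment = [14, 16, 15, 18, 17, 19]
control   = [12, 13, 14, 12, 15, 13]
d = cohens_d(treatment, control)
print(round(d, 2), "practical" if abs(d) > MEI else "trivial")  # -> 2.14 practical
```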
Concentrate on the "importance" of findings. Example: Cohen's d.

|effect size| > |MEI|

Step 9: Decision
Putting it all together:
- Do we have statistical significance? Yes/No (related to p-values)
- Do we have practical significance? Yes/No (related to effect sizes)

The four possible outcomes:
- p < alpha and |ES| > MEI: SIGNIFICANT!
- p < alpha but |ES| < MEI: a trivial finding? (power too high? Type I error? too many subjects?)
- p > alpha but |ES| > MEI: an overlooked finding? (power too low? check the number of subjects)
- p > alpha and |ES| < MEI: NOT SIGNIFICANT
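The four cells of the decision table can be expressed as one small function (the function name and default thresholds are illustrative):

```python
def final_decision(p, effect_size, alpha=0.05, mei=0.5):
    """Map statistical (p vs. alpha) and practical (|ES| vs. MEI)
    significance onto the four decision-table outcomes."""
    statistical = p < alpha
    practical = abs(effect_size) > mei
    if statistical and practical:
        return "SIGNIFICANT"
    if statistical:
        return "trivial finding? (power too high / too many subjects?)"
    if practical:
        return "overlooked finding? (power too low / too few subjects?)"
    return "NOT SIGNIFICANT"

print(final_decision(p=0.01, effect_size=0.9))  # -> SIGNIFICANT
print(final_decision(p=0.20, effect_size=0.1))  # -> NOT SIGNIFICANT
```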
Now, let's see it all in action...
This note was uploaded on 04/07/2008 for the course PSY 031 taught by Professor Dicorcia during the Spring '08 term at Tufts.