Analyzing Experimental Data
Descriptive statistics (means, medians, standard deviations, variances) vs. inferential statistics
Could an observed difference between conditions/groups have occurred by chance?
The effect of error variance.
Need some objective way to determine the likelihood of an observed effect being due to chance: the p-value.
Method for determining the significance of an observed difference
Hypothesis testing
Experimental hypothesis (there's a difference)
Null hypothesis (no difference, or no difference in the expected
direction).
Hypothesis testing
Types of errors in hypothesis testing
Effect sizes
Hypothesis testing is a black-and-white distinction: you either are significant or not.
Usually want to also know the size of the difference: the
EFFECT SIZE
Confidence interval of the difference between two groups
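As a sketch of both ideas, here is one common effect-size measure (Cohen's d, not named in the notes) and the confidence interval of the difference between two group means. The scores are invented for illustration, and the critical t is read from a table for df = 10:

```python
import math

# Hypothetical group scores (illustrative data, not from the course).
group_a = [100, 98, 103, 101, 99, 102]
group_b = [85, 88, 83, 86, 84, 87]

n1, n2 = len(group_a), len(group_b)
m1 = sum(group_a) / n1
m2 = sum(group_b) / n2
var1 = sum((x - m1) ** 2 for x in group_a) / (n1 - 1)
var2 = sum((x - m2) ** 2 for x in group_b) / (n2 - 1)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
d = (m1 - m2) / pooled_sd

# 95% CI of the mean difference: difference +/- critical t * s.e.
se_diff = math.sqrt(var1 / n1 + var2 / n2)
t_crit = 2.228  # two-tailed critical t for df = 10, alpha = .05 (from a t table)
ci = ((m1 - m2) - t_crit * se_diff, (m1 - m2) + t_crit * se_diff)
```

If the CI excludes 0, the difference is significant at that alpha level; the CI also conveys how large the difference plausibly is, which the yes/no hypothesis test alone does not.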
Hypothesis testing of the mean difference between TWO groups
t-test
Analogy: playing darts.
If you observe that the average score of player A is 100 and the average score of player B is 85, what percentage of the time would you expect player B to beat player A?
Need to know the consistency of each player's scores, right???
Draw the two curves and determine what percentage of the time player B's values will be higher than player A's.
Goal is to determine whether the computed difference is
significantly different from 0.
1. Calculate the means of the 2 groups
2. Calculate the s.e. of the mean difference
NOTE: The book is in error here!!! See above for computations.
3. Find the calculated value of t
t is an index of effect size.
Analogous to a z-score, but for small sample sizes.
To find t, it's like finding z-scores.
4. Find the critical value of t
This depends on your alpha level: how different must they be to conclude "statistically significant"?
Use a table (Appendix A2, p. 413).
Must know the df, which is proportional to sample size.
Bigger samples mean a better estimate.
df = n1 + n2 - 2
The sign of t
5. Compare the calculated t with the critical t.
Is the calculated t more extreme (farther from 0) than the critical t?
Computational example: gender and height from class
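The five steps above can be sketched in Python. The heights below are invented stand-ins for the class data, and the critical t is read from a table for df = 10:

```python
import math

# Hypothetical heights (cm); the actual class data is not reproduced here.
men = [178.0, 182.5, 175.0, 180.0, 177.5, 183.0]
women = [165.0, 168.5, 162.0, 170.0, 166.5, 164.0]

# Step 1: means of the two groups.
n1, n2 = len(men), len(women)
m1, m2 = sum(men) / n1, sum(women) / n2

# Step 2: standard error of the mean difference (pooled variance).
var1 = sum((x - m1) ** 2 for x in men) / (n1 - 1)
var2 = sum((x - m2) ** 2 for x in women) / (n2 - 1)
pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
se_diff = math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# Step 3: calculated t.
t_calc = (m1 - m2) / se_diff

# Step 4: critical t at alpha = .05 (two-tailed), df = n1 + n2 - 2 = 10.
t_crit = 2.228  # from a t table

# Step 5: significant if the calculated t is more extreme than the critical t.
significant = abs(t_calc) > t_crit
```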
Within-subject analyses (paired t-test)
Only look at differences within a subject, not between.
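A minimal paired t-test sketch (invented pre/post scores for the same subjects), working only with the within-subject difference scores:

```python
import math

# Hypothetical pre/post scores for the same six subjects (assumed data).
pre = [12, 15, 11, 14, 13, 16]
post = [14, 18, 13, 15, 16, 19]

# Reduce each subject to a single difference score.
diffs = [b - a for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))

# Paired t: mean difference over its standard error; df = n - 1.
t_paired = mean_d / (sd_d / math.sqrt(n))
```

Because between-subject variability is removed, the paired test is typically more powerful than an independent-groups test on the same numbers.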
Robustness of the t-test
The methods discussed assume
underlying distributions are normal
variance in each group is approximately equal
Deviations from these assumptions can invalidate your
conclusions
Analysis of Variance (App. C).
The problem: inflated Type I error from running lots of t-tests.
2 levels, 1 t-test: alpha = .05
3 levels, 3 t-tests (AB, AC, BC): alpha = .14
4 levels, 6 t-tests (AB, AC, AD, BC, BD, CD): alpha = .26
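Those alpha values follow from the familywise error formula, assuming the comparisons are independent:

```python
# Familywise Type I error for k tests each run at alpha = .05:
# P(at least one false positive) = 1 - (1 - alpha)^k.
alpha = 0.05
familywise = {k: 1 - (1 - alpha) ** k for k in (1, 3, 6)}
# familywise[3] rounds to .14 and familywise[6] to .26, matching the notes.
```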
ANOVA checks for the presence of any difference due to IV levels.
All levels checked simultaneously
Two outcomes: the IV had no effect, or some level of the IV had an effect on the DV.
Hypothesis testing
H0: μ1 = μ2 = μ3 = ... = μk
Ha: not (H0: μ1 = μ2 = μ3 = ... = μk)
When you reject the null hypothesis, H0, you do not know which of the means were different from which others
Post hoc tests required
Conceptually  how it works
Check if between-group variance is larger than within-group variance
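A rough sketch of that comparison with made-up data for three levels of an IV: the F ratio divides the between-group mean square by the within-group mean square.

```python
# One-way ANOVA F-ratio: between-group variance over within-group variance.
# Hypothetical scores for three levels of an IV (assumed, not course data).
groups = [
    [4, 5, 6, 5],   # level A
    [7, 8, 9, 8],   # level B
    [4, 6, 5, 5],   # level C
]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group SS: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-group SS: spread of scores around their own group mean.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)       # df_between = k - 1
ms_within = ss_within / (n_total - k)   # df_within = N - k
f_ratio = ms_between / ms_within
```

A large F means the group means differ more than the noise within groups would predict; F near 1 is what the null hypothesis expects.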