SE 10 Research Design
Lecture 3: Correlation and Internal Validity

What is research for?
- We try to find out if two concepts are related.
- Already accomplished: careful definition of terms; measurement of variables.
- For today: start with a prediction about the relationship between variables, and consider the potential relationships.
Hypotheses
- A statement about the predicted relationship between two or more variables.
- In statistics we use a formal statement regarding the exact values of the variables; here, we'll use words.
- Based on research, theory, or a hunch.

Each project has two hypotheses
- Null hypothesis (H0): the hypothesis of no effect. "A does not affect B; no change in the mean value of B."
- Research hypothesis (H1): what you think will happen. "When A increases, B will increase/decrease; A will be associated with a change in B."

What do you say about hypotheses at the end of the project?
- You first talk about the null hypothesis. You can never prove or disprove it: you REJECT the null hypothesis, or you FAIL TO REJECT the null hypothesis. "Fail to reject" does not mean "accept."
- Then you move to the research hypothesis. You can never prove or disprove it either: you find support for the research hypothesis, or you do not find support for it.
Were you correct?
- There is an unknown truth about the relationship in "real life": the null hypothesis is either true or untrue.
- We can never be 100% certain we were right about our conclusions. Rejecting or failing to reject the null hypothesis can be an error, depending on the real-world truth.
Types of errors
- If in the real world H0 is true (no relationship between the variables), rejecting H0 would be an error: claiming there is a relationship when there isn't.
- If in the real world H0 is false (there is a relationship between the variables), failing to reject H0 would be an error: you weren't able to detect a true relationship in your study.
We have names for these errors
- Type I error: rejecting the null hypothesis when in reality it is true.
- Type II error: not rejecting the null hypothesis when in reality it is false.

Real world vs. your decision
- H0 true (A does not affect B), your study finds no effect (H0 not rejected): CORRECT; P = 1 - α
- H0 true (A does not affect B), your study shows an effect (H0 rejected): TYPE I ERROR; P = α (the researcher sets α)
- H0 false (A does affect B), your study finds no effect (H0 not rejected): TYPE II ERROR; P = β
- H0 false (A does affect B), your study shows an effect (H0 rejected): CORRECT; P = 1 - β, called the "power" of the test

What is α?
- Statistics come with a p-value: the probability that the findings from your sample are a result of error.
- The researcher decides what probability he/she is willing to accept, typically .05 or .01; α is "set" to that level.
- If p ≤ α, you will reject the null hypothesis.

What is β?
- β is the likelihood of accepting (failing to reject) the null hypothesis when it is wrong.
- 1 - β is the "power" of a test: the likelihood of finding a true effect.
- β is affected by many things: sample size, variability, α, and effect size.
- If you set α higher (accepting more Type I error), β will be lower (less Type II error): there is a trade-off between Type I and Type II errors.
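The error table and the α/power distinction can be checked by simulation. The sketch below is not from the lecture: it assumes a two-sided z-test with a known population standard deviation, and the sample size, effect size, and trial count are arbitrary choices for illustration.

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided p-value for H0: population mean == mu0,
    assuming a known population sd (sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # erfc(|z|/sqrt(2)) equals 2 * (1 - Phi(|z|)), the two-sided tail area.
    return math.erfc(abs(z) / math.sqrt(2))

def rejection_rate(true_mean, alpha=0.05, n=30, trials=2000, seed=1):
    """Fraction of simulated studies that reject H0 at level alpha."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        if z_test_p_value(sample) <= alpha:
            rejections += 1
    return rejections / trials

# When H0 is really true (true_mean = 0), the rejection rate is the
# Type I error rate and should sit near alpha = .05.
type_i_rate = rejection_rate(true_mean=0.0)

# When H0 is really false (here, a true effect of 0.5), the rejection
# rate estimates the power of the test, 1 - beta.
power = rejection_rate(true_mean=0.5)
```

Rerunning with a larger `n` shows the lecture's point about sample size: power rises while the Type I rate stays pinned near α.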
Hypotheses, correlation, and causation
- You need to understand hypotheses to talk about correlation and causation.
- You can improve support for your hypothesis; you can't prove it.
- Research is typically looking to find causes for phenomena: theory talks about causes, hypotheses talk about association.

Correlation
- "Co-relate": do the variables change together? When X goes up, does Y go up (positive correlation) or down (negative correlation)?
- If you know X, do you have enough information to predict Y?
- The correlation coefficient (r) ranges from -1 to 1.
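A minimal sketch of the correlation coefficient, using only the standard library; the study-hours and exam-score numbers are invented purely for illustration.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient; always falls between -1 and 1."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]      # hypothetical X
exam_score = [52, 60, 71, 80, 88]    # hypothetical Y
r = pearson_r(hours_studied, exam_score)  # near +1: strong positive

r_neg = pearson_r([1, 2, 3], [3, 2, 1])   # near -1: perfect negative
```

Knowing X here gives substantial information for predicting Y, which is exactly what a large |r| means.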
CORRELATION DOES NOT MEAN CAUSATION
Requirements for Causation
- Covariation (correlation): just because A and B are correlated does not mean they are causally related, but correlation is required to prove causation.
- Proper time order: A must occur in time before B.
- Ruling out alternative hypotheses: other reasons for the relationship between A and B.
Many different reasons for a relationship between A and B
- A and B covary. We want to know if A -> B (A causes B). But what if instead: A <- B?
Reverse Causation
- A -> B and B -> A are both examples of direct causation.
- Reverse causation (B -> A) explains the correlation between A and B without A causing B.
Reciprocal Causation
- A <-> B: a feedback loop in which A causes B and B causes A ("the chicken or the egg?").
Spurious Causation
- C is a "confounding variable": C causes A and C causes B (A <- C -> B), explaining the correlation between A and B.
- C must occur in time before A and before B.
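Spurious correlation is easy to reproduce in a simulation. This sketch is not from the slides: a hypothetical confounder C drives both A and B, there is no A -> B link at all, and yet A and B correlate strongly; statistically removing C's influence makes the association vanish.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(ys, xs):
    """The part of each y left over after a straight-line fit on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return [y - (my + b * (x - mx)) for x, y in zip(xs, ys)]

rng = random.Random(42)

# Confounder C causes both A and B; A never influences B directly.
C = [rng.gauss(0, 1) for _ in range(2000)]
A = [c + rng.gauss(0, 0.5) for c in C]
B = [c + rng.gauss(0, 0.5) for c in C]

r_ab = pearson_r(A, B)  # substantial correlation, purely spurious

# Control for C: correlate what is left of A and B once C is removed.
r_partial = pearson_r(residuals(A, C), residuals(B, C))  # near zero
```

This is the statistical version of "measure the other factors": once the confounder is in the model, the spurious A-B association disappears.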
Measurement Association
- L is a latent variable or "construct"; A and B both measure L (remember the conceptualizing process).
- A and B are related only because of L (A <- L -> B).
Indirect Causation
- I is an "intervening variable" or "mediating variable": A causes I and I causes B (A -> I -> B), explaining the correlation between A and B.
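The causal chain can also be simulated. A minimal sketch (not from the slides, with arbitrary noise levels): A influences B only through the intervening variable I, yet A and B still correlate, which is why correlation alone cannot distinguish direct from indirect causation.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(7)

# Hypothetical chain A -> I -> B: A affects B only through I.
A = [rng.gauss(0, 1) for _ in range(2000)]
I = [a + rng.gauss(0, 0.5) for a in A]   # A causes I
B = [i + rng.gauss(0, 0.5) for i in I]   # I causes B

r_ab = pearson_r(A, B)  # A and B correlate via the intervening variable
```

Both this chain and the confounder above produce a correlated A and B; only measuring the third variable and the time order lets you tell them apart.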
How do you find out what kind of relationship you found?
- Rule out alternative hypotheses.
- In the research design stage, before you start collecting data, think of other factors that might be related, and of other factors that some people will try to claim are related.
- Measure these other factors and use statistics to see the relationship.
It is important to think of these factors before you run results
- You often cannot go back and collect more data.
- Even when you include all the other factors you think might be related, you still cannot prove your hypothesis.
- You cannot test for every possible factor, just a lot of plausible ones; testing many factors improves support for your hypothesis.
So you "proved" causation
- Research is always evolving: someone will think of something you didn't test, and as society changes, causation may change.
- Statistical significance vs. social significance: researchers often use statistics only to find out whether there is an effect, but you can also find out how large the effect is, which may be more important.
How sure are you?
- How do we know if we have found a causal relationship?
- Good research methods improve your certainty: a clear question and definition of terms, valid measurement, and study design.

Study design
- Within-subjects design: compare measurements at time 1 to measurements at time 2.
- Between-subjects (groups) design: compare measurements taken from two or more groups.

Internal Validity
- How sure are you that the results of your study are due to a true causal relationship?
- Can you draw conclusions about the relationship between the independent variable (the "cause") and the dependent variable (the outcome)?
How do we determine internal validity?
- Is our conclusion plausible? Are there other explanations for our results?
- Consider the subjects (discussed later) and the study design.
- Can we rule out rival hypotheses?

Ruling out rival hypotheses
- The relationship between your variables can be direct, reverse, reciprocal, spurious, or indirect.
- Other unknown variables may be responsible for the relationship.

Beginning to evaluate a study
- What outcomes are we looking at? What are we comparing? Assume these are correlated.
- What other variables might be related? What type of relationship might they have?
- Can you measure these variables?
Low Internal Validity
- Finding multiple other variables, or other types of relationships, that could be responsible for your results may mean you have low internal validity.
- If something else may be causing the results, you can't be certain about a causal conclusion.
What could this "something else" be?
- Something happened between the measurements at time 1 and time 2.
- Something about the groups you were comparing was already different.
- These are called threats to internal validity.

Threats to Internal Validity
- Time threats: maturation, history, instrumentation, test reactivity
- Group threat: selection
- Group-and-time threats: mortality/attrition (selective attrition), selection by time, regression toward the mean
Maturation
- Time threat: something happens between measurements.
- The subjects you are measuring are naturally changing.
- What you're studying (the program, intervention, or variable of interest) is not causing the change; it's just happening.
History
- Time threat: something happens between measurements.
- An event occurs between measurements that changes the score: natural disasters, political changes, major case decisions.
- The major event causes the change in measurement, not what you are studying.
Instrumentation
- Time threat: something happens between measurements.
- Observers become more trained (better) or fatigued (worse); for physical measurements, calibration changes.
- Measurement error in a predictable direction causes the change in score, not what you're studying.
Test Reactivity
- Time threat: something happens between measurements.
- The test you give subjects at the beginning affects their answers later, often because it clues them in to what you're studying or what your hypothesis is.
- The change in scores is due to test reactivity, not what you're studying.
Selection
- Group threat: the subjects are causing the problem.
- There is something different about your groups to start with.
- The intervention or variable of interest might have an impact for one group but not the other, or initially different levels of the dependent variable mask the results.
Selection by time
- A special case of the selection threat.
- You start with groups that are different on an important variable, and this difference gets larger or smaller over time.
- This can either mask or exaggerate results.

...
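One of the listed group-and-time threats, regression toward the mean, can be illustrated with a small simulation. This sketch is not from the slides and uses invented numbers: subjects have a stable true score plus independent measurement noise at each testing, and we select the highest scorers at time 1.

```python
import random

rng = random.Random(3)

# Each subject has a stable true score; each testing occasion adds
# independent measurement noise (all numbers are hypothetical).
n = 5000
true_scores = [rng.gauss(100, 10) for _ in range(n)]
time1 = [t + rng.gauss(0, 10) for t in true_scores]
time2 = [t + rng.gauss(0, 10) for t in true_scores]

# Select the 10% of subjects who scored highest at time 1.
top = sorted(range(n), key=lambda i: time1[i])[-n // 10:]
mean_t1 = sum(time1[i] for i in top) / len(top)
mean_t2 = sum(time2[i] for i in top) / len(top)

# With no intervention at all, the selected group's time-2 mean falls
# back toward the population mean of 100: extreme time-1 scores were
# partly lucky measurement noise that does not repeat.
```

A study that enrolls subjects because of extreme scores and then remeasures them can mistake this fallback for a treatment effect.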
Summer '08