Quantitative Research Design II
CJ 300, Dr. Kierkus, February 2011

Cause and Effect
Sometimes one observes a relationship (or correlation) between two variables, but this does not necessarily mean they are causally linked. For instance, ice cream sales are correlated with sexual assault, yet banning ice cream sales will not decrease sexual assault! To establish a true causal effect one must establish three things. Association: x must be correlated with y. Temporal ordering: x must happen before y. No alternative explanations (lack of spuriousness): a third variable z must not cause changes in both x and y.

Eliminating Alternatives
When doing research we try to control for plausible alternatives; that is, to make sure it's really our independent variable that is influencing our dependent variable. There are two ways to do this. Statistically: define and measure all possible confounding variables. With research design: use experimental designs that eliminate confounding variables. Using a research design is better, but often impractical in criminology.

Types of Causal Relationships
Positive relationship: As the level of the cause goes up, so does the level of the effect (exposure to violent television / violent crime). Negative relationship: As the level of the cause goes up, the level of the effect goes down (parental supervision / delinquency). Reciprocal relationship: "vicious cycle" relationships; these are the most difficult to model (delinquent peers / delinquent behavior).

Operationalization
Once you have developed a testable hypothesis, the next step is to turn the concepts in the hypothesis into variables we can measure. This process is called operationalization. Before we can operationalize a variable we must write a conceptual definition. This is like a "dictionary definition." It should be as explicit as possible! Take nothing for granted!

Operationalization Example
Sample hypothesis: Individuals with lower levels of impulse control are more likely to commit violent crime than those with higher levels of impulse control. Our key concepts are impulse control and violent crime. We must first come up with conceptual definitions for these terms. We may not always agree on the definitions, but if they're explicit, we'll know what a particular researcher is testing.

Operationalization
Once we've developed the conceptual definitions we must turn them into operational definitions. These definitions explain exactly how we intend to measure our concepts. In survey research, we will use survey questions to do our measurement. In other types of research, an operational definition might be a procedure for observing something, or a method of recording data. Don't take this for granted either! Try to come up with a good operational definition for how one can tell whether someone is male or female.

Operationalization
Fortunately, operational definitions for many key concepts in criminology / criminal justice already exist. It's usually easier to use existing operational definitions than to create your own. The critical thing about conceptual and operational definitions is that they match: when we do empirical research, we test our hypotheses in terms of our operational definitions, and if those don't match the conceptual definitions we have a serious problem! If our study "fails," it may be because of poor measurement, not because the theory is flawed.

Reliability and Validity
So how can one tell if operationalizations are any "good"? "Goodness" is assessed on two key criteria. Validity: whether our operationalizations actually measure what they claim to measure. Reliability: whether we will obtain the same results in repeated measurements.

Assessing Validity
There are several ways we can assess whether a measure is valid. Face validity: a subjective evaluation made by experts. Content validity: does the measure capture the entire concept? For example, if the concept is "strength," using only a person's maximum bench press may have poor content validity. Criterion validity: an established criterion is used to validate your proposed measure. For instance, people who score well on the SAT should do well in university; if they don't, there is a criterion validity problem with the SAT (i.e., it isn't a valid measure of scholastic aptitude).

Assessing Validity
Internal validity: exists if all of the rules for good quantitative research have been followed. For example, failing to write a good conceptual definition prior to creating operational definitions could harm internal validity. External validity: exists if the results of a given study apply outside that research setting. For example, if you conduct a study in Grand Rapids and find that religiosity is strongly associated with crime, this finding might not apply elsewhere; that would constitute an external validity problem. There is often a trade-off between internal and external validity.

Assessing Reliability
Test-retest reliability: Does the measure yield the same results in repeated tests? For instance, does a scale show the same number each time you step on it? Representative reliability: Does the measure yield the same results across different groups of subjects? For example, is IQ a fair test of intelligence for people of all ethnic groups and social classes? Equivalence reliability: Different measures of the same concept should yield the same results. For instance, we could assess strength using the maximum amount someone can lift on a variety of weight training exercises. If people who do really well on one measure (say, bench press) don't necessarily do well on another (say, leg press), this suggests we may actually be dealing with two different concepts (i.e., upper body strength and lower body strength).

Validity and Reliability: A Summary
A good way to understand validity and reliability is the bull's-eye analogy. All shots hit the bull's-eye: good reliability and good validity. All shots in a tight group outside the bull's-eye: good reliability, but poor validity. Shots in a loose group centered on the bull's-eye: acceptable validity, but poor reliability. Shots all over the place: poor reliability and poor validity. Note that a measure can be perfectly reliable but completely invalid, and that poor reliability hurts validity.

Scales of Measurement
For the purposes of quantitative research we want to measure variables at the highest scale of measurement possible. We can convert a higher order measure to a lower order one, but we can't do the opposite. The higher the scale of measurement, the more powerful the statistical analysis one can do. There are four basic scales of measurement.

Scales of Measurement
Nominal: Qualitative variables with number codes assigned to them (e.g., religion). The numbers are arbitrary. Ordinal: One can rank the responses in a meaningful way (i.e., the numbers are not arbitrary), but one doesn't know whether the distances between rankings are equal, so you can't perform arithmetic operations on these variables. Consider a variable that measures your opinion of your CJ 300 professor on a scale of 1 to 5, 5 being the best. How can you be sure the difference between 1 and 2 is the same as the one between 4 and 5?

Scales of Measurement (continued)
Interval: One can rank order the responses and, additionally, one knows that the distances between ranked categories are equal. This enables you to add and subtract values and get meaningful results. Ratio: Categories can be ranked, the distances between rankings are equal, and the zero point of the variable is not arbitrary. This enables you to add, subtract, multiply, and divide values and get meaningful results. These distinctions are tough to appreciate without some examples:

Example to Illustrate Scales of Measurement
The degrees Fahrenheit and degrees Celsius scales are interval (not ratio). You can say that 70°F is 35 degrees hotter than 35°F (70 − 35 = 35), but you can't conclude that it is twice as hot (70 / 35 = 2). Notice what happens when we look at the same temperatures on the Celsius scale: 70°F ≈ 21°C and 35°F ≈ 2°C, so the same "ratio" now appears to be 21 / 2 ≈ 10-fold! This occurs because the zero point on both scales is arbitrary: it doesn't mean a total absence of heat.

Example to Illustrate Scales of Measurement
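The temperature example can be checked with a few lines of Python (the conversion formula is the standard one; the ≈10-fold figure above comes from rounding 21.1°C and 1.7°C to whole degrees):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# Differences survive the change of units (the interval property)...
print(70 - 35)                    # 35 degree gap in Fahrenheit
print(f_to_c(70) - f_to_c(35))    # ~19.4 degree gap in Celsius (same gap, rescaled)

# ...but ratios do not, because 0 on each scale is arbitrary:
print(70 / 35)                    # 2.0 -- "twice as hot" in Fahrenheit
print(f_to_c(70) / f_to_c(35))    # ~12.7 in Celsius -- the ratio is not preserved
```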
On the other hand, when you measure speed in either miles per hour (MPH) or kilometers per hour (KPH) you are using ratio scale variables. Zero means zero (i.e., a total absence of motion), so all four arithmetic operations make sense. A car traveling 300 KPH is moving twice as fast as one traveling 150 KPH (300 / 150 = 2). If we convert to miles per hour we draw the same conclusion: 300 KPH ≈ 186 MPH and 150 KPH ≈ 93 MPH, and a car traveling 186 MPH is moving twice as fast as one traveling 93 MPH (186 / 93 = 2).

Why Should We Care about Scales of Measurement?
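By contrast, the speed example behaves well under unit conversion, because zero speed is a true zero. A minimal sketch (the conversion constant is the standard 0.621371 miles per kilometer):

```python
KPH_TO_MPH = 0.621371  # standard kilometers-to-miles conversion factor

def kph_to_mph(kph):
    """Convert kilometers per hour to miles per hour."""
    return kph * KPH_TO_MPH

# Ratio scales preserve ratios across unit conversions:
print(300 / 150)                          # 2.0 in KPH
print(kph_to_mph(300) / kph_to_mph(150))  # 2.0 in MPH as well
```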
Simple arithmetic is not the only thing that depends on the scale of measurement. When you study statistics, you learn that specific statistical tests are only appropriate for data measured at a particular scale. If you use the wrong statistical test, your conclusions can be completely wrong! For instance, think about what happens if we assign a numerical code for eye color to everyone in the class (a nominal scale measure) and then try to compute the average (mean) eye color of the class. The result is meaningless, because the mean is a statistic that requires at least interval level data. If we want a measure of central tendency for nominal level data we must use the mode. Similarly, chi-square (χ²) is appropriate for any level of data (like the mode), but t-tests require at least interval level data (like the mean).

Summary
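The eye-color illustration above can be run directly. In this hypothetical sketch the codes (1 = brown, 2 = blue, 3 = green) and the class data are invented; Python's statistics module supplies both the mean and the mode:

```python
import statistics

# Hypothetical nominal data: eye-color codes for a class of ten.
# 1 = brown, 2 = blue, 3 = green (the numbers are arbitrary labels).
eye_color = [1, 1, 2, 3, 1, 2, 1, 3, 1, 2]

# The mean is computable but meaningless for nominal data:
print(statistics.mean(eye_color))   # 1.7 -- but what color is "1.7"?

# The mode is the appropriate measure of central tendency here:
print(statistics.mode(eye_color))   # 1 -- brown is the most common code
```

The arithmetic succeeds either way; it is the scale of measurement, not the software, that tells you which result means anything.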
Just because two variables are correlated doesn't mean there is a causal relationship between them! Before we can do quantitative research we must conceptually define, then operationalize, our variables. Validity and reliability are measures of how "good" our operationalizations are. When creating variables we should always try to create data on the highest scale of measurement possible.