halves of the sample have been administered slightly different versions of the questions

Interobserver reliability – when similar measurements are obtained by different observers rating the same persons, places, or events

15. List and briefly describe the four ways you can improve measurement reliability. (lecture)
1. Check the operational definition: Do you have all the theoretically relevant components of the construct?
2. Increase the level of measurement (sometimes helps): Make the measure more precise.
3. Use multiple indicators of a variable: Two or three measures of the same thing are better than one.
4. Pretest the measure: Remove or replace questions that produce inconsistent results.

16. Be able to define the key terms for this chapter. (see Key Terms)

Alternate-forms reliability – a procedure for testing the reliability of responses to survey questions in which subjects' answers are compared after the subjects have been asked slightly different versions of the questions, or when randomly selected halves of the sample have been administered slightly different versions of the questions
Closed-ended (fixed-choice) question – a survey question that provides preformatted response choices for the respondent to circle or check

Concept – a mental image that summarizes a set of similar observations, feelings, or ideas

Conceptualization – the process of specifying what we mean by a term. In deductive research, conceptualization helps to translate portions of an abstract theory into testable hypotheses involving specific variables. In inductive research, conceptualization is an important part of the process used to make sense of related observations.

Concurrent validity – the type of validity that exists when scores on a measure are closely related to scores on a criterion measured at the same time

Constant – a number that has a fixed value in a given situation; a characteristic or value that does not change

Construct validity – the type of validity that is established by showing that a measure is related to other measures as specified in a theory

Content analysis – a research method for systematically analyzing and making inferences from text

Content validity – the type of validity that exists when the full range of a concept's meaning is covered by the measure

Criterion validity – the type of validity that is established by comparing the scores obtained on the measure being validated to those obtained with a more direct or already validated measure

Exhaustive – every case can be classified as having at least one attribute or value for the variable

Face validity – the type of validity that exists when an inspection of the items used to measure a concept suggests that they are appropriate "on their face"

Index – a composite measure based on summing, averaging, or otherwise combining the responses to multiple questions that are intended to measure the same concept
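The "Index" definition above describes a concrete computation (summing or averaging responses to multiple items measuring one concept). A minimal sketch of both versions, using hypothetical survey data on a 1-5 agreement scale:

```python
# Hypothetical data: one respondent's answers to three items
# intended to measure the same concept (1-5 agreement scale).
responses = {
    "q1": 4,
    "q2": 5,
    "q3": 3,
}

# Sum-based index: total score across the items.
index_sum = sum(responses.values())

# Average-based index: mean across the items, which keeps the
# composite on the same 1-5 scale as the individual questions.
index_avg = index_sum / len(responses)

print(index_sum, index_avg)  # → 12 4.0
```

The averaging version is often preferred when some respondents skip items, since a sum would otherwise understate their score.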