
15. List and briefly describe the four ways you can improve measurement reliability. (lecture)
16. Be able to define the key terms for this chapter. (see Key Terms)
Key Terms

Closed-ended (fixed-choice) question – a survey question that provides preformatted response choices for the respondent to circle or check
Concept – a mental image that summarizes a set of similar observations, feelings, or ideas
Conceptualization – the process of specifying what we mean by a term. In deductive research, conceptualization helps to translate portions of an abstract theory into testable hypotheses involving specific variables. In inductive research, conceptualization is an important part of the process used to make sense of related observations.
Concurrent validity – the type of validity that exists when scores on a measure are closely related to scores on a criterion measured at the same time
Constant – a number that has a fixed value in a given situation; a characteristic or value that does not change
Construct validity – the type of validity that is established by showing that a measure is related to other measures as specified in a theory
Content analysis – a research method for systematically analyzing and making inferences from text
Content validity – the type of validity that exists when the full range of a concept’s meaning is covered by the measure
Criterion validity – the type of validity that is established by comparing the scores obtained on the measure being validated to those obtained with a more direct or already validated measure
Exhaustive – every case can be classified as having at least one attribute or value for the variable
Face validity – the type of validity that exists when an inspection of the items used to measure a concept suggests that they are appropriate “on their face”
Index – a composite measure based on summing, averaging, or otherwise combining the responses to multiple questions that are intended to measure the same concept
Interitem reliability (internal consistency) – an approach that calculates reliability based on the correlation among multiple items used to measure a single concept (see the sketch after this list)
Interobserver reliability – when similar measurements are obtained by different observers rating the same persons, places, or events
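The definitions of an index and of interitem reliability lend themselves to a short worked example. The Python sketch below is not part of the original study guide: it uses hypothetical survey responses to build an index by summing three items, and it estimates interitem reliability with Cronbach's alpha, a common internal-consistency statistic. The sample data, variable names, and the use of population variance are illustrative assumptions.

# Minimal sketch (hypothetical data): build an index from three survey items
# and estimate interitem reliability with Cronbach's alpha.

from statistics import pvariance

# Hypothetical responses: each inner list is one respondent's answers to
# three survey items intended to measure the same concept.
responses = [
    [4, 5, 4],
    [2, 1, 2],
    [5, 5, 4],
    [3, 3, 2],
    [1, 2, 1],
]

# Index: combine the items into a single composite score per respondent.
index_scores = [sum(person) for person in responses]

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
k = len(responses[0])                          # number of items
items = list(zip(*responses))                  # regroup data as one tuple per item
item_variances = [pvariance(item) for item in items]
total_variance = pvariance(index_scores)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

print("Index scores:", index_scores)
print("Cronbach's alpha:", round(alpha, 2))

Alpha values closer to 1 indicate that the items hang together more consistently; a common rule of thumb treats values of roughly 0.70 or higher as acceptable internal consistency, though conventions vary by field.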
