PSYC 310 - Chapter 3: Defining & Measuring Variables


Why measure?
- comparison
- classification
- prediction
- program evaluation
- decision-making
- diagnosis

Variables
Variables may be directly observable, or inferred internal states (constructs).

Constructs
Presumed unobservable internal mechanisms that account for externally observed behaviour (a change in the environment leads to observable behaviour).
e.g., anxiety, self-esteem, motivation, aggression, intelligence.
- ABSTRACT (unobservable): stress
- CONCRETE (observable): heart rate, perspiration, error rate

Operational Definitions
A precise description of what you will measure, how you will measure it, and when you will measure it (the testing protocol). It defines the "operations" that allow us to confidently link the unobservable construct with the observable behaviour, converting an abstract entity into a concrete variable that can be directly observed and measured.
Good operational definitions:
- are clear & precise (for the tester and for others)
- make replication possible
CAREFUL! The operational definition is not the same as the construct.
An operational definition describes the measurement procedures and must meet two important criteria:
- VALIDITY: the accuracy of the measure
- RELIABILITY: the consistency of the measure

Validity
A valid test measures what it is supposed to measure. There are 5 types of validity:
(1) Face validity
(2) Predictive validity
(3) Concurrent validity
(4) Internal validity
(5) External validity

Face Validity
The extent to which the measurement appears at first glance to be a plausible measure of the variable. Does it make intuitive sense? Would others agree?
Ex: measuring shyness; an IQ test built from logic, reasoning, math, and grammar items.

Concurrent Validity
The extent to which a measure relates to other existing measures of the same construct: is the new test correlated with an established test? A low correlation means the measure may not be capturing the construct in a valid manner.
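Concurrent (and predictive) validity is typically quantified with a correlation coefficient. A minimal sketch in Python: the score lists are hypothetical, and pearson_r is a hand-rolled helper written here for illustration, not a function from any particular library.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: an established anxiety scale vs. a new, shorter scale
established = [12, 18, 25, 31, 40, 44]
new_test    = [10, 20, 22, 35, 38, 47]

r = pearson_r(established, new_test)
print(f"r = {r:.2f}")  # a high r is evidence of concurrent validity
```

A correlation near 1 would support the new test; a low correlation would suggest it is not capturing the same construct.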
Predictive Validity
The strength of the relationship (correlation) between two variables: can you use one (the predictor) to predict the other (the criterion)?
- optimism (predictor) -> perseverance (criterion)
- neuroticism -> stress
- intelligence -> GPA

Internal Validity
The extent to which you can safely say that the changes in X have caused the observed changes in Y. Internal validity depends on appropriate control of other variables (confounding factors).
Ex: a 4-week yoga program -> stress.

External Validity
The extent to which your results can generalize to other settings and populations. If the effect remains even with different groups, then your test has good external validity.

Reliability
The consistency of a measure over repeated applications under the same conditions. How reproducible are the results? Does your scale give the same reading every time you step on it? Measurement is always somewhat variable (error, lack of precision), but this variability should be as small as possible.

Types of Reliability
- Test-retest reliability: repeat the same measurement and calculate the correlation between the two sets of scores (the reliability coefficient).
- Inter-rater reliability: compare the scores from two raters and calculate the correlation.
[Figure: scatterplot of scores at time #1 vs. time #2, illustrating high test-retest reliability]
[Figure: scatterplot of rater #1 vs. rater #2 scores, illustrating low inter-rater reliability]
- Split-half reliability: typically used for clinical scales and questionnaires (internal consistency). Take the scores from half the items and correlate them with the scores from the remaining half; they should correlate highly.

Measurement Error
Observed scores may not be a true reflection of the variable or construct being measured; there is always some degree of error present.
Observed score = true score + error

Sources of Error
Measurement error can come from four sources:
1. The participant: mood, motivation, fatigue, health, memory, practice, knowledge, ability
2. The instrumentation/apparatus: sensitivity, clarity of instructions, appropriateness, length, vocabulary, intrusiveness
3. The testing environment: comfort (room too hot? too cold?), presence of others (social facilitation), distractions (noise, interruptions)
4. The scoring guidelines: clear? easy to follow? complex? experience required? individual differences?

Reducing Error
To reduce error as much as possible, one must try to minimize the effects of possible confounds. This is achieved through "standardization" of:
- participants
- the test protocol
- the environment
- scoring procedures

Standardizing Participants
Decide ahead of time what your inclusion and exclusion criteria will be: age, gender, educational level, health status, ethnicity.

Standardizing the Test Protocol
Variations in the testing protocol can affect the results, so it needs to remain consistent:
- instructions to participants
- treatment of participants
- administration of tests/measures
- order of tests/measures

Standardizing the Environment
Choose the environment most conducive to testing and try to repeat it in the future. Be sure to take note of factors such as: time of day, day of the week, time of year, temperature, noise level, accessibility.

Standardizing Scoring
Scoring should be as objective as possible, and marking criteria as clear and precise as possible, for both participants and raters.
Do a few practice runs to become familiar with the scoring procedures, especially when many raters are used. Allow participants some practice sessions prior to recording scores.

Reliability Coefficient
You can never completely eliminate error, therefore you must account for it. The reliability coefficient is the ratio of true-score variance to observed-score variance:

r = S²true / S²observed = (S²observed − S²error) / S²observed
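The decomposition "observed score = true score + error" and the reliability coefficient built from it can be checked with a quick simulation. All numbers here are invented for illustration (a true-score SD of 15 and an error SD of 5):

```python
import random

random.seed(1)

# Simulate 1,000 observed scores: observed = true + error
true_scores = [random.gauss(100, 15) for _ in range(1000)]  # true-score SD = 15
errors      = [random.gauss(0, 5) for _ in range(1000)]     # error SD = 5
observed    = [t + e for t, e in zip(true_scores, errors)]

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Reliability coefficient: ratio of true-score variance to observed-score variance
r = variance(true_scores) / variance(observed)
print(f"reliability = {r:.2f}")  # theoretical value: 15**2 / (15**2 + 5**2) = 0.90
```

With independent error, observed variance is roughly the sum of true-score and error variance, so the larger the error variance, the further the coefficient falls below 1.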
Reliability Coefficient (cont.)
The reliability coefficient reflects the degree to which the measurement is free of error variance. An acceptable reliability coefficient is .80 or above.

Validity & Reliability
A measurement procedure must be reliable (consistent) in order to be valid, but a measurement procedure can be reliable and not necessarily valid.

Types of Measurement
Measurement can be either qualitative or quantitative.

Quantitative Measures
In quantitative assessment, we assign a numerical value to a variable. A variable is anything that can take different values (a minimum of 2 values). Depending on the properties of the variable, different scales of measurement can be used.

Nominal Scale
Used for categorization; no inherent order. Ex: gender, group, age category, education level.

Ordinal Scale
Used for ranking; has order + magnitude. Ex: height = short, medium, tall; strength = weak, strong.

Interval Scale
Values have order + magnitude + equal intervals between values, but NO true zero point. Ex: temperature in °C.

Ratio Scale
Values have order + magnitude + equal intervals between values, AND a true zero point. Ex: distance, length, reaction time.

Measurement Scales
  Scale     | Names | Order | Eq. intervals | True zero
  Nominal   |   X   |       |               |
  Ordinal   |   X   |   X   |               |
  Interval  |   X   |   X   |       X       |
  Ratio     |   X   |   X   |       X       |     X

Modalities of Measurement
- Self-report measures: direct but subjective (social desirability).
- Physiological measures: objective but invasive; costly, time-consuming.
- Behavioural measures: interpretation; clusters better ...
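A practical consequence of the scale hierarchy is which descriptive statistics are meaningful at each level, with each scale inheriting the statistics of the one below it. The helper below is a hypothetical sketch written for this note, not part of the course material:

```python
# Which descriptive statistics each scale of measurement supports;
# each scale adds to the permissible statistics of the previous one.
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["mode", "frequency counts", "median", "percentiles"],
    "interval": ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation"],
    "ratio":    ["mode", "frequency counts", "median", "percentiles",
                 "mean", "standard deviation", "ratios of scores"],
}

def can_use(scale, statistic):
    """True if `statistic` is meaningful for data measured on `scale`."""
    return statistic in PERMISSIBLE_STATS[scale]

print(can_use("ordinal", "median"))  # True:  ranks have order
print(can_use("nominal", "mean"))    # False: categories have no magnitude
```

This is why, for example, it makes sense to report a median height category (ordinal) but not a mean gender (nominal).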

This note was uploaded on 04/29/2010 for the course PSYCH 310 taught by Professor T.bianco during the Winter '10 term at Concordia Canada.
