
# Chp. 4: Predictors (13 Sep)



Predictors: Psychological Assessments

A predictor is any variable used to forecast a criterion.

Why do we need predictors in I/O?
o Selection is about prediction: forecasting who is likely to succeed in a job based on available data
o If we already knew who was going to be a good performer, we wouldn't need predictors
o Examples?

Predictor variables are assessed in terms of:
o Reliability
o Validity

Four different types of reliability:

1. Test-retest reliability
o Stability of test scores upon repeated administrations of the test
o Uses a coefficient of stability: the correlation of scores over time (e.g., IQ at time 1 with IQ at time 2)
o The higher, the better (hopefully > .70)

2. Equivalent-form reliability
o Equivalence of test scores between two versions of a test
o Uses a coefficient of equivalence: the extent to which two test forms (e.g., Exam 1, Form A vs. Form B) measure the same concept

3. Internal-consistency reliability
o Analysis of the homogeneity of a test
o Two ways to compute:
   - Split-half reliability (e.g., correlate the odd-numbered items with the even-numbered items)
   - Cronbach's alpha: correlate the response to each item with the response to every other item
o The higher, the better (hopefully > .70)

4. Inter-rater reliability
o Agreement between two or more raters
o Example: multiple interviewers evaluating the same job candidates
o Uses the intraclass correlation coefficient (e.g., Rater 1 vs. Rater 2)

Four different ways to assess validity:

1. Construct-related validity
o Degree to which a test is an accurate measure of the construct it purports to measure
o Constructs are conceptual abstractions (e.g., intelligence, leadership, motivation)
o How valid are tests that measure these constructs?
o Two ways to measure:
   - Convergent validity coefficients
   - Divergent validity coefficients

2. Criterion-related validity
o Degree to which a test is statistically related to a criterion
o Two major types:
   a) Concurrent: current status (e.g., high school class rank predicting current GPA)
   b) Predictive: future status (e.g., high school class rank predicting final college GPA)
o Assessed with a validity coefficient

3. Content validity
o Degree to which subject matter experts (SMEs) agree that test items represent the domains of knowledge the test purports to measure
o Example: achievement tests

4. Face validity
o Appearance, to those taking the test, that its items are appropriate for the intended use

Summary of the four types of validity:
1) Construct-related validity: how well the measure represents the theoretical construct (convergent and divergent validity coefficients)
2) Criterion-related validity: how well the predictor relates to the criterion (predictive and concurrent validity coefficients)
3) Content validity: how well the measure covers a representative sample of the behavior/situation being assessed (SMEs assess this type of validity)
4) Face validity: how well the test taker thinks the items look "right" for the test (a subjective assessment made by the test taker)

Reliability and validity:
o You can have reliability without validity
o If you have validity, you usually have reliability
o Reliability sets the upper limit on validity
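The claim that reliability sets the upper limit on validity can be made concrete with the attenuation formula from classical test theory (a standard result, though not spelled out in the notes): the observed validity coefficient equals the true correlation shrunk by measurement error in both the predictor and the criterion. A short sketch with made-up reliability values:

```python
import math

def observed_validity(r_true, r_xx, r_yy):
    """Attenuation formula: r_observed = r_true * sqrt(r_xx * r_yy),
    where r_xx and r_yy are the reliabilities of predictor and criterion."""
    return r_true * math.sqrt(r_xx * r_yy)

# Even a perfect true relationship (r_true = 1.0) with a perfectly reliable
# criterion cannot produce a validity coefficient above sqrt(r_xx):
r_xx = 0.70                                   # hypothetical predictor reliability
print(round(observed_validity(1.0, r_xx, 1.0), 3))    # prints 0.837 = sqrt(0.70)

# With realistic values, the observed coefficient shrinks further:
print(round(observed_validity(0.60, 0.70, 0.80), 3))  # prints 0.449
```

This is why an unreliable test can never be a valid one: whatever the true relationship, measurement error caps the correlation you can observe.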

