Chapter 6: Characteristics of Effective Selection Techniques

Optimal Employee Selection Systems
• Are reliable
• Are valid
  – Based on a job analysis (content validity)
  – Predict work-related behavior (criterion validity)
• Reduce the chance of a legal challenge
  – Face valid
  – Don't invade privacy
  – Don't intentionally discriminate
  – Minimize adverse impact
• Are cost effective
  – Cost to purchase/create
  – Cost to administer
  – Cost to score

Reliability
• The extent to which a score from a test is consistent and free from errors of measurement
• Methods of determining reliability:
  – Test-retest (temporal stability)
  – Alternate forms (form stability)
  – Internal reliability (item stability)
  – Scorer reliability

Test-Retest Reliability
• Measures temporal stability
• Administration: the same applicants take the same test during two testing periods
• Scores at time one are correlated with scores at time two; the correlation should be above .70
• Sources of measurement error:
  – The characteristic or attribute being measured may change over time
  – Reactivity
  – Carry-over effects
• Practical problems:
  – Time consuming
  – Expensive
  – Inappropriate for some types of tests

Alternate Forms Reliability
• Administration: two forms of the same test are developed and, to the highest degree possible, are equivalent in terms of content, response processes, and statistical characteristics; one form is administered to examinees, and at some later date the same examinees take the second form
• Scoring: scores from the first form are correlated with scores from the second form; if the scores are highly correlated, the test has form stability
• Disadvantages:
  – Difficult to develop
  – Content sampling errors
  – Time sampling errors
Internal Reliability
• Defines measurement error strictly in terms of consistency or inconsistency in the content of the test
• Used when it is impractical to administer two separate forms of a test
• The test is administered only once; this approach measures item stability

Determining Internal Reliability
• Split-half method (most common)
  – Test items are divided into two equal parts
  – Scores for the two parts are correlated to get a measure of internal reliability
• Spearman-Brown prophecy formula: corrects the split-half correlation to the full test length
  (2 x split-half correlation) ÷ (1 + split-half correlation)
  – With a split-half correlation of .60, the corrected reliability would be:
    (2 x .60) ÷ (1 + .60) = 1.2 ÷ 1.6 = .75

Common Methods for Correlating Split-Half Scores
• Cronbach's coefficient alpha
  – Used with ratio or interval data
• Kuder-Richardson formula
  – Used for tests with dichotomous items (yes-no, true-false)

Interrater Reliability
• Used when human judgment of performance is involved in the selection process
• Refers to the degree of agreement between two or more raters
• Exercise: rate the waiter's performance (Office Space, DVD segment 3)
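The Spearman-Brown correction above is a one-liner in code; this sketch reproduces the slide's worked example:

```python
# Spearman-Brown prophecy formula: corrects a split-half correlation
# up to the estimated reliability of the full-length test.
def spearman_brown(split_half_r: float) -> float:
    return (2 * split_half_r) / (1 + split_half_r)

# The slide's example: a split-half correlation of .60
print(round(spearman_brown(0.60), 2))  # → 0.75
```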
Reliability: Conclusions
• The higher the reliability of a selection test, the better
• Reliability should be .70 or higher
• Reliability can be affected by many factors
• If a selection test is not reliable, it is useless as a tool for selecting individuals

Validity
• Definition: the degree to which inferences from scores on tests or assessments are justified by the evidence
• Common ways to measure validity:
  – Content validity
  – Criterion validity
  – Construct validity

Content Validity
• The extent to which test items sample the content that they are supposed to measure
• In industry, the appropriate content of a test or test battery is determined by a job analysis

Criterion Validity
• The extent to which a test score is related to some measure of job performance, called a criterion
• Established using one of the following research designs:
  – Concurrent validity
  – Predictive validity
  – Validity generalization

Concurrent Validity
• Uses current employees
• Range restriction can be a problem

Predictive Validity
• Correlates test scores with future behavior
• Reduces the problem of range restriction
• May not be practical

Validity Generalization
• The extent to which a test found valid for a job in one location is valid for the same job in a different location
• The key to establishing validity generalization is meta-analysis and job analysis

Typical Corrected Validity Coefficients for Selection Techniques
  Method                 Validity    Method                         Validity
  Structured interview     .57       College grades                   .32
  Cognitive ability        .51       References                       .29
  Job knowledge            .48       Experience                       .27
  Work samples             .39       Conscientiousness                .24
  Assessment centers       .38       Unstructured interviews          .20
  Biodata                  .34       Interest inventories             .10
  Integrity tests          .34       Handwriting analysis             .02
  Situational judgment     .34       Projective personality tests     .00
Construct Validity
• The extent to which a test actually measures the construct that it purports to measure
• Is concerned with inferences about test scores
• Determined by correlating scores on the test with scores from other tests

Face Validity
• The extent to which a test appears to be job related
• Reduces the chance of a legal challenge
• Increasing face validity

Locating Test Information
• Exercise 6.1

Utility
• The degree to which a selection device improves the quality of a personnel system, above and beyond what would have occurred had the instrument not been used

Selection Works Best When...
• You have many job openings
• You have many more applicants than openings
• You have a valid test
• The job in question has a high salary
• The job is not easily performed or easily trained

Common Utility Methods
• Taylor-Russell tables
• Proportion of correct decisions
• The Brogden-Cronbach-Gleser model

Utility Analysis: Taylor-Russell Tables
• Estimate the percentage of future employees who will be successful
• Three components:
  – Validity
  – Base rate (successful employees ÷ total employees)
  – Selection ratio (hired ÷ applicants)

Taylor-Russell Example
• Suppose we have:
  – a test validity of .40
  – a selection ratio of .30
  – a base rate of .50
• Using the Taylor-Russell tables, what percentage of future employees would be successful?
[Taylor-Russell table for a base rate of .50: success rates by validity coefficient (.00 to .90) and selection ratio (.05 to .95)]

Proportion of Correct Decisions
• Proportion of correct decisions with the test:
  (correct rejections + correct acceptances) ÷ total employees
  = (Quadrant II + Quadrant IV) ÷ (Quadrants I + II + III + IV)
• Baseline of correct decisions:
  successful employees ÷ total employees
  = (Quadrants I + II) ÷ (Quadrants I + II + III + IV)

[Scatterplot: criterion scores (1 to 10) plotted against test scores (1 to 10), divided into quadrants I, II, III, and IV]

Worked example:
• With the test: (10 + 11) ÷ (5 + 10 + 4 + 11) = 21 ÷ 30 = .70
• Baseline: (5 + 10) ÷ (5 + 10 + 4 + 11) = 15 ÷ 30 = .50

Computing the Proportion of Correct Decisions
• Exercise 6.3
[Scatterplot for Exercise 6.3: criterion scores (1 to 9) plotted against test scores (1 to 9), divided into quadrants I, II, III, and IV]

Answer to Exercise 6.3
• With the test: (8 + 6) ÷ (4 + 8 + 6 + 2) = 14 ÷ 20 = .70
• Baseline: (4 + 8) ÷ (4 + 8 + 6 + 2) = 12 ÷ 20 = .60
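Both calculations reduce to quadrant counting. A sketch reproducing the two worked examples, with quadrant labels as on the slides (II = correct acceptances, IV = correct rejections):

```python
# Proportion of correct decisions vs. the no-test baseline, computed
# from quadrant counts in the criterion-by-test-score scatterplot.
def decision_stats(q1: int, q2: int, q3: int, q4: int) -> tuple[float, float]:
    total = q1 + q2 + q3 + q4
    with_test = (q2 + q4) / total   # correct acceptances + correct rejections
    baseline = (q1 + q2) / total    # successful employees / total employees
    return with_test, baseline

print(decision_stats(5, 10, 4, 11))  # worked example → (0.7, 0.5)
print(decision_stats(4, 8, 2, 6))    # exercise 6.3 (III/IV split inferred from the sums)
```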
Brogden-Cronbach-Gleser Utility Formula
• Gives an estimate of utility by estimating the amount of money an organization would save if it used the test to select employees
• Savings = (n)(t)(r)(SDy)(m) − cost of testing, where:
  – n = number of employees hired per year
  – t = average tenure
  – r = test validity
  – SDy = standard deviation of performance in dollars
  – m = mean standardized predictor score of selected applicants

Components of Utility
• Selection ratio: the ratio of the number of openings to the number of applicants
• Validity coefficient
• Base rate of current performance: the percentage of employees currently on the job who are considered successful
• SDy: the difference in performance (measured in dollars) between a good and an average worker (workers one standard deviation apart)

Calculating m
• Example 1: We administer a test of mental ability to 100 applicants and hire the 10 with the highest scores. The average score of the 10 hired applicants was 34.6, the average score of the other 90 applicants was 28.4, and the standard deviation of all test scores was 8.3. The desired figure would be:
  (34.6 − 28.4) ÷ 8.3 = 6.2 ÷ 8.3 = ?
• Example 2: You administer a test of mental ability to 150 applicants and hire the 35 with the highest scores. The average score of the 35 hired applicants was 35.7, the average score of the other 115 applicants was 24.6, and the standard deviation of all test scores was 11.2. The desired figure would be:
  (35.7 − 24.6) ÷ 11.2 = ?

Standardized Selection Ratio
  SR   1.00   .90   .80   .70   .60   .50   .40   .30   .20   .10   .05
  m     .00   .20   .35   .50   .64   .80   .97  1.17  1.40  1.76  2.08

Example
• Suppose:
  – we hire 10 auditors per year
  – the average person in this position stays 2 years
  – the validity coefficient is .40
  – the average annual salary for the position is $30,000 (SDy estimated at 40% of salary = $12,000)
  – we have 50 applicants for ten openings (SR = .20, so m = 1.40)
• Our utility would be:
  (10 x 2 x .40 x $12,000 x 1.40) − (50 x $10) = $134,400 − $500 = $133,900
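The auditor example can be recomputed directly from the formula. A sketch assuming, as the example does, SDy of $12,000 and a testing cost of $10 per applicant:

```python
# Brogden-Cronbach-Gleser: savings = (n)(t)(r)(SDy)(m) - cost of testing
def bcg_savings(n, t, r, sdy, m, applicants, cost_per_applicant):
    return n * t * r * sdy * m - applicants * cost_per_applicant

def m_from_scores(hired_mean, others_mean, sd_all):
    """Mean standardized predictor score of the selected applicants."""
    return (hired_mean - others_mean) / sd_all

# Auditor example: n=10, t=2, r=.40, SDy=$12,000, m=1.40 (SR=.20), 50 applicants
print(round(bcg_savings(10, 2, 0.40, 12_000, 1.40, 50, 10)))  # → 133900

# "Calculating m," example 1: (34.6 - 28.4) / 8.3
print(round(m_from_scores(34.6, 28.4, 8.3), 2))  # → 0.75
```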
Exercise 6.2: Utility
1. Selection ratio = .40, base rate = .70, validity = .35
   % of future successful employees: .80 (rounding r down) to .83 (rounding r up)

[Taylor-Russell table for a base rate of .70: success rates by validity coefficient (.00 to .90) and selection ratio (.05 to .95)]

2. Answer: Current Test
• Components:
  – We will hire 200 people
  – The average person in this position stays 4 years
  – The validity coefficient is .25
  – The average annual salary for the position is $42,000 (SDy = $16,800)
  – We have 500 applicants for 200 openings (SR = .40, so m = .97), tested at $8 each
• Our utility would be:
  (200 x 4 x .25 x $16,800 x .97) − (500 x $8) = $3,259,200 − $4,000 = $3,255,200

3. Answer: New Test
• Components: as above, but the validity coefficient is .35 and testing costs $4 per applicant
• Our utility would be:
  (200 x 4 x .35 x $16,800 x .97) − (500 x $4) = $4,562,880 − $2,000 = $4,560,880
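As a cross-check, the current-test and new-test answers, and the savings figure that follows, can be recomputed with the same formula:

```python
# Exercise 6.2: utility of the current test (validity .25, $8/applicant)
# vs. the proposed new test (validity .35, $4/applicant).
def bcg_savings(n, t, r, sdy, m, applicants, cost_per_applicant):
    return n * t * r * sdy * m - applicants * cost_per_applicant

current = bcg_savings(200, 4, 0.25, 16_800, 0.97, 500, 8)
new = bcg_savings(200, 4, 0.35, 16_800, 0.97, 500, 4)
print(round(current))        # → 3255200
print(round(new))            # → 4560880
print(round(new - current))  # → 1305680
```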
4. Savings Over the Old Test
  Test                                      Utility
  New test: Reilly Statistical Logic Test   $4,560,880
  Old test: Robson Math                     $3,255,200
  Savings                                   $1,305,680

Exercise: Structured vs. Unstructured Interviews
1. Selection ratio = .40, base rate = .70, validity = .57 (structured interview, from the validity coefficients table)
   % of future successful employees: .87 (rounding r down) to .90 (rounding r up)

2. Answer: Unstructured Interview
• Components:
  – We will hire 200 people
  – The average person in this position stays 4 years
  – The validity coefficient is .20
  – The average annual salary for the position is $42,000 (SDy = $16,800)
  – We have 500 applicants for 200 openings (SR = .40, so m = .97), tested at $15 each
• Our utility would be:
  (200 x 4 x .20 x $16,800 x .97) − (500 x $15) = $2,607,360 − $7,500 = $2,599,860
3. Answer: Structured Interview
• Components: as above, but the validity coefficient is .57
• Our utility would be:
  (200 x 4 x .57 x $16,800 x .97) − (500 x $15) = $7,430,976 − $7,500 = $7,423,476

4. Savings Over the Old Test
  Test                                  Utility
  New test: structured interview        $7,423,476
  Old test: unstructured interview      $2,599,860
  Savings                               $4,823,616

Adverse Impact
• Occurs when the selection rate for one group is less than 80% of the rate for the highest-scoring group

Adverse Impact - Example 1
                   Male   Female
  Applicants        50     30
  Hired             20     10
  Selection ratio   .40    .33
  .33 ÷ .40 = .83 > .80 (no adverse impact)

Adverse Impact - Example 2
                   Male   Female
  Applicants        40     20
  Hired             20      4
  Selection ratio   .50    .20
  .20 ÷ .50 = .40 < .80 (adverse impact)

Standard Deviation Method
1. Compute the standard deviation:
   SD = √[(female applicants ÷ total applicants) x (male applicants ÷ total applicants) x total hired]
2. Multiply the standard deviation by 2
3. Compute the expected number of females to be hired:
   (female applicants ÷ total applicants) x total hired
4. Compute the confidence interval (expected ± 2 SD)
5. Determine whether the number of females actually hired falls within the confidence interval

Standard Deviation Example (10 female and 40 male applicants; 20 hired)
1. SD = √[(10 ÷ 50) x (40 ÷ 50) x 20] = √(.20 x .80 x 20) = √3.2 = 1.79
2. 1.79 x 2 = 3.58
3. Expected number of females hired: (10 ÷ 50) x 20 = .2 x 20 = 4
4. Confidence interval: 4 ± 3.58, i.e., .42 to 7.58
5. Determine whether the number of females actually hired falls within this interval
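Both adverse-impact checks can be scripted. A sketch using the slide's numbers:

```python
# The 4/5ths (80%) rule plus the standard deviation method,
# applied to the examples above.
from math import sqrt

# Example 1: female selection rate .33 vs. male selection rate .40
ratio = (10 / 30) / (20 / 50)
print(round(ratio, 2), "adverse impact" if ratio < 0.80 else "no adverse impact")

# Standard deviation method: 10 female, 40 male applicants; 20 hired
female, male, hired = 10, 40, 20
total = female + male
sd = sqrt((female / total) * (male / total) * hired)  # sqrt(.20 x .80 x 20) = 1.79
expected = (female / total) * hired                   # 4 expected female hires
low, high = expected - 2 * sd, expected + 2 * sd
print(round(low, 2), round(high, 2))  # → 0.42 7.58
```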
Other Fairness Issues
• Single-group validity
  – Test predicts for one group but not another
  – Very rare
• Differential validity
  – Test predicts for both groups but better for one
  – Also very rare

Linear Approaches to Making the Selection Decision
• Unadjusted top-down selection
• Passing scores
• Banding

The Top-Down Approach
• Who will perform the best? A "performance first" hiring formula
  Applicant   Sex   Test Score
  Drew        M     99
  Eric        M     98
  Lenny       M     91
  Omar        M     90
  Mia         F     88
  Morris      M     87

Top-Down Selection
• Advantages:
  – Higher quality of selected applicants
  – Objective decision making
• Disadvantages:
  – Less flexibility in decision making
  – Adverse impact and less workforce diversity
  – Ignores measurement error
  – Assumes the test score accounts for all the variance in performance (Zedeck, Cascio, Goldstein, & Outtz, 1996)

The Passing Scores Approach
• Who will perform at an acceptable level?
• A passing score is a point in a distribution of scores that distinguishes acceptable from unacceptable performance (Kane, 1994)
• Uniform Guidelines (1978), Section 5H: passing scores should be reasonable and consistent with expectations of acceptable proficiency

Passing Scores - Example
  Applicant   Sex   Score
  Omar        M     98
  Eric        M     80
  Mia         F     70 (passing score)
  Morris      M     69
  Tammy       F     58
  Drew        M     40

Passing Scores
• Advantages:
  – Increased flexibility in decision making
  – Less adverse impact against protected groups
• Disadvantages:
  – Lowered utility
  – Passing scores can be difficult to set
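A passing-score decision is just a cutoff filter. A sketch using the slide's applicant list:

```python
# Passing-scores approach: everyone at or above the passing score (70)
# is considered acceptable; selection among them can then be flexible.
applicants = [("Omar", 98), ("Eric", 80), ("Mia", 70),
              ("Morris", 69), ("Tammy", 58), ("Drew", 40)]
PASSING_SCORE = 70

acceptable = [name for name, score in applicants if score >= PASSING_SCORE]
print(acceptable)  # → ['Omar', 'Eric', 'Mia']
```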
Five Categories of Banding
• Top-down (most inflexibility)
• Rules of "three" or "five"
• Traditional banding
• Expectancy bands
• SEM banding (standard error of measurement)
  – Tests differences between scores for statistical significance
• Pass/fail bands (most flexibility)

Top-Down Banding
  Applicant   Sex   Test Score
  Drew        M     99
  Eric        M     98
  Lenny       M     91
  Omar        M     90
  Mia         F     88
  Morris      M     87

Rules of "Three" or "Five"
  Applicant   Sex   Test Score
  Drew        M     99
  Eric        M     98
  Lenny       M     91
  Omar        M     90
  Jerry       F     88
  Morris      M     87

Traditional Bands
• Based on expert judgment
• Administrative ease
• e.g., the college grading system
• e.g., levels of job qualifications

Expectancy Bands
  Band   Test Score   Probability
  A      522–574      85%
  B      483–521      75%
  C      419–482      66%
  D      0–418        56%

SEM Bands ("Ranges of Indifference")
• A compromise between the top-down and passing-scores approaches
• Takes into account that tests are not perfectly reliable (error)
• Based on the concept of the standard error of measurement
• To compute a band you need the standard deviation and reliability of the test:
  Standard error = SD x √(1 − reliability)
• The band is established by multiplying the standard error by 1.96
• Example: with SD = 12.8 and reliability = .90:
  12.8 x √(1 − .90) = 12.8 x √.10 = 12.8 x .316 = 4.04
  Band = 4.04 x 1.96 = 7.92 ≈ 8

[Two worked applicant lists (Armstrong, Glenn, Grissom, and others; Clancy, King, Koontz, and others) showing band assignments and who is hired within bands 1 to 5]

Types of SEM Bands
• Fixed
• Sliding
• Diversity-based
  – Females and minorities are given preference when selecting from within a band
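The SEM band computation can be sketched directly. This reproduces the SD = 12.8 example; the slide's 4.04 reflects rounding √.10 to .316 mid-calculation:

```python
# SEM banding: standard error = SD x sqrt(1 - reliability); the band is
# 1.96 standard errors wide (scores within it are treated as equivalent).
from math import sqrt

def sem_band(sd: float, reliability: float) -> tuple[float, float]:
    se = sd * sqrt(1 - reliability)
    return se, 1.96 * se

se, band = sem_band(12.8, 0.90)
print(round(se, 2), round(band, 2), "≈", round(band))  # → 4.05 7.93 ≈ 8
```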
Pass or Fail Bands (just two bands)
  Applicant   Sex   Score
  Omar        M     98
  Eric        M     80
  Mia         F     70 (cutoff)
  Morris      M     69
  Tammy       F     58
  Drew        M     40

Advantages of Banding
• Helps reduce adverse impact, increase workforce diversity, and increase perceptions of fairness (Zedeck et al., 1996)
• Allows you to consider secondary criteria relevant to the job (Campion et al., 2001)

Disadvantages of Banding (Campion et al., 2001)
• Loses valuable information
• Lowers the quality of the people selected
• Sliding bands may be difficult to apply in the private sector
• Banding without minority preference may not reduce adverse impact

Factors to Consider When Deciding the Width of a Band (Campion et al., 2001)
• Narrow bands are preferred
• Consequences of errors in selection
• Criterion space covered by the selection device
• Reliability of the selection device
• Validity evidence
• Diversity issues

Legal Issues in Banding (Campion et al., 2001)
• Banding, including banding with minority preference, has generally been approved by the courts:
  – Bridgeport Guardians v. City of Bridgeport (1991)
  – Chicago Firefighters Union Local No. 2 v. City of Chicago (1999)
  – Officers for Justice v. Civil Service Commission (1992)

What the Organization Should Do to Protect Itself
• Establish rules and procedures for making choices within a band
• Inform applicants about the use of and logic behind banding, in addition to company values and objectives (Campion et al., 2001)
Banding Example
• Sample test information:
  – Reliability = .80
  – Mean = 72.85
  – Standard deviation = 9.1
  – We have four openings
  – We would like to hire more females
• The standard error: SD x √(1 − reliability)
  – Example 1: 9.1 x √(1 − .80) = 9.1 x √.20 = 9.1 x .447 = 4.07
  – Example 2: with reliability = .90 and SD = 12.8, the standard error is 4.04
• The band: standard error x 1.96
  – Band = 4.07 x 1.96 = 7.98 ≈ 8

Using Banding to Reduce Adverse Impact (Exercise 6.4)
• Test information: reliability = .83, standard deviation = 7.43
1. Standard error: 7.43 x √(1 − .83) = 7.43 x √.17 = 7.43 x .412 = 3.06
2. Band: 3.06 x 1.96 = 5.99 ≈ 6 points
3. Hire using a nonsliding band: McCoy, Robinette, Stone, Schiff
4. Hire using a sliding band: McCoy, Stone, Carmichael, Kincaid
5. Hire using a passing score of 80: McCoy, Stone, Carmichael, Kincaid

[Applicant list for Exercise 6.4 (McCoy, Stone, Robinette, Schiff, Carmichael, and others) with sexes, scores from 97 down to 78, and band assignments]

Should the top scorers on a test always get the job?