443_train_evaluation - Training Evaluation



Training Evaluation

Training evaluation provides the data needed to demonstrate that training does provide benefits to the company. What are the differences among:
- Training effectiveness
- Training outcomes
- Training evaluation
- Evaluation design

Types of Evaluation
- Formative
- Summative

Why Evaluate Training Programs?
- Objectives
- Satisfaction
- Benefits
- Comparison

Why Should a Training Program Be Evaluated? (1 of 2)
- To identify the program's strengths and weaknesses
- To assess whether the content, organization, and administration of the program contribute to learning and to the use of training content on the job

Why Should a Training Program Be Evaluated? (2 of 2)
- To gather data to assist in marketing training programs
- To determine the financial benefits and costs of the programs
- To compare the costs and benefits of training versus ...

Objectives = the Foundation
- Terminal behavior
- Conditions under which the terminal behavior is expected
- The standard below which performance is unacceptable
These become the criteria by which the trainee is judged.

Training Program Objectives and Their Implications for Evaluation
- Reactions: Did trainees like the program? Did the environment help learning? Was the material meaningful?
- Cognitive: Pencil-and-paper tests
- Skill-based: Performance on a work sample
- Skill-based (transfer): Ratings by peers or managers based on observation of behavior
- Affective: Trainees' motivation or job attitudes
- Results: Did the company benefit through sales, quality, productivity, reduced accidents, and fewer complaints?
The Evaluation Process
1. Conduct a needs analysis
2. Develop measurable learning outcomes and analyze transfer of training
3. Develop outcome measures
4. Choose an evaluation strategy
5. Plan and execute the evaluation

Training Outcomes: Kirkpatrick's Four-Level Framework of Evaluation Criteria
- Level 1, Reactions: trainee satisfaction (aka affective)
- Level 2, Learning: acquisition of knowledge, skills, attitudes, behavior (aka cognitive)
- Level 3, Behavior: improvement of behavior on the job (aka skill-based)
- Level 4, Results: business results achieved by trainees

Training Outcomes
- Cognitive (knowledge)
- Skills (behaviors)
- Affect (attitudes & motivation)
- Reactions
- Results
Which outcome is represented by the evaluation form for your training module?

Training Outcomes Used in Evaluating Training Programs (1 of 4)
- Cognitive outcomes
- Affective outcomes
- Skill-based outcomes
- Results
- Return on investment (ROI)

Training Outcomes Used in Evaluating Training Programs (2 of 4)
Cognitive outcomes
- Determine the degree to which trainees are familiar with the principles, facts, techniques, procedures, or processes emphasized in the training program
- Measure what knowledge trainees learned in the program
Skill-based outcomes
- Assess the level of technical or motor skills
- Include acquisition or learning of skills and the use of skills on the job

Training Outcomes Used in Evaluating Training Programs (3 of 4)
Affective outcomes
- Include attitudes and motivation
- Trainees' perceptions of the program, including the facilities, trainers, and content
Results
- Determine the training program's payoff for the company

Training Outcomes Used in Evaluating Training Programs (4 of 4)
Return on investment (ROI)
- Comparing the training's monetary benefits with the cost of the training
- Direct costs
- Indirect costs
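The ROI outcome above compares the training's monetary benefits with its direct and indirect costs, but the slides do not give an exact formula. A minimal sketch, assuming the common "net benefits divided by total costs" formulation (an assumption, not stated in the slides; all numbers are hypothetical):

```python
# Hedged sketch of a training ROI calculation.
# The formula (net benefits / total costs * 100) is a common convention,
# assumed here; the slide only says to compare benefits with costs.

def training_roi_percent(monetary_benefits: float,
                         direct_costs: float,
                         indirect_costs: float) -> float:
    """Return ROI as a percentage: (benefits - total costs) / total costs * 100."""
    total_costs = direct_costs + indirect_costs
    return (monetary_benefits - total_costs) / total_costs * 100

# Hypothetical illustration: $150,000 in benefits against $50,000 total cost.
print(training_roi_percent(150_000, direct_costs=40_000, indirect_costs=10_000))
# → 200.0 (every training dollar returned two dollars of net benefit)
```

An ROI of 100% would mean the training exactly paid for itself in net benefits; values below 0% mean costs exceeded benefits.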
How Do You Know If Your Outcomes Are Good?
Good training outcomes need to be:
- Relevant
- Reliable
- Discriminative
- Practical

Good Outcomes: Relevance
- Criterion relevance: the extent to which training outcomes are related to the learned capabilities emphasized in the training program
- Criterion contamination: the extent to which training outcomes measure inappropriate capabilities or are affected by extraneous conditions
- Criterion deficiency: the failure to measure training outcomes that were emphasized in the training objectives

Criterion deficiency, relevance, and contamination (diagram): picture two overlapping circles, one for outcomes identified by the needs assessment and included in the training objectives, the other for outcomes measured in the evaluation.
- Relevance: the overlap (outcomes both in the objectives and measured)
- Contamination: outcomes measured in the evaluation but not related to the training objectives
- Deficiency: outcomes in the training objectives that are never measured

Good Outcomes (continued)
- Reliability: the degree to which outcomes can be measured consistently over time
- Discrimination: the degree to which trainees' performances on the outcome actually reflect true differences in performance
- Practicality: the ease with which the outcome measures can be collected

[Figure: Training Evaluation Practices — percentage of courses using each outcome type (Reaction, Cognitive, Behavior, Results), on a 0–80% axis; the values legible in the source are 79%, 38%, 15%, and 9%.]

Discussion questions:
- How do Fidelity & Motorola evaluate their training programs?
- How do their measures of success compare with those advocated by the text?
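The criterion relevance, contamination, and deficiency distinctions above are set relations between the outcomes named in the training objectives and the outcomes actually measured. A minimal sketch using Python sets (the outcome names are hypothetical examples, not from the slides):

```python
# Hedged sketch: criterion relevance / contamination / deficiency as set
# relations. The outcome names below are invented for illustration.

objectives = {"product knowledge", "call handling", "safety procedure"}
measured   = {"product knowledge", "call handling", "typing speed"}

relevance     = objectives & measured   # measured AND emphasized in the objectives
contamination = measured - objectives   # measured, but not part of the objectives
deficiency    = objectives - measured   # in the objectives, but never measured

print(sorted(relevance))      # → ['call handling', 'product knowledge']
print(sorted(contamination))  # → ['typing speed']
print(sorted(deficiency))     # → ['safety procedure']
```

A well-designed evaluation maximizes the overlap (relevance) and drives both difference sets toward empty.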
Evaluation Designs: Threats to Validity
A threat to validity is a factor that leads one to question either:
- The believability of the study results (internal validity), or
- The extent to which the evaluation results are generalizable to other groups (external validity)

Threats to Validity
- Threats to internal validity: company, persons, outcome measures
- Threats to external validity: reaction to pretest, reaction to evaluation, interaction of selection and treatment

Methods to Control for Threats to Validity
- Pre- and posttests
- Use of comparison groups
- Random assignment

Evaluation Procedures: Utility
Selection utility:
Utility = [(Ns)(T)(r)(SDy)(Zs)] − [(N)(C)]
- Ns = number of applicants selected
- T = tenure of the selected group, in years
- r = correlation between predictor and job performance (validity)
- SDy = standard deviation of job performance
- Zs = average standard predictor score of the selected group
- N = number of applicants
- C = cost per applicant

Training utility:
Utility = [(Nc)(T)(r)(SDy)(Zs)] − [(N)(C)]
- Nc = number of trainees who complete the program
- T = duration of the training benefit
- r = correlation between the training criterion and job performance (validity)
- SDy = standard deviation of job performance
- Zs = average standard criterion score of trainees
- N = total number of trainees enrolled
- C = cost per trainee

Training Costs
- Direct
- Indirect
- Development
- Overhead
- Compensation for trainees

Worked example — utility of an on-the-job training program: $81,000
- Ns = 50 (number of trainees who complete the program)
- T = 1 (duration of the training benefit, in years)
- r = .50 (correlation between the training criterion and job performance)
- SDy = $4,800 (standard deviation of job performance, assumed to be 40% of base pay: $12,000 × .40)
- Zs = .80 (average standard criterion score of trainees)
- N = 100 (total number of trainees enrolled)
- C = $150 (cost per trainee)

Utility = [(Ns)(T)(r)(SDy)(Zs)] − [(N)(C)]
        = (50 × 1 × .50 × 4,800 × .80) − (100 × 150)
        = 96,000 − 15,000
        = $81,000

Experimental Designs
Choices:
- Pretest/posttest
- Control groups

Six designs:
1. One group, posttest only
2. One group, pretest/posttest
3. Pretest/posttest control group
4. Solomon four-group
5. Time series
6. Nonequivalent control group
Designs trade off internal validity against external validity.

Experimental Designs: Threats to Internal Validity
- History
- Maturation
- Testing
- Instrumentation
- Regression toward the mean
- Differential selection
- Experimental mortality
- Interactions (of the above)
- Diffusion/imitation of treatments
- Compensatory equalization of treatments
- Compensatory rivalry/desirability of treatments
- Demoralization

Experimental Designs: Threats to External Validity
- Reactive effect of pretesting
- Interaction of selection & treatment
- Reactive effects of experimental settings
- Multiple-treatment interference

Issues in Training Validity
- Training validity
- Transfer validity
- Intra-organizational validity
- Inter-organizational validity
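The utility formula and the $81,000 on-the-job-training example worked through earlier can be sketched directly in Python, using the slides' own symbols and numbers:

```python
# Sketch of the slides' training utility formula:
#   Utility = [(Ns)(T)(r)(SDy)(Zs)] - [(N)(C)]
# Benefits accrue only from the Ns completers; costs apply to all N enrollees.

def training_utility(Ns: float, T: float, r: float,
                     SDy: float, Zs: float, N: float, C: float) -> float:
    """Dollar utility of a training program."""
    return (Ns * T * r * SDy * Zs) - (N * C)

# Values from the slides' worked example:
Ns  = 50      # trainees who complete the program
T   = 1       # duration of the training benefit, in years
r   = 0.50    # correlation between training criterion and job performance
SDy = 4800    # SD of job performance (40% of $12,000 base pay)
Zs  = 0.80    # average standard criterion score of trainees
N   = 100     # total number of trainees enrolled
C   = 150     # cost per trainee

print(training_utility(Ns, T, r, SDy, Zs, N, C))
# → 81000.0, matching the slides' $81,000 (96,000 in benefits minus 15,000 in costs)
```

Note how the benefit term scales with T: if the training's effect persisted for two years instead of one, the same program's utility would roughly double on the benefit side while costs stay fixed.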

This note was uploaded on 12/15/2011 for the course HRM 443 taught by Professor Staff during the Fall '10 term at Wisc Platteville.
