MHR405
Chapter 6
Question | Answer |
---|---|
Training effectiveness | the benefits that the company and the trainees receive from training. |
Training outcomes or criteria | measures that the trainer and the company use to evaluate training programs. |
Pilot program / pilot testing | running the intended training with a small group before running the main group. Lets you modify the program and assess benefits. |
Training evaluation | the process of collecting the outcomes needed to determine if training is effective. |
Evaluation design | collection of information, including whom, what, when, and how, for determining the effectiveness of the training program. |
Why Evaluate Training? | Must measure both the process and the outcomes of training to tell whether training impacts behavior, and whether behavior impacts business objectives: Training → Behavior → Business Objectives. |
Two types of evaluation: Formative | Purpose is to provide information about how to make the program better. Takes place during the training design process (includes pilot testing). Example: “Participants liked the short videos, had trouble building the tower, room was too cold.” |
Two types of evaluation: Summative | Purpose is to determine what has changed as a result of the training in terms of participant behavior and financial impact. |
Two types of evaluation: Summative | Takes place at the end of the program. “Two months post-training, 75% of participants reported at least 10% fewer customer service escalations; repeat sales rose 20%.” |
Do formative evaluation to find out: | The program’s strengths and weaknesses. Whether content, organization, and administration of the program contribute to learning and the use of training content on the job. Which trainees benefited most or least from the program. |
Do summative evaluation to: | Gather data to assist in marketing training programs. Determine the financial benefits and costs of the program. Compare the costs and benefits of different training programs, and of training vs. non-training investments. |
Types of evaluation designs : Pretest/Posttest : | Test before and after; compare results |
Types of evaluation designs : Comparison group : | Add second group of employees who participate in the evaluation study but do not attend the training program. |
Types of evaluation designs: Time series : | training outcomes are collected at periodic intervals both before and after training, letting you analyze the stability of outcomes |
Types of evaluation designs: Reversal: | time period in which participants no longer receive the training intervention. |
Why use different designs? Pretest/Posttest | shows training enhanced skills. |
Why use different designs? Comparison group | shows training was the key element in the change in outcomes. |
Why use different designs? Time series | shows whether outcomes are maintained over time. |
Why use different designs? Reversal | shows whether results change when training is taken away. |
Threats to validity | Factors that lead an evaluator to question the validity of the evaluation. Internal validity – Did the training (and not something else) cause the observed changes? External validity – Can we generalize the results to other situations and groups of trainees? |
Threats to Validity: Internal: | History (outside events) Maturation (people learn on their own) Mortality (participants drop out of the study) Initial group differences (e.g., all the stars were in the comparison group) Testing/instrumentation (your survey was bad) Regression to the mean (a statistical artifact) |
Threats to Validity: External: | Reaction to pretest (test tips trainees to desired outcomes) Reaction to evaluation (test environment makes people try harder) |
Threats to Validity: External: | Interaction of selection and training (program works better for some types of people than for others) Interaction of methods (can only generalize to similar methods) |
2 ways to control threats to validity: 1. | Use pretest/posttest design |
2 ways to control threats to validity: 2. | Random assignment. Use computer-generated random numbers to randomly assign employees to either the training group or the comparison group. |
2 ways to control threats to validity: 2. | Makes it more likely that differences between the training group and the comparison group are due to training, not to personal characteristics. |
Kirkpatrick’s framework is a hierarchy! | Therefore we assume that changes at a higher level (3–4) are more beneficial than changes at a lower level, and that higher-level outcomes should not be measured unless positive changes occur in lower-level outcomes (1–2). |
Criticisms of the Kirkpatrick model | Research offers little support for the hierarchy: there is no demonstrated causation between levels, and the levels do not differ in importance. |
ROI | Dollar amount of return expected from each dollar invested in training A form of summative evaluation |
Cost-benefit analysis | the process of determining the economic benefits of a training program using accounting methods that examine training costs and benefits. This type of analysis is expensive, so reserve it for costly or high-impact programs. |
Steps to determine ROI | Identify outcomes Place a value on the outcomes Determine the change in performance (after eliminating other potential influences on results) Calculate benefits |
Steps to determine ROI | Determine training costs (direct & indirect) Subtract costs from benefits Divide benefits by costs to get ROI |
How to determine benefits | Compare financial results after training to results before training. Also: Scale up metrics from pilot program |
How to determine benefits | Observe financial impact of top job performers vs. bottom performers – what would be financial benefit of having more top performers? Check the technical literature for information from previous studies. |
Another ROI method: Utility analysis | a cost-benefit analysis method that involves assessing the dollar value of training based on: |
Another ROI method: Utility analysis | estimates of the difference in job performance between trained and untrained employees. the number of individuals trained. |
Another ROI method: Utility analysis | the length of time a training program is expected to influence performance. the variability in job performance in the untrained group of employees. |
Training Metrics (things to track) | Expenditure per employee Learning hours received per employee Expenditure as % of payroll Expenditure as % of revenue Cost per learning hour received |
Training Metrics (things to track) | % of expenditures for external services Learning hours received per T & D staffer Avg % of learning content (basic skills, customer service, leadership development) Avg % of learning hours via different delivery methods |
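The random-assignment card above can be sketched with Python's standard library. The employee names and the even group split are hypothetical; in practice the list would come from HR records:

```python
import random

# Hypothetical employee list for illustration only.
employees = ["Avery", "Blake", "Casey", "Devon", "Emery", "Finley", "Greer", "Harper"]

rng = random.Random(42)   # seeded so the assignment is reproducible and auditable
shuffled = employees[:]
rng.shuffle(shuffled)     # computer-generated random ordering

half = len(shuffled) // 2
training_group = shuffled[:half]     # attends the training program
comparison_group = shuffled[half:]   # participates in the evaluation study only

print("Training:", training_group)
print("Comparison:", comparison_group)
```

Because assignment is random rather than self-selected, pre-existing differences (the "all the stars were in the comparison group" problem) are unlikely to drive the results.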
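The ROI steps can be traced with a small arithmetic sketch. All dollar figures are hypothetical, and the final ratio uses the common (benefits - costs) / costs convention:

```python
# Hypothetical figures for illustration only.
benefits = 120_000       # dollar value of the performance change attributed to training
direct_costs = 30_000    # trainers, materials, facilities
indirect_costs = 10_000  # trainee wages during training, administrative overhead
costs = direct_costs + indirect_costs

net_benefit = benefits - costs   # subtract costs from benefits
roi = net_benefit / costs        # dollars returned per dollar invested

print(f"Net benefit: ${net_benefit:,}")                 # Net benefit: $80,000
print(f"ROI: {roi:.2f} (${roi:.2f} per $1 invested)")   # ROI: 2.00
```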
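The four utility-analysis inputs listed above map onto a Schmidt–Hunter style formula, ΔU = N × T × dₜ × SDy − N × C. That exact formula is an assumption drawn from the utility-analysis literature rather than stated in the cards, and all numbers below are hypothetical:

```python
# Utility analysis sketch (assumed formula: delta_U = N*T*d_t*SD_y - N*C).
N = 50        # number of individuals trained
T = 2.0       # years the training is expected to influence performance
d_t = 0.5     # difference in job performance, trained vs. untrained (in SD units)
SD_y = 8_000  # variability (SD) of job performance in dollars, untrained group
C = 1_200     # cost of training per employee

delta_U = N * T * d_t * SD_y - N * C
print(f"Estimated net utility: ${delta_U:,.0f}")  # Estimated net utility: $340,000
```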
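Several of the training metrics above reduce to simple division. The annual figures below are hypothetical, chosen only to show how each ratio is computed:

```python
# Hypothetical annual figures for a 1,000-person company.
total_expenditure = 500_000      # total T&D spend ($)
num_employees = 1_000
total_learning_hours = 20_000    # learning hours received, all employees
payroll = 60_000_000             # total payroll ($)
revenue = 250_000_000            # total revenue ($)

expenditure_per_employee = total_expenditure / num_employees       # $500
hours_per_employee = total_learning_hours / num_employees          # 20 hours
expenditure_pct_payroll = 100 * total_expenditure / payroll        # ~0.83%
expenditure_pct_revenue = 100 * total_expenditure / revenue        # 0.2%
cost_per_learning_hour = total_expenditure / total_learning_hours  # $25
```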