Despite years of effort, the challenge of linking learning and development (L&D) to tangible organizational success remains a tough nut to crack. ATD surveys over a 10-year period show no movement in the number of L&D departments that evaluate training at Kirkpatrick or Phillips Levels Three or higher.
Level Three (Training Transfer) is measured by just over 50% of organizations, Level Four (Organizational Impact) just crosses the one-third threshold, and Level Five (ROI) is attempted by only 15% to 20% of L&D departments.
So, the question is why? If we all agree that it’s important, if we feel we need to make the case for L&D to the C-suite, why don’t more learning leaders do it? When surveyed, most learning leaders cite lack of time and budget.
Here’s a different theory: Time and budget may be factors, but the most important one is that it’s not entirely clear how to do it. We know how to create Level One learner surveys and Level Two assessments of learning (even if much of what we do at these two levels does not clear the validity bar), but there is no generally agreed-upon methodology for measuring at Level Four.
One methodology for conducting a Level Four evaluation is the Brinkerhoff Success Case Method (SCM).
Robert Brinkerhoff, Ed.D., was a professor (now retired) at Western Michigan University, specializing in program evaluation. More than 25 years ago, he was asked by a well-known supplier of leadership development programs to evaluate the impact of their courses across several corporations.
What Brinkerhoff found was initially puzzling. He discovered that the same curriculum had different outcomes in different organizations. He even found that different business units within the same corporation had different results.
Anyone who has studied the scientific method knows that if you hold a single variable constant (in this case the training) but get different outcomes, that variable cannot be the cause of the different outcomes. It must be another factor (or factors).
Digging further, Brinkerhoff looked at how the trainees applied what they had learned. Here’s what he found:
Over hundreds of studies, he discovered that only 15% to 20% of training efforts had any measurable organizational impact. According to Roger Chevalier’s A Manager’s Guide to Improving Workplace Performance, other studies have shown that about 75% to 85% of organizational performance issues are due to environmental factors (beyond the individual employee and their training).
If you think about it, these are astounding numbers. $100B per year is a generally accepted number for how much U.S. corporations spend on training. If only 20% of that expenditure results in a positive impact, what does that say about the other $80B? How much money is being wasted, not necessarily by “bad” training, but by failing to provide an environment in which the training can be advantageously applied?
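The back-of-the-envelope arithmetic above can be made explicit. This is only an illustration of the article’s cited figures ($100B annual spend, a 15%–20% impact rate), not new data:

```python
# Illustrative arithmetic from the article's cited figures.
TOTAL_SPEND_B = 100   # billions of USD: U.S. corporate training spend per year
impact_rate = 0.20    # upper bound of Brinkerhoff's 15%-20% impact finding

effective = TOTAL_SPEND_B * impact_rate   # spend with measurable impact
at_risk = TOTAL_SPEND_B - effective       # spend with no measurable impact

print(f"Spend with measurable impact: ${effective:.0f}B")
print(f"Spend without measurable impact: ${at_risk:.0f}B")
```

Even at the optimistic end of the range, four of every five training dollars show no measurable organizational return.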
Based on his and others’ research, Brinkerhoff surmised that successful training was a process, not an event, and that the issue was performance, not training per se. He then created the Success Case Method to answer two simple questions: When training works, why does it work? And when it fails, why does it fail?
He knew from his research that a single training event with multiple learners, on average, will produce individual successes (15%-20%), individual failures (15%-20%) and everyone else somewhere in the middle. So, his methodology rests on identifying those handful of successes and failures and isolating the factors that lead to success or failure.
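The first analytic step described above can be sketched in a few lines. This is a minimal illustration of the idea (not Brinkerhoff’s own tooling), and the names and survey scores are hypothetical:

```python
def select_cases(scores, fraction=0.15):
    """Return (failures, successes): the bottom and top `fraction` of trainees,
    ranked by some outcome measure, for follow-up success-case interviews."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1])  # ascending by score
    k = max(1, round(len(ranked) * fraction))              # size of each tail
    failures = [name for name, _ in ranked[:k]]            # bottom performers
    successes = [name for name, _ in ranked[-k:]]          # top performers
    return failures, successes

# Hypothetical data: self-reported application of training on a 0-10 scale.
scores = {"Ana": 9, "Ben": 2, "Cal": 5, "Dee": 6, "Eli": 8,
          "Fay": 3, "Gus": 5, "Hal": 10, "Ivy": 4, "Jo": 1}

failures, successes = select_cases(scores, fraction=0.2)
# The two tails are then interviewed to isolate what enabled or blocked transfer.
```

Everyone in the middle is deliberately set aside; the method’s insight comes from contrasting the extremes.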
Success and failure can be measured by either hard data, such as market share (for sales training) or workplace accidents (for safety training), or soft data, like employee satisfaction (for leadership development) or customer satisfaction (for sales training).
A slightly modified version of the SCM consists of nine steps:
The SCM has many benefits:
Benefit 4 is particularly important. Unlike other evaluation methodologies, the SCM doesn’t just produce a number; it produces actionable data that can be used to improve the training, the performance environment, or both.
Though lack of time and budget may be the most often cited reasons for not conducting impact studies, consider the possibility that the real obstacle is the absence of a clear methodology. There is a practical solution.
Steven Just, Ed.D., is CEO and principal consultant at Princeton Metrics. Email Steven at sjust@princetonmetrics.com or connect through LinkedIn at linkedin.com/in/steven-just-081b76.