Apr 10, 2011

[reflection] Corporate Learning Impact - Part I - Kirkpatrick and friends and enemies

I've been meaning to write up another reflection for a while now, this time on the impact of corporate learning initiatives. Today I finally start :-). It will be a three-part article, as there is much to reflect upon and nobody reads a novel in their RSS feeds. Here is the plan:

  • Part I - Kirkpatrick and friends and enemies : The current pragmatic 'state of the art' is to apply half of the Kirkpatrick model. Trends and insights from the 50 years that followed the initial model suggest we should rethink the evaluation of learning.
  • Part II - Divergence: so much to potentially measure : In the next post of this reflection, we'll take a hike along various approaches and models for tracking or proving impact of learning and have a short thought on each of them.
  • Part IIIa - Convergence: a suggestion for the pragmatic and one for the revolutionary : I'll make a suggestion for two rough draft models. One for the pragmatic that tweaks the dominant Kirkpatrick model,
  • Part IIIb - ... and one for the revolutionary that throws it away and starts all over, working backwards, holistic and adaptive.
An appropriate quote to get us started:
In my experience way more energy goes into discussing evaluation than doing it. (Tom Gram)



0- Hold on, why do we measure again?

Maybe it's not a bad idea to first think about why we so eagerly want to measure learning impact. In my job role (business development) it doesn't come as a surprise that I need the figures to justify an investment decision. Learning measurements are used to decide whether or not to go for a particular training program, to select one training format over another, to continue the program or not, or to tweak it or not. Logically, any numbers used to justify the investment can after the fact serve to evaluate it, not only financially but also against the non-financial targets. Other reasons include getting the info to award credentials, or to track compliance. You can find a nice list based on a 2007 eLearning Guild report on page 5 of Will Thalheimer's article on Evaluating E-learning 2.0.

We also track and report to support the learners: telling them with facts how they are doing, suggesting remediation or follow-up, personalizing the experience, etc. We do the same to support other stakeholders in the educational or corporate world: we report to upper management or government on key indicators, and we spit out data like Net Satisfaction Scores on how well coaches and trainers are doing their job.

And we also measure because of religion. The religion is 'we can't manage what we can't measure'. In corporate circles, we want to know how anything in the company contributes to the bottom line, and learning is no exception to that.

1- Kirkpatrick and friends and enemies

1.1 - The dominant model

The evaluation model that dominates the corporate training world is Kirkpatrick's 4 levels of evaluation. The model is over 50 years old, very simple to understand and has, for good or for bad, stood the test of time. Its inventor Donald Kirkpatrick built a consultancy firm around the outcome of his 1954 PhD dissertation and has his last public speaking engagement scheduled in May. (Congratulations!) To understand the model, a simple drawing is sufficient, so I urge you to look it up on Google Image search. Over the years, people made some tweaks and added a layer or two. Some companies customized the terminology as well, but Kirkpatrick's model or one of its variations is commonplace and part of the common vocabulary spoken by learning professionals.

It all started with this: four levels of evaluation, from (1) Reaction and (2) Learning up to (3) Behavior and (4) Results.

Jack Phillips suggested adding a fifth level for ROI (oh, he has a consulting firm too).

And in an update of the original model we work backwards from Results as a planning tool and regard the third level, Behavior, as the essential one to work with.

Sometimes, tracking attendance figures is considered level 0.

1.2 - The issues with and critiques of the model

There are a few pragmatic issues with the model. The lower levels are pretty standard, easy and cheap to measure. A happy sheet and a knowledge-check quiz should suffice. But the higher we get, the more varied the measuring gets, as different training programs for different audiences have different behavior goals and end results in mind. For sales it is easy: ultimately it is about sales figures. For other audiences it might be more difficult to get to a tangible and reliable measurement. When I speak to learning professionals, they mostly tell me that the first two levels get measured, and the upper two levels (the ones that matter most for a business) are only occasionally touched upon and only for the most important programs like sales or leadership training. There are a couple of explanations for that:
  • It's more complex : Measuring behavior and business results is an order of magnitude more complex than tracking the operational numbers on satisfaction and knowledge retention. Very often learning serves multiple, indirect business goals. How do you realistically separate those? Who helps the training folks map the pathway to gold?
  • It's not owned properly: As long as the divide between the training department silo and the business silo exists, the ownership of the level 3/4 measurement poses a problem. As Jay puts it, the training department does not own the business-measurement yardstick to claim its impact, and the owning business doesn't feel like stepping in, or does not measure these things itself.
  • It costs money: I recall talking to a learning intelligence leader. He told me his business sponsors are always very interested in metrics that relate to value for the business, but none is ever willing to pay for the measurement to actually happen end to end. I'm amazed by all the corporate initiatives to cut down on operational spending for learning without first spending some time and effort figuring out the higher-level business impact.
Besides the operational difficulties in applying the model, there are some fundamental critiques of it as well. If you missed the latest discussions in the blogosphere initiated by Dan Pontefract's article, then this article by Tom Gram is a good summary. You are welcome to go back in time with this article by Clive Shepherd as well, or with the more outspoken Donald Clark.

In the 50 something years that followed the initial publication of the model, the learning profession came to realize that formal training events are just a piece of the puzzle. The model has never been updated for those new trends, technology advancements or insights. By today's standards, it leaves out important stuff. So for example, what about:
  • ... learning by (reflecting on) experience : doesn't that have any value?
  • ... connectivism and how learning flows through connections in your social network : do we ignore that?
  • ... social learning or informal learning or whatever you want to call it : not the training department's problem?
  • ... alignment with business and what ultimately matters : maybe we should have the courage to say learning doesn't HAVE any value, it GETS value through context and application.
  • ... the technical advancements in analytics and tracking that may have changed the measuring game beyond surveys to mining statistical facts.
  • ... what if goals are not prescribed by the experts, but suggested by peers or set by the competent professional, how do you go about supporting that?
  • ... what if the network age is too agile and volatile to think in terms of value chains : should and can we measure the ecosystem and culture where it all takes place, as a chaotic and complex adaptive system?
  • (add your own trends and insights here...)

1.3 - ROI, the Valhalla of learning evaluation

And then there is the measurement of measurements: Return on Investment. The idea is that all business functions need to quantify in terms of Return-on-Investment. I'm not convinced. For me, any proof is good if it is valid. But how valid is a ROI on a learning program when you know that ROI is essentially a number you get by dividing one number by another number and that:
  • Intangible: Learning inputs (costs) are usually very tangible and easy to convert into a number, while learning outputs are mainly intangible and hard to convert into a sensible number.
  • Indirect: Learning doesn't have direct value, only indirect. Learning GETS value in the context and by performing the valuable services we were trained for or conducting the valuable behavior we learned.
  • Multiple goals: We are contributing to multiple goals with learning, so how do you single those out or add them up in a valid way? A training intervention will have one or more learning goals in mind, like behavior change or better key performance indicators. But it might also serve retention, by giving your employees development opportunities. Going to conferences is as much about learning as it is a reward (why else do they hardly ever organize them in, let's say, Albania?). There are reputation and credibility outcomes as well.
  • Time scale: a ROI number is set within a certain time scale, preferably within the year or quarter. By its very nature learning has a longer-term return, and a ROI number doesn't always do that justice.
It's not that I'm against ROI per se. But with the above in mind, how valid is any ROI number on learning really? If it can be done in a way that makes sense, do it. But I won't blindly stare at ROI if there are other measurements that say more or the same in a simpler or cheaper to measure way.
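
To make that concrete, here is a minimal sketch of the arithmetic with entirely made-up numbers. The cost, benefit and attribution figures below are hypothetical assumptions, not data from any real program; the point is simply that the cost side is easy to pin down while the benefit side is a guess, and the resulting percentage swings wildly with that guess.

```python
# A minimal sketch of the ROI arithmetic discussed above.
# All figures are hypothetical and exist only to illustrate the point:
# the cost side is tangible, the benefit side is an estimate built on assumptions.

program_cost = 50_000        # tangible: vendor fees, travel, hours away from work
estimated_benefit = 80_000   # intangible: a best guess at the value of improved behavior
attribution = 0.4            # assumed share of that benefit actually caused by the training

net_benefit = estimated_benefit * attribution - program_cost
roi = net_benefit / program_cost

print(f"ROI: {roi:.0%}")     # -36% here; change the attribution guess to 0.8 and it becomes +28%
```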

The most sensible ROI numbers I see in our learning field come from comparisons between learning formats. The ROI of an investment in classroom training versus a virtual class training is ....  In comparison, those figures are useful for decision making because they make relative sense. But they hardly make absolute sense as a dollar value for the impact of learning. Maybe that makes it a false number.
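
As a hedged illustration of that relative use, the sketch below divides the same assumed benefit estimate by two different, hypothetical cost structures. Neither absolute ROI deserves much trust, but the gap between the two formats can still inform a format decision, which is all this paragraph claims.

```python
# A hedged sketch of the relative comparison mentioned above: the same (assumed)
# benefit estimate divided by two different cost structures. All numbers are made up.

assumed_benefit = 60_000          # the same uncertain benefit estimate for both formats

costs = {
    "classroom": 45_000,          # instructor, venue, travel
    "virtual class": 20_000,      # instructor and platform, no travel
}

for fmt, cost in costs.items():
    roi = (assumed_benefit - cost) / cost
    print(f"{fmt:>14}: ROI {roi:+.0%}")

# classroom: +33%, virtual class: +200% -- the gap is meaningful for a format decision
# even if neither absolute number deserves much trust.
```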

1.4 - In a nutshell

The dominant model in corporate learning measurement is the 4 level Kirkpatrick evaluation model, or a variation on it. Half of the model usually gets done, because we can; the upper two levels mostly remain on the to-do list, and that is actually the half of the model that really counts. Besides the pragmatic issues of time, money and ownership in applying the whole model, the model itself is a child of a time when we thought training was learning and learning was a formal event. Since then, learning professionals have recognized the other 80% of learning (informal, social). What if learning is also in the flow and process and connections, rather than in a piece of content or event? What about learning by reflecting on experience and by doing? What if our aim is to build and model the learning ecosystem (high impact culture, workscape, ...) rather than training programs?
