This was supposed to become the third and final part in my reflection on the impact of learning in the corporate sector. But I'm not going to make it today, so this is only Part IIIa - the model for the pragmatic. The model for the revolutionary will follow in IIIb. These last parts are all about convergence and suggest selected learning impact metrics for what really matters today, picked from the many, many items we could potentially measure.
- Part I - Kirkpatrick and friends and enemies: The current pragmatic 'state of the art' is to apply half of the Kirkpatrick model. Trends and insights from the 50 years that followed the initial model suggest we might rethink the evaluation of learning.
- Part II - Divergence: so much to potentially measure: In this post, we'll take a hike along various approaches and models for tracking or proving the impact of learning, with a short thought on each of them.
- Part IIIa - Convergence: a suggestion for the pragmatic: I'll sketch out two models based on the findings of Part II and the desire to do better than the current state of the art. One for the pragmatic that tweaks the dominant Kirkpatrick model,
- Part IIIb - Convergence: a suggestion for the revolutionary: and one for the revolutionary that throws it away and starts all over, working backwards, holistic and adaptive.
Let us kick off once more with a quote:
Performance is everything. Forget everything else. (Harold Geneen.)
3. Convergence: suggestions for the pragmatic and for the revolutionary
3.1. The end goal is clear and predetermined
In the previous part, we went over a lot of models. It turns out that the model you tend to use for measuring and proving the impact of corporate learning is for a large part embedded in the 'world view' you hold on learning. Learning can be viewed as an event, as a process, as a network flow, etc. My world view is that of an 'ecosystem': the environment/culture/workscape in which the self-reliant knowledge workers of the network age (the HoCos) thrive. On the one hand knowledge professionals are masters of their own competence, self-steering and self-reliant. On the other hand they need an ecosystem that ensures as much as possible that the right learning is available, as well as competent peers, candid and immediate feedback, access to experience, trends and directions, etc. An optimal ecosystem provides direction and vision, access to content but also to peers and experts, the chance to put things into practice, tolerance for trial and failure, etc. For me the learning of the future is making sure the ecosystem stays optimal rather than organising learning interventions per se.
If I were writing about the impact of learning in education or society, this 'world view' thing would become a major discussion item at this point, and probably take the entire impact conversation hostage. But these reflections are about learning impact in a corporate setting, and that setting makes the whole learning 'world view' a secondary matter at best. Corporations by their very nature have a predetermined ultimate goal: value (by which we mostly mean money). You may like that, you may contest that, you may dance on your head... it is a fact of life. So that is settled then: in the end learning, like all other business functions, goes for value. More than that, across the various models we discussed in Part II there is a clear consensus that this value is created or maintained through performance. Obviously not just any performance will create value, so we'll have to assume business strategy clearly defines the valuable performance a corporation goes after. The only thing left to do is get learning into the performance-leads-to-value picture, regardless of world view (and you know mine now).
3.2. Value : the ultimate aim
Let's pause here and reflect on the ultimate aim of any corporation: value. I'm deliberately not saying 'money' because I don't think that is entirely correct in the network age. Tangible value is not the only value worth caring about anymore. Fortune 500 companies have intangibles in their stock market value, some sources say even up to two thirds. In the network age, not only the money and assets of a company matter; its intangible assets such as brand image and reputation have become serious business too. Brands even get a dollar value estimated every year. This year, Apple leads the pack.
“Intangible capital doesn't appear on company balance sheets, but it accounts for one-third to one-half the stock-market value of the Fortune 500 -- i.e., what's left after subtracting buildings, machines, resource rights and other elements of tangible capital. One particularly important form of intangible capital is brand equity.” (grantmarketing)
- There is tangible and intangible value. Tangible is mostly about money. Intangible is mostly about reputation. There are other (in)tangibles too. The point is that intangibles do matter in the network age.
- For each we need to consider whether learning positively impacts that value by creating more of it, or by preventing it from going down. I think we can collectively put compliance training in the categories of cost avoidance (lawsuits!) and damage control (brand image). The value of sales training would typically be the incremental income generated with the training as compared to without it.
- And then there are the short term versus long term effects to consider. Usually learning has (also) long term effects.
3.3. Selection criteria for metrics
There is so much to potentially measure to steer learning and keep track of its impact. In the next sections, I'll be making a selection of only a few that stand out most (in my world view on learning). So what are the selection criteria for corporate learning metrics?
- Actionable : What is the point of storing survey answers, a stream of numbers and nice report graphs if you are not going to use them for action? For every metric, we should define potential actions up front.
- Not just for 'us' : learning analytics should not give insights just for the good of the learning professional and the training department. We have gotten very good at metrics ABOUT learners, but let us find the metrics that also give insights FOR learners. After all, they are the self-reliant knowledge workers of the network age.
- If you can get the 'real data' instead of surveys, you should. I'm amazed so many learning metrics are based solely on surveys. Surveys reveal what people think is true, which is not necessarily what is true. People can be inconsistent within surveys. There can be severe bias (e.g. in classroom evaluations, all the other so-called 'independent' survey questions get biased by the perception of how good the trainer was; except for the food, you can never do splendidly on that :-) ). And as Dr House puts it: everybody lies (not necessarily because we mean to). Why would people bash a conference if they want to go to another one, for example? They know it in marketing too: don't believe everything surveys tell you. As a professor of mine used to say: if you ask people in the supermarket whether it would be a good promotion to get an umbrella with a carton of milk, you'll get a 'yes'. That doesn't mean you should do it. Today, much more than ever, we have direct access to the hard facts. That means we no longer need to bother people with all those surveys.
- Pragmatically feasible over conceptually pure. There's a trade-off to make between being conceptually pure and pragmatically feasible. Kirkpatrick levels 3 and 4 might be conceptually accepted by most learning folks, but practical barriers such as measurement cost, fuzziness and lack of ownership stand in the way, and they just don't get done enough. We learned the hard way that if it costs too much, is too complex (so automate where possible!) or is not anyone's responsibility, it doesn't get done.
3.4. Extreme convergence : self-efficacy and KPI
When looking at all potential metrics, there are really only two kinds. We can narrow the areas down to 'operational metrics' and 'business metrics'. The operational metrics are usually owned by the training department and cover things like the Net Satisfaction Score (NSS) for training classes, number of hours of training per employee, pre and post quiz scores, etc. The business metrics are usually owned by the business lines and include performance statistics like average call time for help desk agents, sales figures for sales people, etc. While the business metrics are the ones linked closely to value, there is nothing wrong with a business function managing itself by operational metrics. On the contrary, I would be worried if no such system was in place. All business functions have operational metrics to track, and training should be no exception. But it should obviously not end there. Most learning metrics that do get done are operational.
For our extreme convergence exercise, I'm going to pick just one metric in each area. For the operational metric (how is the learning department doing?), the key metric in my humble opinion is self-efficacy. It is the belief a person has that he or she will be capable of performing. Self-efficacy is a logical measurement in the 'world view' of self-reliant, self-steering knowledge professionals who own their own competence development. Learning interventions should obviously contribute to raising self-efficacy on a certain competency above the threshold required to perform. Self-efficacy also signals to the business how many people consider themselves capable. I don't know any other way to assess self-efficacy than via a survey, and the following questions might do (see questions 2 and 3 in the final step on walkthe.net).
Obviously self-efficacy is not where it ends; the proof of the pudding is in valuable performance. So for the one and only metric on the business side, I'm opting for Key Performance Indicators (KPIs). I mean it in the literal sense of the words: indicators of performance that are identified as mattering for value and are thus 'key'. A lot of dashboards and measurement already happen around KPIs in companies, albeit that some of those do not qualify for the term KPI as I see it here. I'm talking about indicators that have a proven link to value, not operational statistics. Orienting people and their competencies towards KPIs aligns learning with the indicators that matter directly, and they are concrete for people to understand and work with. Linking directly to value creation or cost avoidance is far too general to steer behavior, whereas KPIs give us concrete guidance on performance that matters and leads to value. To get a feel for popular KPIs in today's business, have a look at the list on this human resources site.
The extreme convergence in self-efficacy and KPIs is embedded in the organisational structure of most corporations today, where you have 'the training department' and 'the business lines' that make use of its services. It is as if the training department says 'we'll make sure they learned the stuff, you make sure they get to apply it'. It's an artificial divide, and I'm not so sure it is still a good one. It is a model that worked so-so in stable times and the golden age of corporate training, but I don't see it working at all in the agile network age. One of the differences between the suggested model for the pragmatic and that for the revolutionary is that for the first one we keep the assumption of a divide between 'learning land' and the business line. But really, structuring our corporations this way makes us neglect the essential 'apply' by system design.
You can also view the self-efficacy as the short term measurement and the KPI as a longer term impact.
3.5. A suggested model for the pragmatic: 'Bert'patrick
So here is my suggestion for the pragmatic amongst us. By pragmatic I mean that we accept there is a divide between the operational side and the business side, and we also accept we're not going to get away from the Kirkpatrick model, as it is a well accepted and dominant model. So here is what we will do: we'll keep the levels (inclusive of level 0 for attendance and level 5 for ROI), but rename them and pick a key metric for each level that is more aligned with the world view of the 'ecosystem'. And just because I'm such an unbelievably funny guy, let's give it the working title 'Bertpatrick model'.
| Level | New name | Before | It is mostly about... |
| --- | --- | --- | --- |
| Level 5 | “We contributed” | ROI | Money and reputation |
| Level 4 | “We mattered” | Results | Key Performance Indicators |
| Level 3 | “We did it” | Application | Time-to-performance |
| Level 2 | “We can do it” | Learning | Self-efficacy |
| Level 1 | “We recommend it” | Reaction | Social recommendation |
| Level 0 | “We leave traces” | Attendance | Analytics |
Some notes:
- The focus is not the individual, but 'we', as in the project team or profession. Learning is not a lonely activity, and working certainly is not. Hardly anything gets done anymore just by yourself.
- It's not just for events, but stretches into all kinds of competence building activities.
- Levels 0-2 correspond with the 'operational' area from the previous section and are usually where the training department's responsibility and ownership stop. Levels 3-5 are the business side. The most important metrics remain self-efficacy and KPIs.
3.5.1. Level 5 : We contributed - Value
The ultimate aim is value. As explained before, that value is tangible or intangible, and we can contribute to value creation and/or cost avoidance in the short or long term. Earlier in this article we picked the top two valuables for corporations today: money (assets) and reputation (brand). To ensure maximum value, KPIs should be selected that maximize impact and steer value creation or cost avoidance. We can either do that by selecting the Key Performance Indicators that research and consultancy firms have come up with for our particular industry (but that will not make us stand out), or we can go a step further and mine for those unique KPIs that match our environment and customer base. As an example, I blogged before on how Google used mining techniques to come up with a list of differentiating leadership behaviors for its own corporate ecosystem.
It is crucial that the business makes the right strategic selection of KPIs, as that is what the whole system will work towards, and it is crucial that these KPIs adapt with the agile business environment to remain 'key' and true drivers of value.
Actions
Sometimes going directly to value isn't that hard, but mostly it will go through the KPIs. Here are some obvious cases:
- Substituting learning activities that work (self-efficacy and KPI contribution) with cheaper ones that attain the same levels or even better goes into the 'cost avoidance' bucket and is fairly easy to calculate. It's even easy and meaningful to do the old ROI on this (see the sketch after this list).
- The value of certificates and compliance that would otherwise prevent people from accessing performance (you need a certificate before you are allowed to do the work) is the difference between the money you get in when someone can perform versus what they cost you when they cannot.
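As a minimal sketch of that first bullet, with purely hypothetical numbers (the learner counts, per-learner costs and one-off investment are all made up for illustration):

```python
# Hypothetical numbers: cost avoidance and a classic ROI when a classroom
# course is replaced by a cheaper alternative that reaches the same
# self-efficacy and KPI contribution.
learners_per_year = 200
cost_old_per_learner = 1200.0   # classroom: trainer, travel, rooms
cost_new_per_learner = 300.0    # e.g. a curated online programme

cost_avoidance = learners_per_year * (cost_old_per_learner - cost_new_per_learner)

# The 'old' ROI = (benefit - cost) / cost, here with cost avoidance as the benefit
investment = 15000.0            # one-off cost to build the new programme
roi = (cost_avoidance - investment) / investment

print(f"Yearly cost avoidance: {cost_avoidance:,.0f}")
print(f"First-year ROI: {roi:.0%}")
```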
What's in it for the employee
There are not many direct benefits for the employee here, as value is far too vague and general a notion to work towards. KPIs are a better guide.
For the organisation
For the organisation obviously, the end game is value. We've covered that above.
3.5.2. Level 4 : We mattered - Key Performance Indicators
What and how
As stated above, KPIs or Key Performance Indicators are the most important metrics on the business side of the model. KPIs are already what people get measured on. KPIs are clear and specific enough to guide people to what really matters in their daily work, whereas general terms like 'generate value' are way too vague for steering behavior and required learning. KPIs are also where the proof of the learning lies: in the performance. To get back to the opening quote of this article: it is all about performance.
The beauty of working towards KPIs is that they are already measured by the business line; they are not new measurements and are already what counts and what gets reported on the dashboards. That makes it easy and logical to link learning and other competence building activities to the KPIs, and by doing so bring them into the 'normal' business metrics.
Techniques to use here are statistical tools that single out the relative contribution of learning activities to the KPIs. You can take a sample of people who did the activity versus people who did not and see if that made any statistical difference. An example quoted in Part II of this article is the chart from the IBM Sales School: 'people who followed the program vs people who didn't'. I suggest every formal learning activity also lists the KPIs it is designed to contribute to in its analysis phase, and in its description in the LMS, catalogue or tags. This way learning 'subscribes' to certain KPIs and you can embed the real life data on those KPIs directly into the learning program. Nowadays most role and skill descriptions in talent management systems have behavioral characteristics linked to them, so it is a small step to add specific KPIs to them and link your formal learning to those goals.
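As a minimal sketch of the 'did versus did not' comparison, with made-up KPI numbers and Welch's t-test standing in for whatever statistical tool you prefer:

```python
# Compare a KPI between people who followed a programme and people who did not.
# The KPI values below are hypothetical; in practice they would come from the
# business line's own dashboards or data warehouse.
from scipy import stats

kpi_trained = [14.2, 15.1, 13.8, 16.0, 15.4, 14.9, 15.7]    # e.g. weekly sales
kpi_untrained = [13.1, 12.8, 14.0, 13.5, 12.9, 13.7, 13.3]

t_stat, p_value = stats.ttest_ind(kpi_trained, kpi_untrained, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("The KPI difference between the two groups is statistically significant.")
else:
    print("No statistically significant KPI difference detected.")
```

In practice you would also want to correct for selection effects (the people who follow a programme may differ from those who don't to begin with), but the principle stays the same.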
Actions
- Single out the stuff that contributes to KPIs, and recommend that to those who didn't.
- A decreasing overall KPI will trigger the right learning interactions, as they are 'subscribed' to that KPI (a minimal sketch of this mechanism follows this list).
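A hypothetical sketch of that 'subscription' idea; the KPI names, targets and activity titles are invented for illustration:

```python
# Learning activities declare which KPI they contribute to; a KPI that misses
# its target automatically surfaces the subscribed activities.
subscriptions = {
    "average_call_time": ["Call handling refresher", "New ticketing tool walkthrough"],
    "net_new_sales": ["Sales School module 3", "Negotiation practice lab"],
}

kpi_targets = {
    "average_call_time": (6.0, "max"),   # minutes; lower is better
    "net_new_sales": (50, "min"),        # deals per quarter; higher is better
}
kpi_actuals = {"average_call_time": 7.4, "net_new_sales": 55}

def triggered_learning(targets, actuals, subs):
    """Return the subscribed learning activities for every KPI missing its target."""
    recommendations = {}
    for kpi, (threshold, kind) in targets.items():
        actual = actuals[kpi]
        missed = actual > threshold if kind == "max" else actual < threshold
        if missed:
            recommendations[kpi] = subs.get(kpi, [])
    return recommendations

print(triggered_learning(kpi_targets, kpi_actuals, subscriptions))
# -> {'average_call_time': ['Call handling refresher', 'New ticketing tool walkthrough']}
```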
For the employee
- It makes very clear what to work towards and what you can do to 'subscribe' to a certain KPI. Obviously you want the KPI data to keep track for yourself over time and see whether you need to keep up, as competencies and your ability to perform do fluctuate over time.
For the organisation
- Linking learning with KPIs provides a direct link to what matters for the business. It gets learning into the normal business dashboards and discussions, where it belongs.
3.5.3. Level 3 : We did it - Time2Performance
What and how
In agile business environments, time to application is a critical measurement, and the single most important one in the 'application' level of this framework. Some people measure 'time-2-competency' aka 'time-2-proficiency', but that isn't really cutting it. Time-2-competency tells us the time we need to get people ready and is the comfortable measurement inside the responsibilities of the training service. As the article 'the other side of learning' suggests: "But on the whole, we have been negligent in addressing the most critical moment in any person’s individual learning process – their moment of Apply." So who cares about just being ready? What counts is the time it takes to get someone to actually perform adequately and independently for the first time. That is the time-2-performance, and a responsibility placed equally in the basket of the business line. What good is it for people to get prepped in x weeks, all eager to go out into the field, only to be left in a frustrating waiting state for the x more weeks it takes the business to assign a spot for them? Making it a shared measurement hopefully forces the two to align better.
Time-2-performance is easy to measure: it is the time between when you start the competence development journey and when you first perform in a satisfactory way without steering or supervision.
Time-2-performance lends itself best to the type of training that gets you ready to perform, but in an agile environment there is always new stuff to bring into your work, so the same metric matters for the new insights and latest developments of any competence domain.
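A minimal sketch of that measurement, with hypothetical dates and an invented sign-off moment standing in for 'first satisfactory, unsupervised performance':

```python
# Time-2-performance: days between the start of the competence development
# journey and the first satisfactory, unsupervised performance.
from datetime import date

journey_start = date(2013, 9, 2)                      # first day of the learning journey
first_independent_performance = date(2013, 11, 14)    # signed off by the business line

time_to_performance = (first_independent_performance - journey_start).days
print(f"Time-2-performance: {time_to_performance} days")
```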
Actions
- When time-2-performance is not as anticipated, get into the details and fix it by aligning business forecasts and open seats with learning better, by selecting and personalising the learning journey better, ....
- Time-2-Performance also directly links into value : how many days did we gain in getting people ready?
For the employee
- Focus on time-2-performance instead of time-2-competency enhances your chances of getting to apply before you've forgotten. (see forgetting curve)
For the organisation
- Time-2-performance is a strategic differentiator vis-a-vis your competition: can you get people up to performance faster? It also aligns 'the training' and 'the business' more closely.
3.5.4. Level 2 : We can do it - Self-Efficacy
What and how
This level and the lower ones are traditionally 'owned' by the training service. As indicated above, the single most important indicator here is the self-efficacy rating. Do people deem themselves capable of performing? Are they competent and confident? Self-efficacy is traditionally measured by asking people the two questions shown above. You can expand on that and involve other people's assessment of someone's efficacy, such as peers, hierarchy or experts. Let's call that 'peer-efficacy'. What we want to track is how certain learning or other activities influence the self-efficacy / peer-efficacy scores.
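As a minimal sketch of that tracking, with invented activity names and made-up survey scores on a 1-10 confidence scale:

```python
# Average self-efficacy lift per learning activity, from the same survey
# question asked before and after the activity.
from statistics import mean

responses = {
    "Negotiation practice lab": {"before": [4, 5, 3, 6, 5], "after": [7, 8, 6, 8, 7]},
    "Compliance e-learning":    {"before": [6, 7, 6, 5, 6], "after": [6, 7, 7, 6, 6]},
}

for activity, scores in responses.items():
    lift = mean(scores["after"]) - mean(scores["before"])
    print(f"{activity}: average self-efficacy lift of {lift:+.1f} points")
```

The activities with the biggest lift are the ones to spread out, which is exactly the first action below.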
Actions
- Track how learning and other activities influence self-efficacy, and use that information to detect the best ones to spread out.
- Use the individual self-efficacy scores to match people with projects or open jobs.
- Use the aggregated self-efficacy measurements for workforce management and identify gaps or untapped potential.
- Keep track of how trustworthy an individual's self-assessment is over time, and correct for that. You can combine the analytics of the 0 level with the self-assessment to check if that makes sense.
For the employee
You get to signal how confident you are you can do it. Ultimately, all learning is there so you are able to do what you're good at. It also empowers you a great deal, as you are the assessor of your own learning.
For the organisation
First of all, self-efficacy scores give us an indication per individual of who feels ready to perform and who doesn't. It is a signal people send out, 'I'm ready', that you can pick up on when it comes to assigning work. It also allows for steering people towards activities that will augment their self-efficacy. And it gives a workforce-wide view on capability for specific skills and competencies, which allows workforce management to make sure you have the necessary levels of ready skills across your organisation.
3.5.5. Level 1 : We recommend it - Learning goes viral
What and how
The one question it all boils down to in marketing is 'would you recommend this to others?'. Likewise, we can simplify the whole satisfaction area for learning interactions from a 57-smiley-sheet questionnaire to 'would you recommend this?'. There are already learning organisations out there that track the 'Net Promoter Score' on their learning programs. This updated version of satisfaction is actually close to the famous 'I Like' Facebook button.
The beauty of the 'I Like' button is that it is not really just about liking. The button actually says 'I recommend', because the simple fact of clicking it gets signalled throughout your followers, peers and general network. It also feeds the algorithms that make automated recommendations for 'serendipitous' learning. You could actually go one step further and ask people WHO they would recommend this to, and notify those people. It makes the learning viral, and probably more in line with needs, because who better than your peers know what you are up to or would need? The technology is there, and this whole level can be automated.
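For the Net Promoter Score mentioned above, here is a minimal sketch of the calculation; the ratings are hypothetical answers to the single 0-10 'would you recommend this?' question:

```python
# NPS = % promoters (scores 9-10) minus % detractors (scores 0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 9, 4, 10, 6, 9, 8]   # answers for one learning programme
print(f"NPS: {net_promoter_score(ratings):+.0f}")
```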
Actions
- Let recommendations flow, and perhaps keep a list of recommendations and provide reminders on them.
- Compare the number of recommendations with your expected baseline and tweak, close down or market the activity accordingly. Make a 'top' list so that the most recommended ones spread even further.
- If recommendations (and therefore also satisfaction, because who would recommend something they are not satisfied with?) are lacking, send in a 'squad' team to investigate the root cause of the low traction. That is the time to phone people up and ask them about the underlying elements: 'was the food not good?', 'was it not relevant?', 'were the colors OK?', etc. There is no need to bother people with detailed satisfaction/recommendation questions on every aspect unless a flag is raised and this 'squad team' investigation is carried out through after-the-fact sampling.
For the employee
- You get to share, in the most simple and unintrusive way, what you recommend and keep track of it for yourself, but also for your larger network. Recommending useful stuff to your peers and followers is a sharing activity that shows you think of them and care about their work. Likewise, you benefit from what others recommend to you. Recommendations from trusted sources mean so much more than the mandatory training list from the training department, or the curriculum created for the average learner.
For the organisation
- The viral flow of recommendations is much more powerful in directing competence building activities to people with a need or interest, at the right time, adapted to their preferences and prior skills. People are more likely to go to sources recommended by trusted peers than to the fixed curriculum or catalogue list of the LMS. When done right, it can also save the organisation a lot of the money that now goes into assuring people get the right learning at the right time, etc.
3.5.6. Level 0 : We leave traces - Analytics
What and how
Everything we do or don't do leaves digital traces these days. So today's equivalent of 'attendance' becomes analytics on all these digital breadcrumbs we leave behind us. These traces include for example the eyeballs on web pages visited and links clicked within those pages; the tags, ratings and recommendations we make; the frequency and duration of our communications with our contacts; the terms we search for; the events we enrol in; the content we create and who uses that; and so much more.
Luckily, there are excellent tools out there for web analytics. A popular free product is Google Analytics; a product we use at our company is Unica. There are increasingly tools that trace and map out social networks too, and it won't be long before they get included inside social networks. (Below an image from LinkedInLabs.com.)
Actions
The actions identified from analytics include:
- Based on the popularity of items: keep them, promote them, tweak them or stop using them (e.g. for a quick win: are there any learning sources you pay for that are hardly used, or vice versa?). You can set thresholds that automatically trigger alerts and draw your attention (a minimal sketch follows this list).
- Buzz : Detect what is hot and what is not, and adapt to that and spread that out. Anything to piggyback on or any gaps? For this buzz detection, also unleash analytics tools on the comments, tweets and blog posts concerned. Tap into the flow of stories people tell on the subject.
- Dive in and check your assumptions against the facts. For example, are we reaching the audience we had in mind? What are the characteristics and behaviors of our learners? Through which sources do people find us, and how long do they stay?
- Do research and mining on the data for 'surprises' and adapt to those. If you have a large, interesting data set I can even imagine researchers doing that for you. As an example, the graph below from mining the digital traces of a massive online game shows solo versus social players and gives interesting insights to act upon.
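And a hypothetical sketch of the threshold alert from the first bullet; the source names, costs, user counts and alert level are all invented:

```python
# Flag paid learning sources whose usage drops below an alert threshold.
paid_sources = {
    "Video library subscription": {"yearly_cost": 40000, "active_users": 35},
    "Industry research portal":   {"yearly_cost": 12000, "active_users": 800},
}

MIN_ACTIVE_USERS = 100   # alert threshold, to be tuned per source

for name, data in paid_sources.items():
    if data["active_users"] < MIN_ACTIVE_USERS:
        cost_per_user = data["yearly_cost"] / max(data["active_users"], 1)
        print(f"ALERT: '{name}' has only {data['active_users']} active users "
              f"(about {cost_per_user:,.0f} per user); review or stop paying for it.")
```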
For the employee
The competent knowledge worker gets an accurate view of the 'quantified self', which is like a mirror of how he or she goes about learning, working and growing. But there's more: when allowed, you also get to see the digital traces of the peers and experts that inspire you, and follow in their traces.
For the organisation
The organisation gets facts on individual and crowd digital behavior to base actions on, and to detect trends and adapt accordingly.
In the next and final part of this article, we'll get revolutionary and start from scratch, ignoring the dominance of Kirkpatrick and training organisations and rituals.