- Part I - Kirkpatrick and friends and enemies: The current pragmatic 'state of the art' is to apply half of the Kirkpatrick model. Trends and insights from the 50 years that followed the initial model suggest we might rethink the evaluation of learning.
- Part II - Divergence: so much to potentially measure: In this post, we'll take a hike along various approaches and models for tracking or proving the impact of learning, with a short thought on each of them.
- Part IIIa - Convergence: a suggestion for the pragmatic and one for the revolutionary: I'll sketch out two models based on the findings in Part II and the desire to do better than the current state of the art. One for the pragmatic that tweaks the dominant Kirkpatrick model,
- Part IIIb - and one for the revolutionary that throws it away and starts all over: working backwards, holistic and adaptive.
Let us kick off with a quote (warning: this quote is like swearing in church for some people):
Learning doesn't HAVE value. Learning GETS value. It gets value through performance and behavior, and always within context. (Source: You should say this.)

2. Divergence: so much to potentially measure
Today we'll pick some cherries. We will go over various models, suggestions, frameworks and the like, because if you add it all up, there is a LOT we could potentially measure, in a LOT of ways and through a LOT of lenses. For each model I'll share some brief thoughts, and I hope you will add your 2 cents as well.
2.1 Kirkpatrick and add-ons
Last time, we already went over the mother of all models: the Kirkpatrick 4 levels of evaluation. In essence, this model takes an easy-to-understand (but difficult to fully implement) 4-step 'chain-of-evidence' from satisfaction over learning over transfer into results. It is also a model that dates from the time we thought learning was a formal event. But as Jan reminded me over lunch a few weeks ago: you cannot not learn. What if learning is continuous? What if it is also informal, in the flow of work, embedded in work experience, changing too fast to design, unpredictable, etc.? Let's expand our minds and go cherry-picking models...
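To make that chain concrete, here is a minimal sketch (in Python) of the four levels as an ordered structure. The evidence types are my own illustrative labels, not official Kirkpatrick wording:

```python
# A minimal sketch of the 4-level chain-of-evidence as an ordered structure.
# The evidence examples are illustrative labels, not official Kirkpatrick wording.
from collections import OrderedDict

kirkpatrick = OrderedDict([
    ("1. Reaction", "satisfaction surveys ('smile sheets')"),
    ("2. Learning", "tests, demonstrated knowledge and skills"),
    ("3. Behavior", "observed transfer into on-the-job behavior"),
    ("4. Results",  "business outcomes the changed behavior contributes to"),
])

for level, evidence in kirkpatrick.items():
    print(f"{level}: typically evidenced by {evidence}")
```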
My 2 cents:
- We can and should do better than this model, both in terms of actually applying it end-to-end, and in terms of accommodating the new insights we have on learning.
- Education usually stops at the diploma/certificate as its final deliverable. That is the equivalent of stopping at level 2, and it is so by design of the academic system. By coincidence, that is also where the 'corporate university' (an in-corporation clone of the academic model) usually stops... To me, that does not make sense in a corporate context. Certificates do not have value. They are at best an intermediate indicator.
Your 2 cents:
- Stop for a minute to reflect ...
2.2. Learning Landscape (Will Thalheimer)
I for one found this a very interesting model, and I suggest you have a look at it even outside the scope of learning impact. Will Thalheimer's model is well explained in this video. Below you will find one particular still from that video, in which he suggests all the potential measuring points in his model. If I'm counting right, that is 16 points of measurement covering the 7 areas of his landscape.
My 2 cents:
- The model itself goes from training intervention over performance into outcomes. The outcomes are split between individual and organisational ones. It's a pattern that we'll see again.
- There is a lot we could potentially measure, more than the ol' Kirky suggests. But should we measure it all? What is the criterion for deciding whether or not to put our time and money into a specific measurement?
Your 2 cents:
- Add your thoughts here. What strikes you in this model? ...
2.3. Learning Effectiveness Model (IBM)
At IBM we use a method called the Learning Effectiveness Model (LEM). It stretches out over the life cycle of a learning program and starts by making a predictive assessment based on the business drivers.
As part of the predictive measurement, a causal chain is mapped out for the program. It lays out the causal links from knowledge/skills over behavior over individual performance over organisational performance onto performance indicators and finally the business results aimed for.
This is a sample of a particular causal chain for sales training programs.
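Since the sample itself is not reproduced here, a rough code sketch of what such a chain could look like follows. The sales-flavoured examples are my own hypothetical illustrations, not IBM's actual LEM mapping:

```python
# A rough sketch of the causal chain described above; the sales-flavoured examples
# are hypothetical illustrations, not IBM's actual LEM mapping.
causal_chain = [
    ("knowledge/skills",           "e.g. consultative selling techniques"),
    ("behavior",                   "e.g. needs-analysis questions asked in client meetings"),
    ("individual performance",     "e.g. opportunities progressed per quarter"),
    ("organisational performance", "e.g. win rate of the sales unit"),
    ("performance indicators",     "e.g. pipeline conversion ratio"),
    ("business results",           "e.g. the revenue growth the program aims for"),
]

for stage, example in causal_chain:
    print(f"{stage:<28} {example}")
```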
And one of the programs that is measured and evaluated according to the LEM model is our Global Sales School. I'm showing you a slide as it comes from a presentation Mike gave at an earlier edition of Online Educa. The slide shows the improvement over baseline for salespeople who went through the program versus those who did not.
My 2 cents:
- I find this a good example of evaluating a total program (instead of a single intervention) working towards what really matters. Of course, for sales training that is an easy one: it is all about sales figures, and those are already carefully tracked and fully quantified. It's not always as clear-cut with the business outcomes of other training programs.
- Obviously, this entire approach is embedded in a world view of cause and effect. The causal chain is, however, mapped out for a specific program's audience and targets. My question is how stable these causal chains are in the agile and unpredictable network age. Can you realistically always map them out and assume they will keep holding? Do they need frequent updating or continuous validation?
- The stages of the chain go from individual up to organisational performance. Does it really make sense to take individual performance into account these days? I know it is a deeply embedded world view in the western part of the world to track individual performance, if only for the performance review and bonus. But when you really think about it, close to nothing gets accomplished by an individual anymore. Why would a corporation care about anything but the performance of the project team? You can always go down from the team performance metrics and point out the low or high performers within the group for bonus reasons, but the focal point for measurement should move up to the team/organisational level. That is what counts, and it is NOT the sum of the individual contributions.
- I like the statistical approach with the 'control group' that did not get the training. This really points out exactly what the value of the program is. But of course a corporation is not a lab environment where you can just administer placebo training and real training to make the statistical validation, while controlling all other variables as the scientific approach prescribes. Compliance training, for example (let's not go into the discussion of whether it has actual learning goals), is not something you give to some and not to others. But in the cases where it is possible and makes sense, such a comparison between those who did vs those who didn't is a strong piece of evidence. A minimal sketch of such a comparison follows below.
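Here is that minimal sketch of a trained-vs-untrained comparison, with made-up numbers and none of the real-world controls a proper study would need:

```python
# A minimal sketch, with made-up numbers, of the trained-vs-untrained comparison.
# A real analysis would use actual sales figures and control for other variables.
from scipy import stats

trained   = [112, 130, 125, 118, 140, 122, 135]   # e.g. sales index per rep after the program
untrained = [101, 108,  99, 115, 104, 110, 106]   # comparable reps who did not attend

t_stat, p_value = stats.ttest_ind(trained, untrained)
uplift = sum(trained) / len(trained) - sum(untrained) / len(untrained)
print(f"Average uplift: {uplift:.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")
```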
Your 2 cents:
- Kind invitation to add your own views here.
2.4. High Impact Culture (Bersin)
Bersin is an independent consulting firm researching the practices, processes, structures, and systems that drive the greatest business impact. They came to the conclusion that high-performing learning cultures make high-impact organizations. In 2010 they published their findings and modelled the high-impact learning culture around 7 culture elements and 40 best practices. You can read the PDF summary here for free.
They also trademarked the Bersin High Impact Learning Framework below. Just in the blink of an eye you spot terms such as 'KPI', 5 measurement areas including alignment and adoption, and the split between individual and organizational performance.
My 2 cents:
- I'm going to repeat a thought of Hans here when he saw this framework and how a popular metrics solution (Metrics That Matter) implements it : "The thing that I thought was interesting was how the maturity of your measurement strategy is basically a function of how much your learning organization has moved towards performance consulting. How can you measure business impact if your planning and gap analysis isn’t close to the business?".
- Speaking of Metrics That Matter, that system and most others rely on surveys. Surveys tell us what people think or want to say the answer is. But if you can look at hard data instead of opinions, isn't that a better way?
- What I like in Bersin's approach is how they work with a holistic 'culture' and then break that down into proven differentiating elements and practices. The previous models all built up from the interventions towards organisational benefits, but this approach starts from the performance ecosystem, as I called it, or workscapes, as the Internet Time Alliance calls it. It's not about the training programs, it is about getting the ecosystem right and making sure learning will happen when, where and by whom needed, to get the work done.
- If you use research similar to Bersin's, you are going for common differentiators. Will that let you stand out from the crowd? How would you redo such a study for your own context, or is your context not that different?
- The same thought I had on the previous model stands here: there is an overemphasis on individual performance, while the only performance that counts is that of the team/organisation.
Your 2 cents:
- Have another good look at the models, as there are many words on them. Anything that strikes a note?
2.5. Complex Adaptive System (via Brandon Hall)

Let's move to another authoritative research voice in the learning community. Gary Woodhill from Brandon Hall wrote a report last February called "Understanding Learning Analytics" and a report called "The Impact of Training on Participation, Performance, and Productivity". You can read the first page for free here. He does a fine job reminding us of the fundamentals of measurement, the Kirkpatrick model, ROI, etc. He also refreshes basic statistical understanding. But the most striking part for me was his suggestion, somewhere in the 50 pages, to regard learning and development as a complex adaptive system. I like the thought, but the report stops short of actually bringing any practical hints to the table on how to do that.
From the report and the book Simply Complexity: A Clear Guide to Complexity Theory by Neil Johnson, the outlines of a complex adaptive system are:
- The system contains a collection of many interacting objects or "agents".
- The behavior of these objects is affected by memory or "feedback".
- The objects can adapt their strategies according to their history.
- The system is typically "open".
- The system appears to be "alive".
- The system exhibits emergent phenomena that are generally surprising and may be extreme.
- The emergent phenomena typically arise in the absence of any sort of "invisible hand" or central controller.
- The system shows a complicated mix of ordered and disordered behavior.
One section of the report is entitled 'what you measure depends on your theory of learning' and goes into typical 'world views' on training such as the behavioral view (performance!), cognitive view (brain functions), constructivist view (personal connections).
My 2 cents:
- I like the suggestion to look upon learning as a complex adaptive system rather than a cause/effect chain. But as long as we don't have practical ways to track impact in such a paradigm, this thought will remain in the category of 'seems right'.
- I do need to brush up my statistical skills, for the future of learning measurement in ecosystems or complex adaptive systems will lie much more in isolating KPIs and finding statistical correlations between competence-building activities and business results than in calculating the average of a satisfaction survey. A toy sketch of such a correlation follows after this list.
- The impact we expect, the questions we ask and the metrics we use to answer them will indeed depend on a 'world view' on learning. Event? Process? Flow? Ecosystem (e.g. a complex adaptive system)? Network? The good thing is that those world views mostly affect the operational side, while in a business the ultimate metric is very clear (hint: you can buy stuff with it).
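Here is the promised toy sketch: correlating a hypothetical competence-building activity with a hypothetical business result, rather than averaging a satisfaction survey:

```python
# A toy sketch with made-up numbers: correlating a competence-building activity
# with a business result, instead of averaging a satisfaction survey.
import numpy as np

community_contributions = np.array([2, 5, 1, 8, 4, 7, 3, 6])   # per employee, per quarter
deals_closed            = np.array([1, 4, 1, 6, 3, 5, 2, 4])

r = np.corrcoef(community_contributions, deals_closed)[0, 1]
print(f"Pearson correlation: {r:.2f} (and correlation is of course not causation)")
```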
Your 2 cents:
- You probably need to buy the full report to have your full 2 cents, but give it a go anyway...
2.6. Six disciplines of breakthrough learning (Fort Hill)
I've been informed/spammed over the last few months about ASTD's Learning Transfer Conference. The concept of transfer has always held a big place in learning land, as it is considered critical that learning transfers into a lasting ability to apply it. This particular conference is actually based on the book 'Six Disciplines of Breakthrough Learning'. You can read a very interesting book review (of the first edition) on the blog of Will Thalheimer, where Will lists the 20 recommendations that strike him most. In a nutshell, the book is about 6 D's. And since the underlying thought is transfer, a lot of focus goes into the afterlife of a training event and making sure transfer takes place, by designing an end-to-end experience targeted at action and aligned with business terms. Who can be against that? They see learning as a PROCESS, not as an event.
1. Define Outcomes in Business Terms
2. Design the Complete Experience
3. Deliver for Application
4. Drive Follow-Through
5. Deploy Active Support
6. Document Results
The authors of the book started the company Fort Hill, which sells the ResultsEngine tool. Janet Clarey's blog post gives a bit more insight into the workings of the tool: "It’s actually reminder-driven. It sends updates to participants, reminding learners of objectives, prompting reflection and action, and providing specific content. The system is designed to keep learning top of mind so follow-through can occur."
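I have never seen the inside of ResultsEngine, so the sketch below is merely an illustration of the general reminder-driven idea, with invented names and a two-week interval pulled out of thin air:

```python
# Not the actual ResultsEngine, just an illustration of a reminder-driven loop:
# periodically nudge participants about their stated objectives and prompt an update.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FollowUp:
    participant: str
    objective: str
    next_reminder: date

def due_nudges(items, today):
    """Yield the reminders that should go out today and reschedule them."""
    for item in items:
        if item.next_reminder <= today:
            yield f"{item.participant}, what progress did you make on '{item.objective}'?"
            item.next_reminder = today + timedelta(days=14)

plan = [FollowUp("An", "apply the coaching model in my 1:1s", date.today())]
for nudge in due_nudges(plan, date.today()):
    print(nudge)
```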
My 2 cents:
- Learning and development that is not transferred into a valuable afterlife is at best fun and interesting, but without business value. The training afterlife is not so much about the transfer into your competencies as about the ultimate transfer into valuable performance or valuable behavior. You read that right: not just any performance or behavior will do, but the kind people appreciate with money.
- I like the dashboard approach, and how that shows action.
- I also like the reminder approach, as development is indeed a continuous process. I have never used this particular system, but I can only hope that I can set these reminders myself. Otherwise, it's just employer spamming... I want to have a say in how reminders are set, as it is my learning.
- The model still focuses heavily on content and formal approaches.
Your 2 cents:
- Five cents is fine too.
2.7. Cognos Talent Management

Cognos is an analytics and business intelligence middleware platform. One of its products is the Cognos Workforce Performance tool, targeted at getting to metrics that answer talent management questions. The software provides metrics and dashboards around specific talent areas, and lists the typical questions in each area and the metrics to answer those questions. Below are the questions and metrics for the 'development' area in the talent management suite.
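The actual table is not reproduced here, so as an illustration only, a question-to-metric mapping in the spirit of such a 'development' dashboard might look like this. The questions and metrics are hypothetical, not Cognos definitions:

```python
# Hypothetical questions and metrics in the spirit of such a 'development' dashboard;
# these are not the actual Cognos questions or metric definitions.
development_questions = {
    "Are we building the skills the business plan needs?":
        ["skill gap per critical job role", "training hours per capability area"],
    "Is development spend going to the right populations?":
        ["cost per learner by business unit", "participation rate of key talent"],
    "Does development correlate with retention?":
        ["attrition rate of participants vs non-participants"],
}

for question, metrics in development_questions.items():
    print(question)
    for metric in metrics:
        print(f"  -> {metric}")
```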
My 2 cents:
- A business intelligence suite is not only about learning but integrates the different talent management areas (selection, performance, workforce, rewards, succession, etc.) and other business areas (if you buy the other packages in the suite). Indeed, looking at learning on its own is so last century...
- I like the approach of first asking the questions, and then looking at the metrics to answer those. It makes the metrics matter and action oriented.
- When I look at the questions and metrics typically in the 'learning' part of these suites, I get the feeling they are all operational metrics, and don't stretch enough into real business impact.
Your 2 cents:
- You think that...
2.8. Success Case Method (Brinkerhoff)
Tom Gram suggests using the Success Case Method over Kirkpatrick. It is a simple and fast method developed by Robert Brinkerhoff and goes like this (as taken from Tom's blog):
- Step 1. Identify targeted business goals and impact expectations
- Step 2. Survey a large representative sample of all participants in a program to identify high impact and low impact cases
- Step 3. Analyze the survey data to identify a small group of successful participants and a small group of unsuccessful participants (see the sketch after this list)
- Step 4. Conduct in-depth interviews with the two selected groups to document the nature and business value of their application of learning, and to identify the performance factors that supported application and the obstacles that prevented it.
- Step 5. Document and disseminate the story, report impact, applaud successes, use data to educate managers and organization
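As announced in step 3, here is a minimal sketch of the sampling idea behind steps 2 and 3, with made-up survey scores:

```python
# A minimal sketch of steps 2-3 with made-up survey data: rank participants by
# self-reported impact and pick small high- and low-impact groups to interview.
survey = {"An": 5, "Bert": 1, "Chris": 4, "Dana": 2, "Els": 5, "Farid": 1, "Gert": 3}

ranked = sorted(survey, key=survey.get, reverse=True)
k = 2                                  # size of each interview group
success_cases = ranked[:k]             # interview for documented business value
non_success_cases = ranked[-k:]        # interview for obstacles to application

print("Interview for success stories:", success_cases)
print("Interview for obstacles:", non_success_cases)
```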
My 2 cents:
- I like the simple and fast nature of the method, as we know that simple-to-apply models have more chance of actually getting used. In the end, performance or 'getting the job done' applies to measurement and evaluation as well.
- It reminds me of the charts of the IBM Global Sales School above: did a particular something (like a learning program, or membership in a community, or the fact you blog, or anything - this method is not limited to formal events or even to learning alone) add to performance or not? You indeed do not need to track the entire population for that. Small samples of success vs failure can do that job.
- This model also goes for qualitative evidence gathering rather than hard figures. Any type of evidence counts, as long as it relates to performance. It does not have to be an ROI figure.
Your 2 cents:
- Would you consider using this model? What's the good and bad?
2.9. The Big Question (blogosphere)
How do you assess whether your informal learning, social learning, continuous learning, performance support initiatives have the desired impact or achieve the desired results?
The Big Question edition of March 2011 on assessing informal learning generated many interesting blog posts and comments. I encourage you to skim them all.
One particular post in this series, from Harold Jarche and Jay Cross, points out the time-spaced nature of assessment (wait a while: learning's impact is reinforced over time as it is reflected upon and applied), and talks about measuring activity vs outcome, validation vs evaluation, Performance Objectives, ... Ryan2pointo comments: "For me, the whole point of learning & development initiatives is to support performance - so regardless of the mode or approach taken to learn something, the assessment of its impact ultimately rests on the performance stats (whatever they may be)". Clive doesn't get it (but actually he does).
Kasper Spiro suggests 'output learning'. To repeat his words: "In a nutshell it works like this. You encounter a problem in the workspace, then you set your learning objectives (that lead to tackling the negative effect of the problem), determine the requirements that set the boundaries for that solution and then the worker/learner is free to solve his problem anyway he wants, as long as he stays within the boundaries set by the requirements." He also lists the following techniques as the 'how' to do it: logs, surveys, web analytics, user-generated input.
Clark Quinn's reflections are on his Learnlets blog. A quote from that post: "Frankly, the problem with Kirkpatrick (sort of like with LMS’ and ADDIE, *drink*) is not in the concept, but in the execution." He also explicitly links to action in the form of "keep, tweak or kill" and suggests setting thresholds that trigger those actions. He also suggests that the thresholds might change over time.
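A small sketch of that "keep, tweak or kill" idea, with placeholder thresholds and program scores of my own invention that would themselves be revisited over time:

```python
# A sketch of the "keep, tweak or kill" idea: compare an indicator against
# thresholds and trigger an action. The threshold values are placeholders and,
# as Clark suggests, would themselves be revisited over time.
def decide(indicator, keep_above=0.8, kill_below=0.5):
    if indicator >= keep_above:
        return "keep"
    if indicator < kill_below:
        return "kill"
    return "tweak"

programs = {"onboarding": 0.9, "sales refresher": 0.65, "tool X class": 0.3}
for name, score in programs.items():
    print(f"{name}: {decide(score)}")
```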
My 2 cents:
- The blogosphere clearly goes for performance and behavior as the ultimate goal, and maybe the only one to worry about. That holds for formal learning, informal learning and everything besides learning.
- I like Clark's focus on action and on embedding it up front. What else do we measure for, if not to act upon it? Building on that thought, you'd need a SWAT team that gets called into action when an alarm bell rings (maybe using Issue Based Consulting techniques).
- In a fast, flat, small, spiky and blurry world, any impact framework needs a built-in continuous tuning process to check whether it is still aligned with what matters, whether the alert levels have changed, whether better indicators have emerged, etc.
Your 2 cents:
- Dig deep in blogosphere and find other gold in the many articles written on the subject above or on learning impact in general.
2.10. Value Network Analysis
It was also a blog post (from Harold Jarche) that drew my attention to value networks. You may be familiar with Social Network Analysis and tools that visualize people and their relationships, like the one below generated via TouchGraph. Value network analysis is similar, but it puts roles in the nodes rather than people, and the links depict deliverables. These links have a direction, and are solid (tangible deliverable) or dashed (intangible deliverable).
A good, quick and basic intro to Value Network Analysis is the article by Patti Anklam, who organizes workshops on the topic. Below is a sample value network from her. The mapping method was invented by Verna Allee, and you can find more information in her free online book "Value Networks and the True Nature of Collaboration". One particular chapter of that book goes into performance indicators derived from the value network analysis. You'll find a table of typical performance indicators there for inspiration.
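As a tiny illustration of the data structure behind such a map, here is a toy value network with invented roles and deliverables, not taken from Patti Anklam's or Verna Allee's examples:

```python
# A toy value network: roles as nodes, directed deliverables as links, tagged
# tangible or intangible. The roles and deliverables are invented examples,
# not taken from Patti Anklam's or Verna Allee's maps.
value_links = [
    ("mentor",     "new hire",   "feedback on first client proposal", "intangible"),
    ("community",  "new hire",   "answers to how-to questions",       "intangible"),
    ("new hire",   "sales team", "qualified lead",                    "tangible"),
    ("sales team", "finance",    "signed contract",                   "tangible"),
]

for source, target, deliverable, kind in value_links:
    arrow = "-->" if kind == "tangible" else "-.->"   # solid vs dashed, as on the maps
    print(f"{source} {arrow} {target}: {deliverable}")
```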
My 2 cents:
- This kind of analysis tackles the fuzziness of 'relationships' within a business process and as such adds insights that can be used for action. I think this or a similar method will be a valuable tool in a social business.
- As what really matters is team performance rather than individual performance, we need insight into collaboration and the essential social interactions. Tools like Social Network Analysis and Value Network Analysis complete the process picture. Inge has, for example, listed a few learning applications of SNA, and I can see learning roles (including mentors or peers) in the value network map as well. As such, you get a holistic view of where learning roles play in the bigger picture and which deliverables they pass or receive.
- This model also recognises the many intangibles.
- SNA and VNA maps are strong visuals. A picture says a few thousand words. It does not spit out numbers or thresholds.
Your 2 cents:
- Have a cherry...
2.11. And more...
I'm going to stop here, but you are welcome to continue picking cherries and good ideas and angles from other models. Here is a list for further exploration, and you can add some of your own...
- Balanced Scorecard: a popular performance management tool in business schools and one of the most adopted in general business; it can equally be applied to learning dashboards.
- Learning portfolios are a popular tool in institutions, partly because of their assessment value. I haven't encountered them in a corporate setting. What's the good and bad on learning portfolios?
- Read about the WPP case halfway through this Bersin blog article. Anecdotal stories count as evidence for the higher levels. Nobody ever said only an ROI figure counts as evidence...
- The book 'The Ultimate Question' and the concept of Net Promoter Score
- Electronic Performance Support Systems (EPSS) and their business case and how they link into learning's 5 moments of need.
- Systems for peer evaluation, such as mixtent.com and their 'SkillRank' approach (which rings a bell with Google's PageRank; a toy sketch follows this list).
- Etc.
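And the toy sketch promised above: a 'SkillRank'-style ranking in which peer endorsements are weighted PageRank-style, so a nod from a well-endorsed peer counts for more. This is purely illustrative and certainly not mixtent.com's actual algorithm:

```python
# A toy 'SkillRank'-style ranking: peers endorse each other for a skill and
# endorsements are weighted PageRank-style, so a nod from a well-endorsed peer
# counts for more. Purely illustrative; not mixtent.com's actual algorithm.
endorsements = {                 # endorser -> people they endorse for a given skill
    "An":    ["Bert", "Chris"],
    "Bert":  ["Chris"],
    "Chris": ["An"],
    "Dana":  ["Chris", "An"],
}

people = sorted(set(endorsements) | {p for ps in endorsements.values() for p in ps})
rank = {p: 1.0 / len(people) for p in people}
damping = 0.85

for _ in range(50):              # simple power iteration
    new_rank = {p: (1 - damping) / len(people) for p in people}
    for endorser, targets in endorsements.items():
        for target in targets:
            new_rank[target] += damping * rank[endorser] / len(targets)
    rank = new_rank

for person, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
```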
At this point, my mind is overloaded with models and I suspect so is yours. We have covered a lot of models, frameworks, methodologies and approaches to learning and to the impact/measurement of learning. Some regard learning as events, others as processes, and yet others as ecosystems or social/value networks. Metrics can come via surveys, (statistical) data analysis or anecdotes. Some models feel conceptually more 'right' than others, but may be more difficult to implement. And there is really a LOT to potentially measure.
But for all the variance in suggestions, one word stands out on the above list: performance. Learning only generates impact when aligned with the overall business goal and that goal is performance. Otherwise it might as well never have happened. That links us back to the quote at the start of this article: learning does not HAVE value. It might GET it, through behavior change, through performance, within context.
So we'll end this divergence part in the series with a mind full. In the next post we'll converge again into a model for the pragmatic and one for the revolutionary.
Move on to Part III a >>