Expertise

The Versatilist on Dunning-Kruger

You may not be familiar with the “Dunning-Kruger Effect”; or, you may have only heard the colloquial explanation that “stupid people are too stupid to know they’re stupid”, humorously popularized by Monty Python alum John Cleese.

In reality, if you read the Wikipedia entry or the actual research, what Dunning and Kruger discovered is that human beings tend to rate their ability, in almost anything, at or slightly above the average of all people. This is not only a statistical impossibility, it also has some interesting ramifications.

The first, oft-repeated, implication is that those with the least capability tend to overestimate their capability the most. That is to say, if we assume 50% is the “average ability” across the population, those with 0% actual capability will overestimate their ability by 50% (or more), while those at 40% only overestimate by 10%. One explanation is that the skills necessary to evaluate capability are exactly the skills necessary to have the capability; i.e., if you don’t know what you are doing, it is difficult to evaluate whether you, or someone else, are doing it wrong. My favorite example is English grammar or punctuation: if you don’t have a firm grasp of it, it is impossible to evaluate how well you, or anyone else, is performing. You must know in order to evaluate. This is where the “too stupid to know” comes from.

The second, much less discussed, implication is that those with the most capability tend to underestimate their knowledge and competence. Back to the 50% scale: someone who actually performs at the 80 or 90% level tends to severely underestimate their performance. This is frequently cited as a contributing factor to imposter syndrome, where those with superior capability don’t necessarily believe they are superior. I attribute this to the colloquial definition of an expert as someone who knows more and more about less and less (purportedly coined by one of the Mayo brothers of Mayo Clinic fame). An extension says an expert is someone who knows more and more about less and less, until they know absolutely everything about nothing. While this was likely meant to be humorous, there is a certain meta, philosophical element to it: the process of discovering more and more about an increasingly small area of expertise also tends to make obvious how little you really know about anything else. Experts, while becoming more knowledgeable about their area of expertise, become increasingly cognizant of how little they really know elsewhere.

In either of these situations, overestimating or underestimating, the challenge is that self-reported capability is a very poor predictor of actual ability; and, if you really need an expert because you aren’t one, it is very unlikely you will be able to determine whether someone else is one.
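This miscalibration pattern can be sketched numerically. The toy model below is a simple Python illustration, not part of the original research; the 55% anchor and 0.5 blending weight are assumptions chosen only to reproduce the pattern of self-estimates clustering near (slightly above) average:

```python
# Toy illustration of the Dunning-Kruger miscalibration pattern.
# Assumption: self-estimates are a blend of actual ability and a
# slightly-above-average anchor; the 55% anchor and 0.5 weight are
# illustrative choices, not values from the research.

def self_estimate(actual: float, anchor: float = 55.0, weight: float = 0.5) -> float:
    """Blend actual ability (0-100 scale) toward the anchor."""
    return weight * actual + (1 - weight) * anchor

for actual in (0, 40, 80, 90):
    est = self_estimate(actual)
    print(f"actual {actual:3d}% -> estimated {est:5.1f}% (gap {est - actual:+6.1f})")
```

Run as written, the 0% performer overestimates by 27.5 points while the 90% performer underestimates by 17.5, mirroring both implications above: the largest overestimates come from the least capable, and the most capable undersell themselves.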

Hedging Your Bets

Why am I going on about the Dunning-Kruger effect? I point out this well-known characteristic because it touches on my area of expertise: determining the best way to assess expertise, particularly when it comes to augmenting your organization’s capabilities. This is something we need to think about when we hire people, and we need to develop strategies to “hedge our bets”.

While resumes are useful, we all know that just because you’ve done something in the past doesn’t mean you are actually any good at it. Resumes, although not necessarily outright false, are generally overinflated. Some of this is smart marketing on the part of the candidate, but some may very well be that the candidate actually believes they are more adept than they are. On the flip side, the expert you’re looking for may be far less comfortable touting expertise they don’t feel they actually have. Resumes and interviews are useful, but woefully inadequate and imprecise.

One way to address this is to ensure that the screening/interview process involves some kind of valid psychometric assessment of ability (like respected certifications and licensure) and/or the direct involvement of someone you know has the appropriate skills to assess the candidate’s ability (if you can find one). You can’t rely on self-reported capability, and you can’t expect someone without a capability to evaluate a candidate’s capability, even in the screening process.

Another, perhaps easier, way to hedge your bets is to broaden your horizons. When we post job opportunities, we frequently overestimate the skills required, producing a “wish list” that values “specific” experience over diversity of experience (as I’ve discussed in Would You Hire Me?). If we limit ourselves to one dimension, it can be hard to determine a candidate’s true capabilities. If, instead, we look for people who have been successful in, or demonstrate knowledge of, multiple domains, backed by work experience, we may get a better estimate of their knowledge of any specific domain. That is to say, a Versatilist, with a broader set of knowledge across multiple domains, is more likely to underestimate their specific domain knowledge than overestimate it. Provided this doesn’t cause you to overlook these candidates, the only downside is that you may get more than you expected, not less.

Don’t be too stupid to know you’re stupid

The Dunning-Kruger effect is just another factor hindering employers from finding the best people. We all think we are better at everything, including evaluating prospective employees, than we generally are; and the very people we want are likely to be overlooked because they undersell their capabilities. Using other valid, objective criteria like certifications certainly helps, and including experts, instead of AI engines and unqualified HR personnel, in the screening and interview process would also be beneficial.

For my money, until I find a way to fund continued research into better ways, I’ll continue to look for those Versatilists out there who have knowledge and experience, and likely undervalue their true capability.

Google Does Not Obviate “Knowing”

There is a strange notion making the rounds of social media in various forms, used to argue against traditional learning and assessment standards. This recurring theme suggests the ubiquitous ability to leverage Google search, Wikipedia, or other online resources obviates the need to learn anything for yourself; i.e., if we need to know something, we can just look it up in real time and don’t need to waste time learning it beforehand. This theme has come up in discussions of our educational curriculum, the supposed uselessness of standardized testing, and even employee assessment criteria.

The Internet was never intended to be a replacement for independent knowledge.

Perhaps this is a special case of the Dunning-Kruger effect (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999), but there are at least two clear reasons why access to knowledge is not equivalent to actually knowing. The first is a complete disconnect from the way human beings develop skill and competency. The second is the assumption that real-time knowledge, however ubiquitous, is accurate and will always be available.

Having Facts is Not “Knowing”

The most incongruous part of this idea is the assumption that knowledge is the result of just having a bunch of facts; thus, if you can just look up the facts, you have knowledge. Unfortunately, unlike in The Matrix, human beings cannot simply download competence and expertise.

Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts

The study of experts and expert knowledge has well established that the difference between experts and novices is not in what they know (the facts), but in how they apply those facts: how each fact fits with other facts and other pieces of knowledge. Expertise is the result of a process of integrating facts, context, and experience, and developing ever more refined and efficient mental models (Ericsson, 2006). Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts. This cannot be done without internalizing those facts.

In addition, returning to Dunning-Kruger, without building competence, individuals are incapable of discerning the veracity of individual facts. Our ability to judge whether information is accurate, or of any substance, results from being able to reconcile new information with our existing mental models and knowledge. Those with less competence are the least able to evaluate this information, making them the most susceptible not only to accepting incorrect information as fact, but also to developing mental models that incorrectly reflect reality.

Limits of Ubiquitous Knowledge Access

Although those of us living in developed economies take ubiquitous access to knowledge for granted, this is not the case for all human beings, nor is it guaranteed to always exist. It is estimated that only about 50% of the world’s population is connected to the Internet, over two-thirds of whom are in developed economies. Even these figures bear further investigation, as those in developing countries with Internet access are far more likely to be connected by slower, less reliable means, keeping their access from being truly ubiquitous. Furthermore, while China contributes significantly to the world’s total Internet users, the Chinese government does not allow full, unrestricted access to the knowledge available via the Internet. This leaves the number of people with true, ubiquitous access well below 50% of the population.

Even for those of us fortunate enough to have nearly ubiquitous access to an unrestricted Internet of knowledge, that access is fragile. Power outages resulting from simple failure, natural events, or even direct malice can immediately render information inaccessible. Emergency situations where survival might rely on knowledge also often exist outside the bounds of this seemingly ubiquitous access. Without a charge, or a cellular connection, many find themselves ill-equipped to manage.

Dumbing Down our Society

The idea that access to knowledge is the same as having knowledge portends a loss of intellectual capital. Whereas societies in the past maintained control by limiting access to information, we are creating a future where control is maintained by delegitimizing and devaluing the accumulation of knowledge through full access to information. We are positioning society to fail, because it will not only have become dependent on being spoon-fed information instead of actually learning, but will also have lost the ability to differentiate fact from fiction.

Not only does the idea that access to knowledge equates to having knowledge rest on shaky foundations lacking any empirical basis, it undermines the actual development of knowledge

Although it would be nice to dismiss this as a dystopian view of the future, we are already seeing the effects of this process. As social media increasingly becomes the way our society views the world, we can already see how ubiquitous access to information is affecting our perceptions. Without the ability to think critically about the real-time information we receive, an ability only developed through the accumulation of knowledge and experience, our society is being manipulated into perspectives not of our own choosing, but of others’ choosing. We are losing the ability to process the information we receive and find ourselves increasingly caught in echo chambers that only present information supporting potentially incorrect world-views.

The Internet was never intended to be a replacement for independent knowledge. It was developed to expand our ability to access information in the pursuit of developing knowledge and capability. Not only does the idea that access to knowledge equates to having knowledge rest on shaky foundations lacking any empirical basis, it undermines the actual development of knowledge.

Resources

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Ericsson, K. A. (2006). An introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

Improving Multiple-Choice Assessments by Limiting Time

Standardized, multiple-choice assessments frequently come under fire because they test rote skills rather than practical, real-world application.  Although this is a gross overgeneralization that fails to account for the cognitive complexity to which items (questions) are written, standardized assessments are designed to evaluate what a person knows, not how well they can apply it.  If that were the end of the discussion, you could be forgiven for assuming standardized testing is poor at predicting real-world performance or differentiating between novices and more seasoned, experienced practitioners.  However, there is another component that, when added to standardized testing, can raise assessments to a higher level: time.  Time, or more precisely, control over the amount of time allowed to perform the exam, can be highly effective in differentiating between competence and non-competence.

The Science Bit

Research in the field of expertise and expert performance suggests experts not only have the capacity to know more, they also know it differently than non-experts; experts exhibit different mental models than novices (Feltovich, Prietula, & Ericsson, 2006).  Mental models represent how individuals organize and implement knowledge, rather than explicitly determining what that knowledge encompasses.  Novice practitioners start with mental models representing the most basic elements of the knowledge required within a domain, and their mental models gradually gain complexity and refinement as the novice gains practical experience applying those models in real-world performance (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).

While Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, Feltovich et al. (2006) suggested these changes let experts process more information, faster, and with less cognitive effort, contributing to greater performance. Feltovich et al. noted this effect as one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications.

For example, Chi et al. (1982) determined that the way novices and experts approach problem-solving in advanced physics was significantly different despite all subjects having the same knowledge necessary for the problem solution; novices focused on surface details while experts approached problems from a deeper, theoretical perspective.  Chi et al. also demonstrated that novices’ lack of experience and practical application contributed to errors in problem analysis requiring more time and effort to overcome. While the base knowledge of experts and novices may not differ significantly, experts appear to approach problem solving from a differentiated perspective, allowing them more success in applying correct solutions the first time and recovering faster when initial solutions fail.

In that vein, Gogus (2013) demonstrated that expert models are highly interconnected and complex, representing how experience allows experts to apply greater amounts of knowledge in problem solving.  This ability to apply existing knowledge with greater efficiency compounds the difference in problem-solving strategy demonstrated by Chi et al. (1982): whereas novices apply problem-solving approaches linearly, one at a time, experts evaluate multiple approaches simultaneously in determining the most appropriate course of action.

Achieving expertise is, therefore, not simply a matter of accumulating knowledge and skills, but a complex transformation of the way experts implement that knowledge and skill (Feltovich et al., 2006). This distinction provides clues into better implementing assessments to differentiate between expert and novice: the time it takes to complete an assessment.

Cool Real-World Example Using Football (Sorry. Soccer)

In an interesting twist on typical mental model assessment studies, Lex, Essig, Knoblauch, and Schack (2015) asked novice and experienced soccer players to quickly and accurately decide the best choice of tactics (either “a” or “b”) given a video image of a simulated game situation.  Lex et al. used eye-tracking systems to measure how the participants reviewed the image, as well as measuring their accuracy and response time.  As one would expect, the more experienced players were both more accurate in their responses, as well as quicker. Somewhat surprising was the reason experienced players performed faster.

While Lex et al. (2015) determined both sets of players fixated on individual pixels for nearly the same amount of time, experienced players had fewer fixations and observed fewer pixels overall.  Less experienced players needed to review more of the image before deciding and were still more likely to make incorrect decisions.  More experienced players, although not perfect, made more accurate decisions based on less information.  The difference in performance was not attributable to differences in basic understanding of tactics or of playing soccer, but to the ability of experienced players to make better decisions with less information in less time.

The Takeaway

Multiple-choice, standardized assessments are principally designed to differentiate what people know, with limited ability to differentiate how well they can apply that knowledge in the real world.  Yet it is also well established that competent performers have numerous advantages leading to better performance in less time.  If time constraints are actively and responsibly constructed as an integral component of these assessments, they may well achieve better predictive performance; they could do a much better job of evaluating not just what someone knows, but how well they can apply it.
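As a concrete sketch of the idea, the Python snippet below scores each item by accuracy plus remaining time. This is an illustrative model only, not an established psychometric scoring rule; the half-credit floor for a correct answer, the 60-second limit, and the sample response patterns are all assumptions:

```python
# Hypothetical time-weighted scoring: a correct answer earns a base 0.5,
# plus up to 0.5 more for answering quickly. Wrong or overtime answers earn 0.
def timed_score(correct: bool, seconds: float, time_limit: float) -> float:
    if not correct or seconds > time_limit:
        return 0.0
    return 0.5 + 0.5 * (1 - seconds / time_limit)

TIME_LIMIT = 60.0  # assumed seconds allowed per item

# Invented response patterns: the novice is slower and less accurate.
novice = [(True, 55), (False, 60), (True, 50)]
expert = [(True, 20), (True, 25), (True, 15)]

novice_total = sum(timed_score(c, s, TIME_LIMIT) for c, s in novice)
expert_total = sum(timed_score(c, s, TIME_LIMIT) for c, s in expert)
print(f"novice: {novice_total:.2f}, expert: {expert_total:.2f}")
```

Under a generous untimed format, both candidates might eventually answer everything correctly and look identical; adding the time term separates them, consistent with the research showing experts reach accurate decisions faster and with less effort.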

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

The Versatilist Vs. the Peter Principle

It is surprising how few people are familiar with the Peter Principle.  This is most disturbing in the areas of business and organizational psychology, as it speaks directly to the source of innumerable challenges to organizational success.  It should be a risk factor in talent management, succession planning, and organizational compensation systems.  Most of all, organizations should look at ways to circumvent this process; most notably, they should consider how Versatilists can thwart the Peter Principle.

The Peter Principle

“In a Hierarchy Every Employee Tends to Rise to His [Her] Level of Incompetence” (Peter & Hull, 1969, p. 25).

In action, the Peter Principle states a simple inevitability: if you are good at your job, you get promoted.  If you are good at the new job, you get promoted again.  This continues until you take on a job for which you are not well suited; i.e., incompetent.  Having reached your level of incompetence, you no longer get promoted, but stay in the job you are least capable of performing well.  Taken to its ultimate conclusion, organizations eventually become dominated by leaders with the least capability to do their jobs.

Although published originally in 1969 as a tongue-in-cheek exposition on incompetence within human organizations, supported by fictitious research, the Peter Principle continues to be debated amongst practitioners and academics.  It has been lambasted as unscientific (something it never purported to be) and as crass overgeneralization, as well as praised as an insightful source of legitimate inquiry.  The staying power of the Peter Principle may be that its simplicity and succinctness align with human experience and explain why so many organizations manage to do such stupid things.

The Value of the Peter Principle Perspective

Despite the limited academic basis for the Peter Principle, it manages to highlight a particular problem: why do we use promotion as a reward for competent performance (Fairburn & Malcomson, 2001)?  Doing so fails to consider two fundamental truths: competence is domain specific, and management is a very specific skill.  These truths conflict with the way most organizations reward and promote people.  Failure to acknowledge them promotes inefficiency and turmoil, and perpetuates the validity of the Peter Principle.

First, few organizations design their job families to reward and promote people for simply getting better and more efficient at the job they do.  Moving from job-specialist level 1 to job-specialist level 2 often requires doing different things, instead of doing the same things better.  We reward people not for being good at their job, but for taking on new roles they have never done before and have not proven they are capable of, promoting constant change rather than long-term competence.  As soon as people demonstrate competence, we move them.

Second, organizations fail to recognize that management and leadership are a unique job unto themselves.  Being a good engineer says nothing about your ability to be a good engineering manager; conversely, good leadership skills can be a boon regardless of the function or industry.  While understanding the jobs people perform is beneficial, leaders do not have to be competent in all the job functions they lead.  Promoting people who are competent in their job, but have shown no competence for leadership, to positions of leadership once again promotes inefficiency and disruption.  Not only do you lose a competent performer in their prior role, but you may very well install incompetent leadership.

Versatilists to the Rescue

Versatilists rarely run afoul of the Peter Principle.  First, versatilists are rarely promoted very high within most organizations, because they do not stay within any specific domain very long (something HR departments seem to think predicts success).  Second, because versatilists are deeply knowledgeable about many domains, they are keenly aware of what they are, and more importantly are not, capable of doing.  As such, versatilists without the desire or capability to lead will not pursue those opportunities.  Versatilists could be the savior of organizations looking to thwart the Peter Principle, but it will require HR to change its perspective on talent acquisition and development.

In terms of talent acquisition, HR and recruiting need to look beyond the experience requirements they believe a job demands and begin looking at the actual skills.  Far too often, organizations look for years of single-domain experience (like engineering and software development) for roles that don’t necessarily require that experience (like leading engineering and software development teams).  The skills themselves are more important than the domain in which they were developed.  This is especially important for strategic innovation, where new perspectives brought to the job can be highly valuable.  A versatilist with leadership capability can quickly adapt to new industries and environments, while also bringing a host of new skills.

HR/recruiting should also consider the quantity and quality of performance, rather than simply its length, when looking at promotions or new hires.  Comparing two candidates for a position, one who has shown success in multiple assignments and multiple environments over numerous years should be preferred to one who has shown success in a single domain over the same time.  The candidate with multiple, differentiated successes is much more likely to be successful in the new job as well; the one from a single domain is ripe to be reaching their level of incompetence.  Success in adapting to new environments is a skill companies should value, but don’t.

In terms of talent development, HR needs to create ways of rewarding specialists who do their job increasingly well over years of dedication without using promotions, while appreciating the versatilists who thrive on taking on new roles.  Promotions should not be the only means of rewarding top performers; bonuses and incentives should be used to drive continued competence building.  Promotions should only be used to expand and diversify the experiences of those who have already proven their ability to adapt and succeed in new roles.  HR needs to look beyond narrow definitions to find the people most likely to succeed, not those who have just been doing it longer.

As companies continue to struggle with market volatility, disruptive innovation, and dramatic shifts in business models, versatilism should be the new standard of performance.  What good is someone with ten years’ experience in business models and practices that no longer hold true?  Perhaps a new principle, the Versatilist Veracity, should succeed the Peter Principle:

“Without Versatilists, a Hierarchy Tends to Become Incompetent”

References

Fairburn, J. A., & Malcomson, J. M. (2001). Performance, promotion, and the Peter Principle. Review of Economic Studies, 68(1), 45–66. http://doi.org/10.1111/1467-937X.00159

Peter, L. J., & Hull, R. (1969). The Peter Principle. Cutchogue, N.Y.: William Morrow & Co., Inc.

Netflix: Practical Examples of People, Process, and Culture in Creating Innovation

In keeping with the traditional analysis of innovative success, a post hoc examination of an organization known for innovation provides anecdotal evidence of the impact of people, process, and culture on organizational success.  By most accounts, Netflix, Inc. (Netflix) is considered a prime example of successful innovation.  Netflix is the leading Internet-based television network and counts some 44 million customers in 40 countries streaming more than one billion hours of content (Netflix, 2014).  Since its initial public offering in 2002, Netflix has moved from an innovative provider of DVD rentals-by-mail to become the dominant player in the pure-play Internet streaming market, a market it almost single-handedly pioneered (Netflix, 2014).  Netflix has enjoyed significant growth in market share, customers, and stock valuation over the years, and the stage has been set for further growth through international expansion, strategic partnerships, and the creation of original content (Ramachandran, 2014a, 2014b, 2015; Ramachandran & Stynes, 2015; Schwartz, 2015). An examination of key elements of Netflix’s success suggests that the business model is not the only thing Netflix has innovated on the road to becoming a household name.

One of the better-known innovations of Netflix is the organizational culture playbook it developed.  Incorporating aspects of both people and culture, the Netflix employee handbook was presented in 127 presentation slides (McCord, 2014).  From a people perspective, Netflix purports to “hire, reward, and tolerate only fully formed adults” (McCord, 2014, p. 4).  Among the corporate values are the courage to speak your mind, the ability to make sound independent judgments, being curious, and being innovative (Hastings, 2009).  Hastings also suggests “adequate performance gets a generous severance package” (slide 22).  Netflix not only states a goal to hire and retain only the best people, but also sets an upfront expectation of a culture that supports innovation.  A key element of the Netflix employee methodology is a management philosophy of freedom and innovation, in which employee self-discipline and freedom eliminate many typical corporate controls like performance reviews, bonuses, and managed vacation time (McCord, 2014).  According to the founder of Netflix, “we’ve had hundreds of years to work on managing industrial firms … we’re just beginning to learn how to run creative firms” (McCord, 2014, p. 6).  McCord reports that the development of a culture conducive to innovation is seen as a primary responsibility of Netflix leadership.  Netflix, as an organization, demonstrates a commitment to attracting the right people (knowledge) and cultivating a culture that fosters innovation.

Interestingly, the Netflix philosophy that supports these people and this culture eschews formalized process.  According to the Netflix culture definition, process is only required when the complexity of the business exceeds the capability of the people (Hastings, 2009, slide 47).  Hastings declares process, while useful for avoiding the chaos of increasingly large organizations, a limit on the flexibility of the organization to adapt as the business environment changes.  The Netflix response is to grow the percentage of high-performance employees faster than the rise of business complexity, maintaining an informal and adaptable organization.  The apparent success of this approach calls into question whether process is independent of people and culture.  It is possible an innovative culture, or superior human capital, mediates the necessity of formalized processes for the diffusion of innovation throughout the organization. It is also possible that Netflix lacks the need for existential knowledge that would benefit from more formalized approaches to knowledge development (Wilson & Doz, 2011), or that Netflix is simply ignoring the benefits of such approaches.  Whether superior knowledge resources and culture mediate the need for more formalized processes is a provocative notion that simply underscores how little is known about how to achieve successful innovation.

References

Hastings, R. (2009). Netflix culture: Freedom & responsibility. Retrieved August 8, 2015, from http://www.slideshare.net/reed2001/culture-1798664

McCord, P. (2014). How Netflix reinvented HR. Harvard Business Review, (JAN-FEB). Retrieved from http://hbr.org/

Netflix. (2014). 2013 annual report. Retrieved from http://ir.netflix.com/

Ramachandran, S. (2014a, November 18). Netflix sets its sights down under. Wall Street Journal (Online). Retrieved from http://www.wsj.com/

Ramachandran, S. (2014b, December 17). Dish Network to integrate Netflix app into its set-top boxes. Wall Street Journal (Online). Retrieved from http://www.wsj.com/

Ramachandran, S. (2015, February 4). Netflix to launch in Japan. Wall Street Journal (Online). Retrieved from http://www.wsj.com/

Ramachandran, S., & Stynes, T. (2015, January 20). Netflix steps up foreign expansion. Wall Street Journal (Online). Retrieved from http://www.wsj.com/

Schwartz, F. (2015, February 9). Netflix offers streaming video in Cuba. Wall Street Journal (Online). Retrieved from http://www.wsj.com/

Wilson, K., & Doz, Y. L. (2011). Agile innovation: A footprint balancing distance and immersion. California Management Review, 53(2), 6–26. http://doi.org/10.1525/cmr.2011.53.2.6

Using Mental Models to Identify Expertise

Research in the field of expertise and expert performance suggests experts not only have the capacity to know more, they also know it differently than non-experts; experts employ different mental models than novices (Feltovich, Prietula, & Ericsson, 2006). While it remains unclear how antecedents directly affect the generation of mental models, the relationship between mental models and performance is demonstrated across multiple domains of research (Chi, Glaser, & Rees, 1982; Feltovich et al., 2006). Unlike attempts to directly elicit the antecedents of performance that may, or may not, contribute to future performance, the mental models of experts show stable and reliable differences in expert performance without requiring the artificial constructs of tacit knowledge measurements (Frank, Land, & Schack, 2013; Land, Frank, & Schack, 2014; Lex, Essig, Knoblauch, & Schack, 2015; Schack, 2004, 2012; Schack & Mechsner, 2006). The potential to accurately, easily, and quantifiably define job-related expertise is an organizational opportunity for both the accumulation as well as the management of talent.

What Are Mental Models?

Based on information processing and cognitive science theories, mental models are the cognitive organization of knowledge in long-term memory (LTM) developed through learning and experience (Chase & Simon, 1973; Chi et al., 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).  Mental models represent how individuals organize and implement knowledge, instead of explicitly determining what that knowledge encompasses.  Novice practitioners start with mental models consisting of the most basic elements of knowledge required, and their mental models gradually gain complexity and refinement as the novice gains practical experience applying those models in the real world (Chase & Simon, 1973; Chi et al., 1982; Gogus, 2013; Insch et al., 2008; Schack, 2004).  Consequently, achieving expertise is not simply a matter of accumulating knowledge and skills, but a complex transformation of the way knowledge and skill is implemented (Feltovich et al., 2006).  This distinction, between what the individual knows and how the individual applies that knowledge, has theoretical as well as practical importance for use in human assessment.

Mental models address important problems that plagued prior attempts to assess human capital performance.  In contrast to prior assessment methods, differences in mental models are proposed to demonstrate differences in the way individuals apply knowledge cognitively, rather than differences in the knowledge itself (Chi et al., 1982; Gogus, 2013; Insch et al., 2008).  The significance of these findings is the implication of a measurable basis for the difference in performance between expert and novice, substantiating mental models as the quintessential construct distinguishing the knowledge an individual has from how the individual applies that knowledge.

From a practical perspective, mental models clearly differentiate experts from non-experts. Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance. Chase and Simon found grand master chess players’ superior performance resulted from recalling more complex information chunks.  These authors demonstrated that both experts and novices could recall the same number of chunks, but the chunks of novices contained single chess pieces whereas the chunks of experts contained meaningful chess positions composed of numerous pieces.  Chase and Simon further showed this superior performance to be context sensitive and domain specific, as grand masters were no better than novices at recalling random, non-game specific piece constellations and showed no better performance in non-chess related memory.  The domain dependency indicates mental models of performance are not universal predictors but have job-related specificity, making them ideal for assessment.

The observation that experts and novices store and access domain-specific knowledge differently spawned research theorizing quantitative, measurable differences in knowledge representation and organization might differentiate expert performance from non-expert performance (Ericsson, 2006). This research continues to substantiate increased experience and practice as the driver in the development of larger, more complex cognitive chunks (Feltovich et al., 2006). Feltovich et al. (2006) noted this effect as one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications.  Feltovich et al. suggested these changes facilitated experts processing more information faster, with less cognitive effort, thus contributing to greater performance.

Evolution of Mental Model Evaluation

The conceptualization of evaluating expert performance in academic and business domains already indicates the importance of mental model differences (Chi et al., 1982; Insch et al., 2008; Jafari, Akhavan, & Nourizadeh, 2013).  The general acceptance of mental models as a critical discriminator of performance has driven a deeper focus on the nature and structure of these differences instead of the specific knowledge they represent (Gogus, 2013).  This evolution of mental model evaluation, from a theoretical construct to a quantitative measure, mirrors the evolution away from what individuals know, towards how individuals utilize that knowledge.

Studies of expertise and expert performance demonstrate the dramatic differences in the way experts and novices organize knowledge in complex physics problem solving (Chi et al., 1982). Chi et al. (1982) utilized cluster analysis to show differences in the way experts and novices structure their knowledge; however, mental models were only one of several ways in which the authors analyzed expert and novice differences.
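The cluster-analysis approach can be illustrated with a minimal sketch. The problem labels, similarity scores, and threshold below are invented for illustration and are not drawn from Chi et al.'s actual materials; real studies elicit pairwise similarity judgments from participants and apply formal hierarchical clustering to reveal whether knowledge is grouped by surface features (novices) or by underlying principles (experts).

```python
# Hypothetical sketch: group physics problems from pairwise similarity
# judgments, in the spirit of Chi et al.'s expert/novice comparisons.
# All data below are invented purely for illustration.

def cluster(items, similarity, threshold):
    """Greedy single-linkage clustering: repeatedly merge any two
    clusters whose most similar pair of members meets `threshold`."""
    clusters = [{i} for i in items]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                link = max(similarity.get(frozenset((x, y)), 0.0)
                           for x in clusters[a] for y in clusters[b])
                if link >= threshold:
                    clusters[a] |= clusters[b]
                    del clusters[b]
                    merged = True
                    break
            if merged:
                break
    return clusters

problems = ["inclined plane", "pulley", "spring", "pendulum"]

# A novice might judge problems similar by surface features (ramps,
# ropes); an expert by deep principle (energy methods, Newton's laws).
novice_sim = {frozenset(("inclined plane", "pulley")): 0.9}  # "both use ropes"
expert_sim = {frozenset(("spring", "pendulum")): 0.9,        # "energy conservation"
              frozenset(("inclined plane", "pulley")): 0.9}  # "Newton's second law"

print(cluster(problems, novice_sim, 0.5))  # principle-blind grouping
print(cluster(problems, expert_sim, 0.5))  # principle-based grouping
```

The structural contrast, not the specific knowledge, is what the comparison surfaces: the expert's similarity judgments produce fewer, more meaningful groups from the same set of items.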

Acknowledgment of these differences in mental representations rationalized the use of mental models in constructing more traditional tacit knowledge measures (Insch et al., 2008). Insch et al. (2008) approached tacit knowledge measures through evaluation of the actions individuals performed, acknowledging tacit knowledge was inherently how individuals use knowledge, not necessarily what knowledge they had.  In taking this approach, the authors focused on the mental schemas that directed behavior instead of the antecedent values, beliefs, and skills that contribute to performance.  The focus on schemas as the driving factor in performance is notable as divergent from prior tacit knowledge measures; however, Insch et al. did not attempt to measure and compare the resultant mental models explicitly.

More recently, Jafari et al. (2013) looked to elicit and visualize the tacit knowledge of Iranian autoworkers concerning their knowledge of organizational strengths.  The uniqueness of this study was the use of quantifiable measures of individual tacit knowledge for comparison between groups of individuals and purported experts, as well as the use of graphs to visualize the results for each group.  Jafari et al. stipulated differences in mental models as an indication of differences between novice and expert workers but focused on the content rather than the structure of the mental model.  The authors further operationalized the quantitative measures as differences in what the individuals knew, and not how they utilized or implemented the knowledge. This approach advanced the use of mental models in the identification of expert knowledge, yet failed to identify how these models differ regarding application or structure.

Other researchers focused more on the differences in comparative mental models than the specific knowledge represented within the models (Gogus, 2013).  In evaluating the applicability and reliability of different methods of eliciting and comparing mental models, Gogus (2013) suggested the theoretical and methodological approach to the analysis of mental models is independent of the domain of knowledge.  Gogus replicated and contrasted the use of two different methodologies for externalization and measurement of mental model differences.  Of particular note, the author focused on contrasting the features of mental models instead of on the specific knowledge, experience, attitude, beliefs, or values of participants.  These efforts further support differences in mental models as being more dependent on the tacit rather than explicit knowledge of the individual. Since mental models are inherently domain specific and often contain the same base explicit knowledge, structural differences in the mental models between experts and novices are more indicative of the differences in performance.

Research in the area of sports psychology has similarly focused on developing reliable means of differentiating mental models of individuals to differentiate performance and diagnose performance problems.  Distinct differences in the mental models between experts and novices have been documented across multiple action-oriented skills including tennis (Schack & Mechsner, 2006), soccer (Lex et al., 2015), volleyball (Schack, 2012), and golf (Frank et al., 2013; Land et al., 2014). Schack and Mechsner (2006) demonstrated how differences in the mental models of the tennis serve related to the level of expertise.  Lex et al. (2015) evaluated the differences in the mental models of team-specific tactics between players of varying levels of experience.  Less experienced players, averaging 3.2 years of experience (n = 20, SD = 4.2), generated mental models viewing team tactics broadly as either offensive or defensive.  More experienced players, averaging 17.3 years of experience (n = 18, SD = 3.3), further differentiated offensive and defensive tactics into smaller groups of related actions.  For instance, more experienced players further segmented defensive tactics into actions for pressing the offense and returning to standard defense.

Focus on the specific differences in the structure of mental models has not only proven effective in differentiating expert and novice performance but also provided insight into effective training regimens (Frank et al., 2013; Land et al., 2014; Weigelt, Ahlmeyer, Lex, & Schack, 2011).  Frank et al. (2013) compared the models of novice performers to those of experts prior to and following a training intervention.  The authors experimentally evaluated two randomly assigned groups of participants with no prior experience performing a golf putt.  With the exception of an initial training video provided to all participants, none received any training or feedback.  The experimental group participated in self-directed practice over a three-day period, while the control group did not practice at all.  Frank et al. found the mental models of participants who practiced evolved, becoming more similar to expert mental models than those of participants in the control group.  Since the formal knowledge of all participants remained the same, the outcome of this study further suggests the structure of individual mental models is dependent on the experience and tacit knowledge of the individual.

The Opportunity

The use of mental models to identify expertise shows great promise. Variations in mental model construction differentiate clearly between expert and novice performers across numerous domains of knowledge.  Furthermore, methodologies highlighting the structural differences between the mental models of experts and novices show promise in the development and evaluation of training regimens.  As a result, the development of human capital assessments based on the measurement of the structural differences between mental models represents a strategic opportunity for organizations to improve the quality of human capital selection as well as the development and assessment of existing human capital.

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Ericsson, K. A. (2006). An introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Frank, C., Land, W., & Schack, T. (2013). Mental representation and learning: The influence of practice on the development of mental representation structure in complex action. Psychology of Sport and Exercise, 14(3), 353–361. http://doi.org/10.1016/j.psychsport.2012.12.001

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Jafari, M., Akhavan, P., & Nourizadeh, M. (2013). Classification of human resources based on measurement of tacit knowledge. The Journal of Management Development, 32(4), 376–403. http://doi.org/10.1108/02621711311326374

Land, W. M., Frank, C., & Schack, T. (2014). The influence of attentional focus on the development of skill representation in a complex action. Psychology of Sport and Exercise, 15(1), 30–38. http://doi.org/10.1016/j.psychsport.2013.09.006

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

Schack, T. (2012). Measuring mental representations. In G. Tenenbaum, R. Eklund, & A. Kamata (Eds.), Measurement in Sport and Exercise Psychology (pp. 203–214). Champaign, IL: Human Kinetics. Retrieved from http://www.uni-bielefeld.de/sport/arbeitsbereiche/ab_ii/publications/pub_pdf_archive/Schack (2012) Mental representation Handb

Schack, T., Essig, K., Frank, C., & Koester, D. (2014). Mental representation and motor imagery training. Frontiers in Human Neuroscience, 8(May), 328. http://doi.org/10.3389/fnhum.2014.00328

Schack, T., & Mechsner, F. (2006). Representation of motor skills in human long-term memory. Neuroscience Letters, 391(3), 77–81. http://doi.org/10.1016/j.neulet.2005.10.009

Weigelt, M., Ahlmeyer, T., Lex, H., & Schack, T. (2011). The cognitive representation of a throwing technique in judo experts – Technological ways for individual skill diagnostics in high-performance sports. Psychology of Sport and Exercise, 12(3), 231–235. http://doi.org/10.1016/j.psychsport.2010.11.001

Misconceptions about Certification

There seem to be widespread misconceptions about what “certification” and “licensure” are all about.  Some see certification as just the final part of an educational regimen.  Others see it as some kind of hurdle imposed by greedy organizations restricting access to some benefit.  These misconceptions and skewed perspectives lead people to make demands that strike at the very heart of what certification is about, minimizing the value of the very certification they are working to achieve. First:

What Certification IS …

Certification is a legally defined classification stating that a certifying body stands behind the capability of a certified individual to perform at some specific level of capability.  Certification is a very simple construct: it is the definition of a standard of performance, and a program assessing individuals against that standard.  That’s it, nothing more and nothing less.

Setting a standard is the first part of any certification.  Ideally, this standard is defined with the help of people who perform the job.  In addition, a standard implies that everyone, no matter whether they have done the actual job for decades or participated in the development of the standard itself, must objectively demonstrate their capability in exactly the same manner.  Unless this standard is objectively applied equally to all individuals, it is not a standard.  Maintaining that standard is the foundation of certification value; it is what the certification stands for and how it should be used.

Creating an assessment of that standard is the second part of any certification.  This not only includes a mechanism assessing current capability, but also a program or process to ensure future capability.  Despite what many believe, creating an assessment is not a simple, ad hoc process where someone creates an assessment (test) and makes people take it to prove their ability.  It is, in fact, a highly rigorous process backed by decades of scientific research on measuring cognitive ability (psychometrics), the sole purpose of which is ensuring the validity of the decisions made based on assessment performance.  It involves designing the assessment, job-task analysis of the job being assessed, formalized standards of item (question) writing, public beta-testing of items, psychometric evaluation of the item performance, assessment construction, and evaluation of appropriate scoring.  Done properly, it is time-consuming and costly (which is why some organizations don’t do it properly).
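To make the "psychometric evaluation of item performance" step concrete, here is a minimal classical-test-theory sketch: two of the most basic item statistics are difficulty (the proportion of candidates answering correctly) and point-biserial discrimination (the correlation between getting the item right and the total score). The response matrix below is invented for illustration; operational programs use far larger samples and more sophisticated models (e.g., item response theory).

```python
# Minimal classical test theory sketch of item-performance evaluation:
# item difficulty (p-value) and point-biserial discrimination.
# The response matrix is invented for illustration.
from statistics import mean, pstdev

# rows = candidates, columns = items; 1 = correct, 0 = incorrect
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
]

totals = [sum(row) for row in responses]  # each candidate's total score

def difficulty(item):
    """Proportion of candidates answering the item correctly."""
    return mean(row[item] for row in responses)

def point_biserial(item):
    """Correlation between the 0/1 item score and the total score;
    low or negative values flag items that fail to discriminate."""
    scores = [row[item] for row in responses]
    sx, sy = pstdev(scores), pstdev(totals)
    if sx == 0 or sy == 0:
        return 0.0  # item answered identically by everyone
    cov = mean(x * y for x, y in zip(scores, totals)) - mean(scores) * mean(totals)
    return cov / (sx * sy)

for item in range(4):
    print(f"item {item}: difficulty={difficulty(item):.2f}, "
          f"r_pb={point_biserial(item):.2f}")
```

An item everyone answers correctly (difficulty 1.0) or one whose correct answerers score no better overall (r_pb near zero) carries little information, which is exactly why beta-testing and statistical screening precede operational use.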

Finally, because knowledge and competence are perishable commodities, mechanisms must be put in place ensuring certified individuals remain capable in the future.  This is frequently done by limiting the length of time a certification is valid and requiring periodic re-certification (re-validation) of the individual’s capability.  Other methods may include proof of continuing education and practice of the knowledge.  Regardless, the ongoing evaluation of certified individuals must adhere to the original standard with the same validity, or the standard no longer has value.

There is no hidden agenda to certification.  There is no conspiracy.  It is simply to establish a standard and assess individuals compared to that standard.

Misconceptions about Certification

Really, any belief beyond the design of a standard and the assessment to that standard is a misconception about certification.  However, there are a number of misconceptions that often drive changes detrimental to the rationale of certification.  The most common ones relate to understanding what an assessment is for, training, and re-certification requirements.

Most people mistakenly assume the items within an assessment must represent the end-all-be-all of what someone should know.  They don’t understand that a psychometric assessment is not about the answers themselves, but the inferences we can make about performance based on those answers.  There is no way to develop an exhaustive exam of all the knowledge necessary to be competent and to deliver that exam efficiently.  However, we can survey a person’s knowledge and through statistical analysis infer whether they have all the knowledge necessary or not.  The answer to any specific question is less important than how answering that question correlates with competence.  Even a poorly written, incomplete, and inaccurate item can give us information about real world performance; in fact, evaluating how a candidate responds to such an item can be highly informative (although this is not a standard, intentional practice).  This focus on the items themselves, rather than the knowledge and competence the answers suggest, is what makes people incorrectly question the validity of the certification.

Similarly, many people think a certification should be able to be specifically taught.  As such, they believe a training course should be all that is necessary to achieve a certification.  However, this does not align with what we know about the development of human competence.  There is a big difference between knowing something, and knowing how to apply that knowledge competently.  Certification is an assessment of performance, not knowledge; and, as such, cannot be taught directly.  If someone can take a class and immediately achieve certification, either: A) the assessment does not evaluate actual performance; or, B) the course simply teaches the answers to the questions on the exam, rather than the full domain of knowledge.  In either case, you have biased the inferences made by the assessment.  Competence begins with knowledge, but requires experience and practice as well.  These cannot be gained through a class, but only through concerted effort; you cannot buy competence.

Finally, many people also believe that once a certification has been achieved, it shouldn’t need to ever be evaluated again; or, that taking a course instead of an assessment should suffice.  The former belief simply ignores the fact that performance capability is a perishable commodity: if you don’t use it, you lose it.  The latter once again confuses knowledge with performance.  How frequently re-certification needs to happen, or whether continued education is sufficient to demonstrate continued performance, is entirely dependent on the knowledge domain the certification attempts to assess.  In highly dynamic environments, this may need to be done much more frequently and rely more on assessments than in other domains; however, ongoing evidence of continued capability is a must if standards are to be maintained.

Leave Certification to the Professionals

The heart of the problem is that everyone seems to believe they are experts in the design of certification programs and assessments simply because they have participated in them.  The reality is that certification is a rigorous, research-based, and scientific endeavor.  The minimum requirement to be considered a psychometrician is a PhD; that’s a great deal of specialized knowledge most people do not have.  The decisions made are not arbitrary, nor are they made with the intention of anything other than maintaining the standard and making valid assessments of individuals according to that standard.

At the end of the day, the value of a certification is whether the people who achieve it can perform according to the standard the certification set forth.  If the certification cannot guarantee that, then it is not valid and has no value.  However, this requires people to actually understand what that standard is, what it means, and why it was created.  It requires people to accept that there is a rigorous process behind all of the decisions, and that those decisions all support validity.  Finally, it requires people to understand that just because they may be experts in their field, they are not experts in certification.

What Makes an Expert, an Expert?

Re-post from LinkedIn April 28, 2016

Human beings have likely been trying to understand expertise since the first cave dweller wondered why Grog was so much better at hunting, or why Norg seemed to always know where the best berries were.  Efforts to identify, and more precisely to predict, expertise have pretty much been ongoing ever since. It’s no wonder, since a McKinsey report showed that high-performers could generate significantly greater productivity (40%), profit (49%), and revenue (67%) depending on their role when compared to even average performers (Cornet, Rowland, Axelrod, Handfield-Jones, & Welsh, 2001). While we are still not very good at predicting future expertise, or even at objectively quantifying it, we have learned a few things along the way. Expertise is not necessarily an innate ability. Nor is expertise necessarily a matter of what you know, but of how you know it.

Scientific assessment of individual differences seems to have hit critical mass in the mid- to late-19th century, culminating in the development of the general theory of intelligence (Spearman, 1904). Spearman was attempting to create a unified way of looking at and evaluating innate capability, sans training or experience. This idea that certain human beings were simply destined for greatness was the impetus for the intelligence testing that we still use today for assessing potential (e.g., IQ). While many people (including businesses) put a lot of stock into general measures of intelligence, it turns out that actual, real world performance is not simply a matter of innate ability. For instance, IQ measures proved to be useless in predicting the rankings of internationally ranked chess players. In fact, studies have shown intelligence measures to account for only between 4% and 30% of real world performance (Sternberg, Grigorenko, & Bundy, 2001). Even at the high end of that range, more than two-thirds of an individual’s real world performance is unaccounted for by standard intelligence measures. Real world performance is not merely innate ability, but the product of ability informed by experiential knowledge and skills.

The mid- 20th century ushered in the idea that, perhaps, expert performance was the result of specialized knowledge developed over time. Michael Polanyi famously defined tacit knowledge by suggesting we know more than we can tell (Peck, 2006; Polanyi, 1966). As opposed to explicit knowledge, which can be written down, easily expressed and taught, tacit knowledge remains elusive even to those who have it (Mahroeian & Forozia, 2012). While explicit knowledge is what we know, tacit knowledge is the ability to apply that knowledge successfully; experts exhibit some form of meta-knowledge enabling them to better apply their knowledge. Experts achieve automaticity in both their thoughts and actions, making complex processes appear effortless and simple. Yet, experts are generally unable to explain how they do this. The result is that experts appear to solve problems intuitively, not because they specifically know more, but because they know better.

One explanation of where tacit knowledge originates is through the development of superior mental models of domain knowledge. Research comparing the mental models of expert and novice practitioners shows that experts organize their knowledge in ways uniquely different from novices (Chi, Glaser, & Rees, 1982; Gogus, 2013). This research substantiates that a principal difference between an expert and a novice is the structure of their mental models, not necessarily the contents of their knowledge. The mental models of expert practitioners appear to coalesce to a point of maximum efficiency regardless of how the skills develop (Schack, 2004). These efficient mental models allow experts immediate access to (more) knowledge and procedures relevant for efficient use in daily application (Feltovich, Prietula, & Ericsson, 2006). In short, experts generate the best solutions under time constraints, better perceive the relevant characteristics of problems, are more likely to apply appropriate problem solving strategies, are better at self-monitoring to detect mistakes and judgment errors, and perform with greater automaticity and minimal cognitive effort (Chi, 2006). Experts perform faster and more accurately with less effort.

A recent study comparing more-experienced and less-experienced soccer players utilized eye-tracking technology to make this point exceptionally salient (Lex, Essig, Knoblauch, & Schack, 2015). This study determined that while more-experienced and less-experienced players fixated on visuals of game situations for the same amount of time per pixel, more-experienced players focused on four specific aspects of the visual while less-experienced players fixated on many areas irrelevant to the decision-making process; the result was that more-experienced players made effective decisions much faster than their less-experienced counterparts. The point here is experts are capable of screening out extraneous information and focusing solely on the details that matter in order to make effective, efficient, and accurate choices. While all of the players had the same basic knowledge of the game, more-experienced players applied that knowledge more efficiently to make accurate decisions more quickly.

So, what makes an expert, an expert? Much like the number of licks to reach the center of a Tootsie Pop, the world may never really know. Despite apocryphal notions, we don’t know how long it takes for someone to become an expert, or even if all individuals are capable of becoming experts. We don’t even have a universal means of determining if someone has truly become an expert, or of easily differentiating experts from novices objectively. What we do know is that expertise is not something you are born with, and it is not something achieved simply by obtaining knowledge or training.  It is a metamorphosis from knowing what, to knowing how.

One might say that expertise is simply a state of mind.


Why you want to, but won’t, hire a Versatilist

The quality of an organization’s human capital is more important today than at any time before.  Global, dynamic markets eradicate the competitive advantages of capital, equipment, and land (Drucker, 1992; Friedman, 2006; Hayton, 2005; Teece, 2011).  Today, differentiation comes from combining undifferentiated inputs and resources in unique ways (Dutta, 2012; Reeves & Deimler, 2011; Teece, 2007, 2011, 2012; Teece, Pisano, & Shuen, 1997). As such, the source of competitive differentiation and strategic value is not having superior resources, but the skill and knowledge necessary to innovate.  One way to describe this organizational ability is dynamic capabilities (Teece, 2012). Dynamic capabilities characterize the organizational ability to sense and seize new opportunities and transform the organization, maintaining a competitive position.  Organizations with strong dynamic capabilities change and adapt to dynamic markets, are strong innovators, and build lasting strategic differentiation.  The only place this knowledge and skill resides is within the individuals working for the organization: human capital (Blair, 2002; Ployhart, Nyberg, Reilly, & Maltarich, 2014).

If we take the notion of dynamic capabilities and apply it to a person instead of an organization, we get the versatilist. Versatilists are wired to sense and seize new opportunities, leverage new skills and abilities, and reinvent who and what they are. They are always changing and adapting to the world around them to become experts in new areas. They don’t have access to different knowledge or methods of learning than other people, but they combine them in new ways to create new versions of themselves. If organizations need dynamic capabilities to innovate and succeed, who better than versatilists to drive that effort? This is why organizations should identify and recruit versatilists as employees.

Unfortunately, current recruiting and hiring strategies are poorly aligned to this goal. Just look at the average senior-level job description: 5–7 years doing one thing, with 10+ years in the same industry with the same focus; or another: 10 years in this specific job role, plus 5 years in a specific industry. The job descriptions go on to list several dozen areas of knowledge and experience necessary to be considered a good fit. These descriptions use terms like “successful track record of”, “expertise in”, and “demonstrated experience with”. While this likely doesn’t sound out of place to many, especially those in HR and recruiting, it puts the job in a nice little box tied with a bow. The versatilist will rarely look twice, for a couple of reasons.

First off, after 5–7 years doing the same thing, most versatilists are ready for the next challenge, not the next opportunity to do the same thing. The industry-experience requirement is less of an issue (although it’s still a bad way to get new ideas into your organization). Versatilists don’t just adapt and change because of external forces; they’re not forced to go down a different path. They choose to do new things in new ways. There is an internal drive to know more, to do more, and to do it better. Once a versatilist has become an expert in a role, they see little opportunity for growth, either personal or professional, and are naturally attracted to the next opportunity.

Second, unlike a generalist, who tends to oversell their experience, versatilists, having become experts, generally undersell theirs. This is the Dunning-Kruger Effect in action (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999). According to this research, people tend to estimate their knowledge of any topic as at, or slightly above, average. Those with the least actual knowledge grossly overestimate what they know (and don’t know they are doing it). The same mechanism works on experts, who underestimate their knowledge by assuming it, too, is just at or slightly above average (this is sometimes referred to as imposter syndrome). Because versatilists become experts in each of their chosen areas, even if you ask for “expertise” in a specific area, they will generally not feel qualified. This is further compounded when the job description suggests the candidate should be competent in dozens of areas.
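One way to picture this pattern is as self-estimates being compressed toward “slightly above average,” regardless of actual ability. The toy model below is purely illustrative: the anchor (60th percentile) and slope (0.2) are my own assumptions for the sketch, not values fitted to the published data.

```python
# Toy illustration of the Dunning-Kruger pattern: self-estimates cluster
# near "slightly above average" no matter the actual ability level.
# The anchor (60) and slope (0.2) are illustrative assumptions only.

def self_estimate(actual_percentile, anchor=60, slope=0.2):
    """Compress an actual percentile toward the anchor."""
    return anchor + slope * (actual_percentile - anchor)

for actual in (10, 50, 90):
    est = self_estimate(actual)
    print(f"actual {actual:2d}th percentile -> self-estimate {est:.0f}th")
# The low performer lands well above reality; the high performer well below.
```

Under these assumed numbers, someone at the 10th percentile rates themselves at the 50th (a large overestimate), while someone at the 90th rates themselves at the 66th (a large underestimate): both errors fall out of the same compression, which is the point of the research.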

Consequently, organizations limit their ability to hire versatilists the minute they draft a job description, making themselves unattractive to the very human capital they should really want. Organizations cannot become innovative or develop dynamic capabilities while hiring based on check boxes and job descriptions of what the job has always been. Instead, organizations should be hiring the people who can adapt and change the job into what it needs to be tomorrow. Unless you change the way you recruit and hire, you’re more likely to hire someone without the skills you thought you needed and with no capacity to develop the skills you really need.


References

Blair, D. C. (2002). Knowledge Management: Hype, Hope, or Help? Journal of the American Society for Information Science & Technology, 53(12), 1019–1028.

Drucker, P. F. (1992). The post-capitalist world. Public Interest, 109(Fall 1992), 89–101. Retrieved from http://www.nationalaffairs.com/

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Dutta, S. K. (2012). Dynamic capabilities: Fostering ambidexterity. SCMS Journal of Indian Management, 9(2), 81–91. Retrieved from http://search.proquest.com/

Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century. New York, NY: Farrar, Straus and Giroux.

Hayton, J. C. (2005). Competing in the new economy: the effect of intellectual capital on corporate entrepreneurship in high-technology new ventures. R&D Management, 35(2), 137–155. http://doi.org/10.1111/j.1467-9310.2005.00379.x

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

Ployhart, R. E., Nyberg, A. J., Reilly, G., & Maltarich, M. A. (2014). Human capital is dead; long live human capital resources! Journal of Management, 40(2), 371–398. http://doi.org/10.1177/0149206313512152

Reeves, M., & Deimler, M. (2011). Adaptability: The new competitive advantage. Harvard Business Review, 89(7/8), 134–141. Retrieved from http://hbr.org/

Teece, D. J. (2007). Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. http://doi.org/10.1002/smj.640

Teece, D. J. (2011). Dynamic capabilities: A guide for managers. Ivey Business Journal Online, 1. Retrieved from http://search.proquest.com/

Teece, D. J. (2012). Dynamic capabilities: Routines versus entrepreneurial action. Journal of Management Studies, 49(8), 1395–1401. http://doi.org/10.1111/j.1467-6486.2012.01080.x

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533. http://doi.org/10.1016/b978-0-7506-7088-3.50009-7

Three Reasons Certification is Better than College

Re-post from LinkedIn – April 8, 2016

Much of the national debate around the value of a college education seems to revolve around the cost of college, particularly around finding ways of making college more affordable and accessible to more people. This frame of reference assumes that a college education is the only means of post-secondary training and education that has value; this is simply not true. According to an analysis of employment during the Great Recession, 4 out of every 5 jobs lost were held by those without any formal education beyond high school; those without post-secondary training were more than three times as likely to lose their jobs as those with even “some college” (Carnevale, Jayasundera, & Cheah, 2012). This, by itself, suggests that even minimal post-secondary training can garner significant benefit; having a job is preferable to not having one. One way to accomplish this is through industry-based certifications (IBCs).

IBCs offer a number of advantages for improving an individual’s employability over simply making college more affordable and accessible. First, in regards to affordability and accessibility, IBCs offer greater return on investment (ROI) than traditional college education. In part because of their far lower time and money commitment, IBCs also provide a more flexible solution either to replace college, or to aid in preparation for later college education. Finally, the dynamic nature and industry relevancy of IBCs provide stronger signals to employers concerning an individual’s ability to actually do the job they need today, rather than months or years down the road. The strong case for IBCs begins with simple economics.

Direct ROI

Given the focus on the ROI of post-secondary education, a comparison of the ROI of certification versus a Bachelor’s degree seems relevant. According to the U.S. Census Bureau (Ewert & Kominski, 2014), with the exception of a Master’s degree, there is an earnings premium for achieving certification or licensure regardless of education level. This premium predominantly benefits those with less post-secondary educational investment (see Figure 1). While it is true the earnings premium for having a Bachelor’s degree is much greater (see Figure 2), that is not a measure of return on investment. Return on investment measures what you get for what you put in; i.e., ROI is the amount you can expect to get back for every dollar spent. This is where the ROI of certification is substantially better.

Assuming a cost of $40,000 for a Bachelor’s degree (probably a low estimate) and a cost of $5,000 to achieve certification (probably a high estimate), the ROI of achieving certification for someone with only a high school education is 2.3 times that of achieving a Bachelor’s degree (see Figure 3). Furthermore, this is just a starting point, as it doesn’t account for the earnings forgone while those pursuing a Bachelor’s degree remain in school, or for the interest on student loans for college tuition. The fact is, certification provides individuals an extremely efficient mechanism to improve their earnings potential and achieve the post-secondary credentials that improve their ability to get and keep a job, even during tough economic times. That IBCs add value both to those without other post-secondary education and to those with it also demonstrates the greater flexibility of certifications.
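The arithmetic behind the 2.3× figure can be sketched quickly. The two cost figures come from the paragraph above; the annual earnings premiums below are illustrative placeholders I have chosen so the ratio works out, not the actual Census figures from the charts, which are not reproduced here.

```python
# Rough ROI comparison of a certification vs. a Bachelor's degree.
# Costs are the estimates given in the text; the annual earnings
# premiums are illustrative assumptions, NOT the Census figures.

def roi(annual_premium, cost):
    """Annual earnings premium gained per dollar invested."""
    return annual_premium / cost

cert_premium = 5_000   # assumed yearly premium: HS diploma + certification
ba_premium = 17_500    # assumed yearly premium: Bachelor's degree

cert_roi = roi(cert_premium, 5_000)   # premium dollars per dollar spent
ba_roi = roi(ba_premium, 40_000)

print(f"Certification ROI is {cert_roi / ba_roi:.1f}x the degree ROI")
```

Because ROI divides the premium by the cost, even a much smaller certification premium beats the degree on a per-dollar basis: the degree’s premium would have to be roughly eight times the certification’s just to break even on ROI.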

Flexibility

One of the challenges to simply making a college education more affordable (or free), is that cost is not the sole factor contributing to non-participation or non-completion. An analysis of college completion statistics showed a 7% difference in achieving a Bachelor’s degree between students who complete high school with a 3.0 GPA versus those with a 3.5 GPA (Rose, 2013). Rose also reported that family and work responsibilities significantly affect the chances of completing a degree program. In other words, while the cost of a college education might inhibit individuals from starting a degree program, individual preparedness and the time commitment necessary to complete a degree are significant contributors to whether individuals ever actually graduate and garner the benefits. This is likely to be particularly true for low-income or disadvantaged students. IBCs provide more flexibility to address these challenges.

Firstly, IBCs have significantly lower time commitments associated with their completion, making it that much easier for students who must also maintain family and work obligations to complete the requirements for certification. Many IBCs do not even require formalized classes or specific training, allowing individuals to self-study as they are capable or as life permits. Finally, most IBCs award credentials, not based on having completed a regimented program of study, but upon the passing of competency-based exams. This means that students can take as much time, and as many attempts, as necessary without suffering negative consequences; it is not a one-time deal, thus providing greater chances of ultimate success. This applies to both those without any other post-secondary training as well as those with degrees who are simply looking for additional earnings potential.

Secondly, IBCs may provide students not yet ready for a formal degree program with the knowledge and skill to prepare them for a future degree. IBCs can give students exposure to a field of study without the time and financial commitment of a formal degree program, reducing the costs of choosing a career they ultimately find unsatisfying or unfulfilling. Moreover, this additional knowledge and skill may give students the confidence and ability necessary to complete degrees they would otherwise have been unprepared for.

Not everyone has the time or capability to commit to a formal degree program. This has a much larger effect on educational outcomes than cost does; simply reducing the cost or providing universal access addresses neither of these challenges. IBCs fill a gap between the demands of formalized post-secondary training and the real-world needs of students just trying to stay ahead in a highly competitive marketplace while simultaneously making ends meet (Claman, 2012). In addition, IBCs are increasingly valuable to employers.

Stronger Employability Signals

“The value of paper degrees lies in a common agreement to accept them as a proxy for competence and status, and that agreement is less rock solid than the higher education establishment would like to believe” (Staton, 2014, para. 3).

Despite the nearly $800 billion spent each year in the United States on human capital development beyond primary and secondary education, nearly 70% takes place outside of four-year colleges and universities; of that, U.S. employers spent almost $460 billion on formal and informal employee training alone (Carnevale, Jayasundera, & Hanson, 2012). According to the Economist, only 39% of hiring managers feel college graduates are ready to be productive members of the workforce (“Higher education: Is college worth it?,” 2014). The Economist further points out that the skill gap between college-degreed applicants and the needs of employers has left 4 million jobs unfilled. It is no wonder employers are beginning to question whether degrees are appropriate proxies for real-world competence; some even see advanced degrees as a negative hiring signal, bringing more cost with little benefit (Staton, 2015).

“The world no longer cares about what you know; the world only cares about what you can do with what you know” (Tony Wagner, as quoted by Friedman, 2012, para. 11).

The hands-on, competency-based aspects of IBCs not only create value for individuals directly, but indirectly provide stronger signals to employers about the actual competence of job candidates. The dynamic and flexible nature of IBCs makes them a better reflection of current industry standards and competence, even in rapidly changing industries (Carnevale, Jayasundera, & Hanson, 2012). Perhaps even more important, the standards and competency-based testing used in IBCs improve the ability to objectively compare applicants, something that has proven extremely unreliable for post-secondary metrics like GPA (Carnevale, Jayasundera, & Hanson, 2012; Swift, Moore, Sharek, & Gino, 2013). IBCs provide employers with highly credible evidence of an applicant’s ability to actually do something with their knowledge, not just their ability to know something.

IBCs are increasingly embraced by employers as more reliable and valid indicators of candidate competence, even as those same employers question the value of traditional post-secondary indicators (Carnevale & Hanson, 2015). Because IBCs are, by definition, industry-based, applicants holding IBCs are more likely to have relevant, up-to-date skills that meet national or international standards. IBCs are not only easier to evaluate, but also provide strong indicators that a prospective applicant will not need additional employer-based training before becoming productive. This is likely why even holders of advanced professional degrees are paid a premium for also holding IBCs (see Figure 1).

Conclusion

The debate about the current state of education in the United States is worthwhile, perhaps even critical in light of the challenges facing us. The problem is that a single means of post-secondary education (the four-year degree) dominates the debate, along with a singular focus on the cost of educating to that level. This framing fails to account for the many other factors affecting student outcomes, and for the actual needs of employers. The reality is that advanced economies are dominated not by high-volume, low-value production but by low-volume, high-value production (Friedman, 2012), and the demand for “middle-education” jobs is growing and will continue to grow for many years (Carnevale, Jayasundera, & Hanson, 2012). Without addressing these realities, we only perpetuate a divide between those with degrees and those without, while still failing to meet the needs of business. There will always be a need for formal degrees, but that does not make them the panacea for all people and all jobs.

At the end of the day, credentialing is an attractive option for anyone looking to improve their employment options. IBCs provide a greater ROI, in a shorter amount of time, than formal degrees. The flexibility and less structured design of IBCs make them easier to obtain, especially for students either unprepared for, or unable to commit to, formal programs. Furthermore, IBCs send strong signals to potential employers about an individual’s ability to contribute on day one of employment. In many cases, the ROI, the flexibility, and the strong employment signals attributed to IBCs may make them a better option than college; in other cases, IBCs may be an essential stepping-stone to a first degree, providing the skills, and the additional income, necessary to commit to obtaining one. And if you already have a bachelor’s degree, these same benefits await you compared to getting a graduate degree. Certification may very well be better than college for many.

NOTE: Anyone interested in exploring how competency-based credentialing is a critical component of the future of higher education should investigate WorkCred (http://www.workcred.org/), a non-profit organization working to elevate the visibility of credentialing as an essential ingredient in the future of human capital development in the 21st century. The author is not affiliated with WorkCred.

References

Carnevale, A. P., & Hanson, A. R. (2015). Learn & earn: Career pathways for youth in the 21st century. E-Journal of International and Comparative Labour Studies, 4(1). Retrieved from https://cew.georgetown.edu

Carnevale, A. P., Jayasundera, T., & Cheah, B. (2012). The college advantage: Weathering the economic storm. Retrieved from https://cew.georgetown.edu/

Carnevale, A. P., Jayasundera, T., & Hanson, A. R. (2012). Career and Technical Education: Five Ways that Pay. Retrieved from https://cew.georgetown.edu/

Claman, P. (2012). The skills gap that’s slowing down your career. Harvard Business Review. Retrieved from http://hbr.org

Ewert, S., & Kominski, R. (2014). Measuring Alternative Educational Credentials: 2012, (January), 14. Retrieved from https://www.census.gov/

Friedman, T. L. (2012, November 17). If You’ve Got the Skills, She’s Got the Job. The New York Times. New York, NY. Retrieved from http://www.nytimes.com/

Higher education: Is college worth it? (2014, April). The Economist.

Rose, S. J. (2013). The Value of a college degree. Retrieved from http://cew.georgetown.edu/

Staton, M. (2014). The degree is doomed. Harvard Business Review. Retrieved from https://hbr.org/

Staton, M. (2015). When a fancy degree scares employers away. Harvard Business Review. Retrieved from http://hbr.org/

Swift, S. A., Moore, D. A., Sharek, Z. S., & Gino, F. (2013). Inflated applicants: Attribution errors in performance evaluation by professionals. PLoS One, 8(7). doi:10.1371/journal.pone.0069258