Assessment

Standardized Testing in the Context of Diversity, Equity, and Inclusion: We need more, not less.

There was recently an article in the New York Times concerning the ongoing debate over standardized testing, specifically the use of the SAT and ACT in the college admissions process. The use of these tests has been debated for years, but during the pandemic, when in-person testing became impossible, many educational systems removed the requirement and have simply not reinstated it.

The point of the article was that, despite many concerns that the exams themselves were biased in any number of ways, the use of standardized test scores in institutions requiring them has actually increased the diversity of the student population (across race, sex, socioeconomic status, etc.) more than virtually any other admissions criterion. In addition, the article points out that what many people see as bias in the tests themselves is likely misplaced: the tests accurately predict what they are intended to predict regardless of race and economics, namely, whether the student will do well in college.

Herein lies, perhaps, one of the most misunderstood aspects of standardized tests … they can only reliably predict what they are intended to predict and nothing else. As a practicing academic who spends much of his day working on standardized testing programs in the technology industry, I am constantly confronted with these misconceptions.

What are Standardized Tests?

The first thing to understand is what, exactly, standardized testing is. In short, standardized tests are specifically built to predict some aspect of the individual taking the assessment. In the case of the ACT and SAT, they are designed to predict how well the individual will do in the university setting, and nothing more. By “predict”, I mean that they make a statistical inference, not an absolute determination; they are based on statistical science, which describes a group, not any one individual. They do not specifically measure real-world capability. They do not measure overall intelligence. They only measure and predict what they are designed to.

Two key aspects of this are “validity” and “reliability”. Validity is a measure of how well an assessment does what it says it does: does a high score on the exam actually predict what was intended, or more succinctly, “are we measuring what we said we were?” Reliability is a measure of whether the same individual, taking the same assessment, consistently scores the same absent any other changes (like preparation, training, etc.); i.e., does the test make the same prediction every time it is used when no other factors affect the results?
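For the statistically inclined, both concepts reduce to correlations. Below is a minimal sketch with invented data (not real SAT/ACT scores): reliability is estimated as the agreement between two administrations of the same test, and predictive validity as the correlation between test scores and the outcome the test claims to predict (a stand-in for first-year college GPA here).

```python
# Toy illustration with invented data: reliability and validity as correlations.
import numpy as np

rng = np.random.default_rng(42)
true_ability = rng.normal(size=1000)

# Two administrations of the same test: same underlying signal, independent noise.
test_1 = true_ability + rng.normal(scale=0.5, size=1000)
test_2 = true_ability + rng.normal(scale=0.5, size=1000)

# The outcome the test claims to predict (e.g., a first-year GPA stand-in).
outcome = true_ability + rng.normal(scale=0.8, size=1000)

reliability = np.corrcoef(test_1, test_2)[0, 1]  # same person, same score?
validity = np.corrcoef(test_1, outcome)[0, 1]    # does the score predict the outcome?
print(f"reliability ~ {reliability:.2f}, predictive validity ~ {validity:.2f}")
```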

Despite what critics say, the SAT and ACT have both been proven to be valid predictors of what they measure, with high reliability. My score will accurately (within statistical deviations) predict my ability to be successful in college, and my score will be fairly consistent across multiple attempts unless I do something to change my underlying ability. As the NYT article points out, this remains true: the higher you score on these exams, the better your academic results in post-secondary institutions. The fact that there are significant discrepancies in scores based on race, socioeconomic situation, or any other factor is, frankly, irrelevant to the validity and reliability of the exam. Using the results of these exams in any context other than the one they were designed for is an invalid use.

The Legacy of Mistrust

These basic misunderstandings of standardized testing breed mistrust and suspicion about what the tests do and how they are used. This is nothing new and likely stems from the development and use of assessments in the past. The original intelligence quotient (IQ) test, developed around the turn of the 20th century, is subject to the same issues, including suggestions of racial and socioeconomic bias. In part this is because the IQ test is not actually a valid predictor of intelligence or the ability to perform successfully; like the SAT and ACT, research has shown it is a predictor of success in primary and secondary educational environments. Unfortunately, this was not fully understood when the assessment was born, and IQ has been misused in ways that have actually contributed to societal bias. This is the legacy that still follows standardized testing.

It is bad design, the misuse of standardized testing results, and the misinterpretation of those results that cause such spirited debate. The original IQ test purported to determine innate intelligence, but was actually a predictor of primary/secondary educational success. Furthermore, research suggests that IQ is a poor predictor of virtually anything else, including an individual’s ability to succeed in life. This is a validity issue: it did not measure what it purported to measure. Because of that validity issue, IQ testing was misused to further propagate racial and socioeconomic inequity by suggesting that different races, or different classes, were simply “less intelligent” than others, prompting stereotypes and prejudice that were simply unfounded.

Given this legacy, it is easy to understand why many mistrust standardized tests and believe they are the problem, rather than a symptom of a larger problem.

The Real Issue is NOT Standardized Testing

The conversation around standardized testing has suggested that the reason for racial and socioeconomic disparity is bias within the testing. However, if we accept that well-designed standardized tests (those with demonstrated validity and reliability) simply make a prediction, and that the SAT and ACT, in particular, make accurate predictions of a student’s ability to succeed in post-secondary education, the question becomes: why is there a significant disparity in results based on race and socioeconomic background? Similarly, why did the original IQ test accurately predict primary/secondary educational outcomes, yet suffer from the same disparity? The real question is: why can’t students from diverse backgrounds equally succeed in our education system?

The answer is rather simple, and voluminous SAT and ACT data clearly indicate it: there is racial and socioeconomic disparity built into the educational systems. This is a clear issue of systemic bias; your chances of success within the system are greatly affected by race and socioeconomic background. Either what we are teaching, or how we are evaluating performance, is not equitable for all students. This is the issue about which we should be having conversations, conducting research, and taking action. Continuing the debate, or simply eliminating standardized testing, is not going to affect the bigger issue. If anything, eliminating SAT and ACT testing will help hide the issue, because we will no longer have such clear, documented evidence of the disparity. I don’t want to start any conspiracy theories, but maybe this is one reason so few educational systems are willing to reinstate ACT and SAT testing as part of their admissions requirements, especially when the research suggests these tests are better criteria for improving diversity than other existing means. They may be imperfect, but it is not the assessment’s fault; it is the system’s fault.

How to Improve? 

First, I want to be clear: I don’t have any specific, research-based solutions. So, before I offer any suggestions based on my years as a student in the educational system, my years of raising children going through it, and over a decade working with standardized test design and delivery, I want to emphasize that the best thing we can do is simply change the conversation away from the standardized tests and focus on the educational system itself. We need research to determine where the issue actually exists: is it what we teach, or how we measure performance? That MUST be the first step.

That being said, when it comes to “how we measure performance”, based on my background, education, and experience, I’m going to make a radical suggestion: more standardized testing. I know, I know. Our students are already inundated with standardized testing, but hear me out. While standardized tests are frequently used in our education system, they are rarely used to measure an individual student’s performance when it comes to grades (the ultimate indicator of success within the system); instead, they are used to assess the overall school’s performance. My suggestion is that these standardized tests may be a more equitable way to evaluate performance for the individual as well.

From an equity standpoint, while there are proven correlations between individuals’ scores on the US National Assessment of Educational Progress (NAEP) and those same individuals’ ACT/SAT scores, the correlations are not perfect. In addition, the correlations are weaker for racial/ethnic minorities and low-income students. NAEP scores have also shown positive correlations with post-secondary outcomes, although they were not the only factor. Finally, since the NAEP assessment began in 1990, the disparity in scores across racial and socioeconomic groups has significantly diminished. This suggests the NAEP may actually be better at determining a student’s capability, rather than just predicting post-secondary success, while still retaining some predictive power. Yet NAEP assessments are not used in any way to actually grade a student’s performance. At the very least, NAEP results may be a viable way to augment current admissions criteria and similarly reduce racial and socioeconomic disparity. They may also be a better way to measure “success” in the primary/secondary educational system than current methods, leading me to my next point.

The reality is that well-constructed standardized assessments with proven validity and reliability are NOT how most of our students are evaluated today. Across the primary, secondary, post-secondary, and graduate levels, our students are routinely evaluated with individual teacher-developed assessments and/or subjective performance criteria. Those teachers are inadequately trained in how to design, build, and validate psychometrically sound assessments (with validity and reliability); as such, the instruments used to gauge student performance routinely do not measure that performance. Without properly constructed assessments, our students are more likely to be measured on their English proficiency, cultural background, or simply whether they can decipher what the instructor was trying to say, rather than on their knowledge of the topic. Subjective evaluations (like those used for essay responses) are routinely shown to be biased and rarely give credence to novel or innovative thought; even professional evaluators trained to remove bias, like those used in college admissions, routinely make systematic errors in evaluation. Subjective assessments, in my personal and professional opinion, are fraught with inequity and bias that cannot be effectively eliminated. Furthermore, I can personally report that educational systems do not care, if the reaction to my numerous criticisms is any indication. Standardized testing would address this issue and, as we’ve seen with the NAEP, likely do a much better job of making equitable and fair performance assessments across students.

On top of that, our students’ performance is often judged on things like homework, attendance, and work products created in the process of learning, rather than on what they have learned or know. This misses the point and likely exacerbates the disparity in “success” in our educational system. Single-parent and low-income homes, which also tend to be more racially segregated, can be dramatically affected by these types of assessments. First, you are outsourcing the learning to an environment you cannot control, where some students can gain experiential knowledge and others cannot; second, you compound that by further penalizing those who cannot with poor grades. While some students and parents (regardless of situation) may still engage in learning outside the classroom, making it mandatory and grading on it likely contributes to the disparity, giving students in the best situations an unfair advantage. Finally, from my own research into the development of expertise, I know that not all students require the same amount of experiential learning to master knowledge. The development of expert knowledge is idiosyncratic; some require more while others require less. As such, we should not be measuring performance on how knowledge is obtained, but on whether students have it or not.

I know the legacy of mistrust will make this a hard stance for people to support, but using standardized testing to assess student performance would address a number of significant issues in current practices. It can be less biased, provide more consistent results across schools, and, if used in place of subjective or other non-performance criteria, be a more accurate reflection of student capability.

Conclusion

Standardized testing, especially the behind-the-scenes work done to properly create it, is a mystery to most people. When you add the historical misuse and abuse of standardized testing, it is easy to see why many demonize the tests and question the results. The reality, though, is that well-constructed assessments, used properly, can not only help us uncover issues in society but also help us address those issues. The data on SAT/ACT scores, both their ability to predict academic performance and the disparity in scores across racial and socioeconomic backgrounds, are a clear signal of the real problem: the racial and socioeconomic bias built into the education system. The education system’s definition of “success”, and how it is determined, is clearly biased. As such, we should not push to eliminate standardized testing; we should look to improve how we define and measure success by doubling down on standardized testing rather than continuing as we do today.

Google Does Not Obviate “Knowing”

There is a strange notion making the rounds of social media in various forms, used to argue against traditional learning and assessment standards. This recurring theme suggests that the ubiquitous ability to leverage Google search, Wikipedia, or other online resources to find answers obviates the need to learn anything for yourself. That is, if we need to know something, we can just look it up in real time and don’t need to waste time learning it beforehand. This theme has come up in discussions of our educational curriculum, the supposed uselessness of standardized testing, and even employee assessment criteria.

The Internet was never intended to be a replacement for independent knowledge.

Perhaps this is a special case of the Dunning-Kruger effect (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999), but there are at least two clear reasons why access to knowledge is not equivalent to actually knowing. The first is a complete disconnect from the way human beings develop skill and competency. The second is the assumption that real-time knowledge, although ubiquitous, is accurate and will always be available.

Having Facts is Not “Knowing”

The most incongruous part of this idea is the assumption that knowledge is the result of just having a bunch of facts.  Thus, if you can just look up the facts, you have knowledge.  Unfortunately, unlike in the Matrix, human beings cannot simply download competence and expertise.

Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts

The study of experts and expert knowledge has well established that the difference between experts and novices is not in what they know (the facts), but in how they apply those facts; it is based on how each fact fits with other facts and other pieces of knowledge. Expertise is the result of a process of integrating facts, context, and experience, and of defining ever more refined and efficient mental models (Ericsson, 2006). Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts. This cannot be done without internalizing those facts.

In addition, returning to Dunning-Kruger, without building competence, individuals are incapable of discerning the veracity of individual facts. Our ability to judge whether information is accurate, or of any substance, comes from being able to reconcile new information with our existing mental models and knowledge. Those with less competence are the least able to evaluate this information, making them the most susceptible not only to accepting incorrect information as fact, but also to developing mental models that incorrectly reflect reality.

Limits of Ubiquitous Knowledge Access

Although those of us living in developed economies take ubiquitous access to knowledge for granted, this is not the case for all human beings, nor is it guaranteed to always exist. It is estimated that only about 50% of the world’s population is connected to the Internet, over two-thirds of whom are in developed economies. Even these figures bear further investigation, as those in developing countries with Internet access are far more likely to be connected by slower, less reliable means, keeping their access from being truly ubiquitous. Furthermore, while China contributes significantly to the world’s total Internet users, the Chinese government does not allow full, unrestricted access to the knowledge available via the Internet. This leaves the number of people with true, ubiquitous access well below 50% of the population.

Even for those of us fortunate enough to have nearly ubiquitous access to an unrestricted Internet of knowledge, that access is fragile. Power outages resulting from simple failure, natural events, or even direct malice can immediately render information inaccessible. Emergency situations where survival might depend on knowledge also often fall outside the bounds of this seemingly ubiquitous access. Without a charge, or a cellular connection, many find themselves ill-equipped to manage.

Dumbing Down our Society

The idea that access to knowledge is the same as having knowledge portends a loss of intellectual capital. Whereas societies in the past maintained control by limiting access to information, we are creating a future where control is maintained by delegitimizing and devaluing the accumulation of knowledge through full access to information. We are positioning society to fail, because its members will not only have become dependent on being spoon-fed information instead of actually learning, but will also have lost the ability to differentiate fact from fiction.

Not only is the idea that access to knowledge equates to having knowledge built on a shaky foundation, lacking any empirical basis; it undermines the actual development of knowledge

Although it would be nice to assume this is a dystopian view of the future, we are already seeing the effects of this process.  As social media becomes increasingly the way our society views the world around us, we can already see how ubiquitous access to information is affecting our perceptions of the world around us.  Without the ability to think critically, something only developed through the accumulation of knowledge and experience, in evaluating the real-time information we receive, our society is being manipulated into perspectives not of our own choosing, but the choosing of others.  We are losing the ability to process the information we receive and find ourselves increasingly caught in echo-chambers only presenting information supporting potentially incorrect world-views.

The Internet was never intended to be a replacement for independent knowledge. It was developed to expand our ability to access information in the pursuit of developing knowledge and capability. Not only is the idea that access to knowledge equates to having knowledge built on a shaky foundation, lacking any empirical basis; it undermines the actual development of knowledge.

Resources

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Ericsson, K. A. (2006). An introduction to The Cambridge handbook of expertise and expert performance: Its development, organization, and content. In The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

Improving Multiple-Choice Assessments by Limiting Time

Standardized, multiple-choice assessments frequently come under fire because they test rote skills rather than practical, real-world application. Although this is a gross over-generalization, failing to account for the cognitive complexity at which items (questions) are written, standardized assessments are designed to evaluate what a person knows, not how well they can apply it. If that were the end of the discussion, you could be forgiven for assuming standardized testing is poor at predicting real-world performance or differentiating between novices and more seasoned, experienced practitioners. However, there is another component that, when added to standardized testing, can raise assessments to a higher level: time. Time, or more precisely, control over the amount of time allowed to complete the exam, can be highly effective in differentiating between competence and non-competence.

The Science Bit

Research in the field of expertise and expert performance suggests experts not only have the capacity to know more, they also know it differently than non-experts; experts exhibit different mental models than novices (Feltovich, Prietula, & Ericsson, 2006). Mental models represent how individuals organize and implement knowledge, rather than explicitly determining what that knowledge encompasses. Novice practitioners start with mental models representing the most basic elements of the knowledge required within a domain, and their mental models gradually gain complexity and refinement as the novice gains practical experience applying those models in real-world performance (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).

While Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, Feltovich et al. (2006) suggested these changes allow experts to process more information faster and with less cognitive effort, contributing to greater performance. Feltovich et al. noted this effect as one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications.

For example, Chi et al. (1982) determined that novices and experts approach problem solving in advanced physics in significantly different ways, despite all subjects having the same knowledge necessary for the solution; novices focused on surface details while experts approached problems from a deeper, theoretical perspective. Chi et al. also demonstrated that the novices’ lack of experience and practical application contributed to errors in problem analysis that required more time and effort to overcome. While the base knowledge of experts and novices may not differ significantly, experts appear to approach problem solving from a differentiated perspective, allowing them more success in applying correct solutions the first time and recovering faster when initial solutions fail.

In that vein, Gogus (2013) demonstrated that expert models were highly interconnected and complex, representing how experience allows experts to apply greater amounts of knowledge in problem solving. This ability to apply existing knowledge with greater efficiency complements the difference in problem-solving strategy demonstrated by Chi et al. (1982): whereas novices apply problem-solving approaches linearly, one at a time, experts evaluate multiple approaches simultaneously in determining the most appropriate course of action.

Achieving expertise is, therefore, not simply a matter of accumulating knowledge and skills, but a complex transformation of the way experts implement that knowledge and skill (Feltovich et al., 2006). This distinction provides a clue for building assessments that better differentiate expert from novice: the time it takes to complete the assessment.

Cool Real-World Example Using Football (Sorry. Soccer)

In an interesting twist on typical mental model assessment studies, Lex, Essig, Knoblauch, and Schack (2015) asked novice and experienced soccer players to quickly and accurately decide the best choice of tactics (either “a” or “b”) given a video image of a simulated game situation. Lex et al. used eye-tracking systems to measure how the participants reviewed the image, as well as their accuracy and response time. As one would expect, the more experienced players were both more accurate in their responses and quicker. Somewhat surprising was the reason experienced players performed faster.

While Lex et al. (2015) determined both sets of players fixated on individual pixels in the image for nearly the same amount of time, experienced players had fewer fixations and observed fewer pixels overall. Less experienced players needed to review more of the image before deciding, and were still more likely to make incorrect decisions. More experienced players, although not perfect, made more accurate decisions based on less information. The difference in performance was not attributable to differences in basic understanding of tactics or of playing soccer, but to the ability of experienced players to make better decisions with less information in less time.

The Takeaway

Multiple-choice, standardized assessments are principally designed to differentiate what people know, with limited ability to differentiate how well they can apply that knowledge in the real world. Yet it is also well established that competent performers have numerous advantages leading to better performance in less time. If time constraints are actively and responsibly constructed as an integral component of these assessments, they may well achieve better predictive performance; they could do a much better job of evaluating not just what someone knows, but how well they can apply it.
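To make that concrete, here is a minimal simulation of the idea; the group parameters, response-time distributions, and the 60-second budget are all invented assumptions for illustration, not an established scoring rule. The two groups are given nearly identical untimed accuracy, yet a per-item time limit separates them cleanly.

```python
# Invented simulation: a per-item time limit separates groups that raw
# accuracy alone barely distinguishes.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_items = 200, 50
time_limit = 60.0  # seconds allowed per item (assumed)

def group_scores(median_rt, accuracy):
    """Mean untimed vs. timed scores for a group with a given speed/accuracy."""
    rt = rng.lognormal(mean=np.log(median_rt), sigma=0.4, size=(n_candidates, n_items))
    correct = rng.random((n_candidates, n_items)) < accuracy
    untimed = correct.mean()
    timed = (correct & (rt <= time_limit)).mean()  # answers past the limit score 0
    return untimed, timed

for label, median_rt, accuracy in [("experts", 30, 0.85), ("novices", 55, 0.80)]:
    untimed, timed = group_scores(median_rt, accuracy)
    print(f"{label}: untimed {untimed:.2f}, timed {timed:.2f}")
```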

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

Is HR Sabotaging Your Innovation Efforts?

Re-post from LinkedIn, May 10, 2016

In today’s fast-paced, global economy, the traditional means of differentiation (land, capital, and equipment) are becoming less differentiated and are available equally to businesses large and small, old and new (Drucker, 1992; Friedman, 2006; Teece, 1998). This leaves, as the organization’s only means of differentiation, the ability to combine undifferentiated resources in unique ways … i.e., innovation (Lawson & Samson, 2001; Teece, 2011, 2012). Regardless of the approach to innovation you subscribe to (there are dozens; see Bowonder, Dambal, Kumar, & Shirodkar, 2010), the depth, breadth, and diversity of an organization’s people are significant antecedents to innovation success (Crook, Todd, Combs, Woehr, & Ketchen, 2011; Kim & Ployhart, 2014). As a result, HR is a critical part of your innovation efforts.

Pre-employment assessments have been the principal tool used by HR to ensure the organization hires only the best and brightest. The use of pre-employment assessments by large U.S. organizations increased from 26% in 2001 to 57% by 2013; eight of the top ten private employers use pre-employment testing for at least some of their positions (Weber, 2015). Unfortunately, as previously mentioned, many traditional assessment tools have proven less reliable than flipping a coin when it comes to predicting future real-world performance. This realization has led to the increased use of psychological value assessments. Value assessments attempt to match the values of prospective candidates with the value profiles of existing, high-performing employees, essentially creating a way to find people who share the same values and perspectives as existing employees. The idea is to find people who think and perform like existing top performers. While these assessments may not be any better at predicting future performance, Weber reported on organizations reducing 90-day attrition rates from 41% to 12% in the span of only three years of use. Given the significant costs of hiring and training, this reduction in short-term attrition can be a significant savings for the organization. As the ease of using these assessments increases and the cost decreases, they become increasingly difficult for HR organizations to ignore.
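A quick back-of-envelope calculation shows why that attrition drop matters; the hiring volume and replacement cost below are hypothetical assumptions, not figures from Weber (2015).

```python
# Back-of-envelope sketch; both inputs are hypothetical assumptions.
hires_per_year = 1_000
cost_per_early_leaver = 15_000  # recruiting + onboarding dollars (assumed)

for attrition_rate in (0.41, 0.12):
    annual_cost = hires_per_year * attrition_rate * cost_per_early_leaver
    print(f"90-day attrition at {attrition_rate:.0%}: ${annual_cost:,.0f} per year")
```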

Value assessments surely benefit the organization, but the benefit is no longer attracting and hiring “the best and the brightest”; it has become one of “culture” or “fit”. Even the validation of one of the most popular value assessments, the Hartman Value Profile (HVP), shows almost no correlation with real-world performance, even when performance is subjectively evaluated by other members of the organization (Weathington & Roberts, 2005). Furthermore, making hiring decisions based on how well individuals “fit” within the existing organization seems at odds with the need for diverse knowledge and perspectives for effective innovation. While no one is arguing that cultural “fit” is unimportant to collaboration and productivity, it is potentially dangerous to be seduced by the perceived benefits of values-based pre-employment assessments.

Innovation starts, and ends, with people. Decades of research demonstrate that successful innovation requires not just the best and brightest, but also diversity in the perspectives and knowledge of those people. If everyone has the same perspective, values, and beliefs, it will be increasingly difficult to create anything “new”. If we compound this by not hiring the best and brightest (because we are more concerned with fit), the effects could be devastating.

Be wary about letting your HR practices sabotage your innovation efforts before they even get started.

References

Bowonder, B., Dambal, A., Kumar, S., & Shirodkar, A. (2010). Innovation strategies for creating competitive advantage. Research Technology Management, 53(3), 19–32. Retrieved from http://www.iriweb.org/

Crook, T. R., Todd, S. Y., Combs, J. G., Woehr, D. J., & Ketchen, D. J. J. (2011). Does human capital matter? A meta-analysis of the relationship between human capital and firm performance. Journal of Applied Psychology, 96(3), 443–456. doi:10.1037/a0022147

Drucker, P. F. (1992). The post-capitalist world. Public Interest, 109(Fall 1992), 89–101. Retrieved from http://www.nationalaffairs.com/

Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century. New York, NY: Farrar, Straus and Giroux.

Kim, Y., & Ployhart, R. E. (2014). The effects of staffing and training on firm productivity and profit growth before, during, and after the Great Recession. The Journal of Applied Psychology, 99(3), 361–389. doi:10.1037/a0035408

Lawson, B., & Samson, D. (2001). Developing innovation capability in organisations: A dynamic capabilities approach. International Journal of Innovation Management, 5(3), 377. doi:10.1142/s1363919601000427

Teece, D. J. (1998). Capturing value from knowledge assets: The new economy, markets for know-how, and intangible assets. California Management Review, 40(3), 55–79. doi:10.2307/41165943

Teece, D. J. (2011). Dynamic capabilities: A guide for managers. Ivey Business Journal Online, 1. Retrieved from http://search.proquest.com/

Teece, D. J. (2012). Dynamic capabilities: Routines versus entrepreneurial action. Journal of Management Studies, 49(8), 1395–1401. doi:10.1111/j.1467-6486.2012.01080.x

Weathington, B. L., & Roberts, D. P. (2005). Validation analysis of the Hartman Value Profile. Retrieved from http://www.hartmaninstitute.org/

Weber, L. (2015). Today’s personality tests raise the bar for job seekers. The Wall Street Journal. Retrieved from http://www.wsj.com

Using Mental Models to Identify Expertise

Research in the field of expertise and expert performance suggests experts not only have the capacity to know more, they also know it differently than non-experts; experts employ different mental models than novices (Feltovich, Prietula, & Ericsson, 2006). While it remains unclear how antecedents directly affect the generation of mental models, the relationship between mental models and performance has been demonstrated across multiple domains of research (Chi, Glaser, & Rees, 1982; Feltovich et al., 2006). Unlike attempts to directly elicit the antecedents of performance, which may or may not contribute to future performance, the mental models of experts show stable and reliable differences tied to expert performance, without requiring the artificial constructs of tacit knowledge measurements (Frank, Land, & Schack, 2013; Land, Frank, & Schack, 2014; Lex, Essig, Knoblauch, & Schack, 2015; Schack, 2004, 2012; Schack & Mechsner, 2006). The potential to accurately, easily, and quantifiably define job-related expertise is an organizational opportunity for both the accumulation and the management of talent.

What are Mental Models?

Based on information processing and cognitive science theories, mental models are the cognitive organization of knowledge in long-term memory (LTM), developed through learning and experience (Chase & Simon, 1973; Chi et al., 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004). Mental models represent how individuals organize and implement knowledge, rather than explicitly determining what that knowledge encompasses. Novice practitioners start with mental models consisting of the most basic elements of the knowledge required, and their mental models gradually gain complexity and refinement as the novice gains practical experience applying those models in the real world (Chase & Simon, 1973; Chi et al., 1982; Gogus, 2013; Insch et al., 2008; Schack, 2004). Consequently, achieving expertise is not simply a matter of accumulating knowledge and skills, but a complex transformation of the way knowledge and skill are implemented (Feltovich et al., 2006). This distinction, between what the individual knows and how the individual applies that knowledge, has theoretical as well as practical importance for human assessment.

Mental models capture important aspects that eluded prior attempts to assess human capital performance. In contrast to prior assessment methods, differences in mental models are proposed to demonstrate differences in the way individuals apply knowledge cognitively, rather than differences in the knowledge itself (Chi et al., 1982; Gogus, 2013; Insch et al., 2008). The significance of these findings is the implication of a measurable basis for the difference in performance between expert and novice, substantiating mental models as the quintessential construct defining the difference between the knowledge an individual has and how the individual applies that knowledge.

Evaluated from a practical perspective, mental models clearly differentiate between experts and non-experts. Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, finding that grand master chess players’ superior performance resulted from recalling more complex information chunks. The authors demonstrated that both experts and novices could recall the same number of chunks, but the chunks of novices contained single chess pieces whereas the chunks of experts contained meaningful chess positions composed of numerous pieces. They further showed this superior performance to be context-sensitive and domain-specific: grand masters were no better than novices at recalling random, non-game-specific piece constellations, and showed no better performance in non-chess-related memory. This domain dependency indicates mental models of performance are not universal predictors but have job-related specificity, making them ideal for assessment.

The observation that experts and novices store and access domain-specific knowledge differently spawned research theorizing that quantitative, measurable differences in knowledge representation and organization might differentiate expert performance from non-expert performance (Ericsson, 2006). This research continues to substantiate increased experience and practice as the driver in the development of larger, more complex cognitive chunks (Feltovich et al., 2006). Feltovich et al. (2006) noted this effect as one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications. Feltovich et al. suggested these changes facilitate experts processing more information faster, with less cognitive effort, thus contributing to greater performance.

Evolution of Mental Model Evaluation

The conceptualization of evaluating expert performance in academic and business domains already indicates the importance of mental model differences (Chi et al., 1982; Insch et al., 2008; Jafari, Akhavan, & Nourizadeh, 2013).  The general acceptance of mental models as a critical discriminator of performance has driven a deeper focus on the nature and structure of these differences instead of the specific knowledge they represent (Gogus, 2013).  This evolution of mental model evaluation, from a theoretical construct to a quantitative measure, mirrors the evolution away from what individuals know, towards how individuals utilize that knowledge.

Studies of expertise and expert performance demonstrate the dramatic differences in the way experts and novices organize knowledge in complex physics problem solving (Chi et al., 1982). Chi et al. (1982) used cluster analysis to show differences in the way experts and novices structure their knowledge; however, mental models were only one of several ways in which the authors analyzed expert and novice differences.

Acknowledgment of these differences in mental representations rationalized the use of mental models in constructing more traditional tacit knowledge measures (Insch et al., 2008). Insch et al. (2008) approached tacit knowledge measurement through evaluation of the actions individuals performed, acknowledging that tacit knowledge is inherently about how individuals use knowledge, not necessarily what knowledge they have. In taking this approach, the authors focused on the mental schemas that direct behavior instead of the antecedent values, beliefs, and skills that contribute to performance. The focus on schemas as the driving factor in performance is notable as a divergence from prior tacit knowledge measures; however, Insch et al. did not attempt to measure and compare the resulting mental models explicitly.

More recently, Jafari et al. (2013) sought to elicit and visualize the tacit knowledge of Iranian autoworkers concerning their knowledge of organizational strengths. The uniqueness of this study was the use of quantifiable measures of individual tacit knowledge for comparison between groups of individuals and purported experts, as well as the use of graphs to visualize the results for each group. Jafari et al. stipulated differences in mental models as an indication of differences between novice and expert workers, but focused on the content rather than the structure of the mental model. The authors further operationalized the quantitative measures as differences in what the individuals knew, not how they utilized or implemented that knowledge. This approach advanced the use of mental models in the identification of expert knowledge, yet failed to identify how these models differ in application or structure.

Other researchers focused more on the differences between comparative mental models than on the specific knowledge represented within the models (Gogus, 2013). In evaluating the applicability and reliability of different methods of eliciting and comparing mental models, Gogus (2013) suggested the theoretical and methodological approach to the analysis of mental models is independent of the domain of knowledge. Gogus replicated and contrasted two different methodologies for externalizing and measuring mental model differences, notably contrasting the features of mental models rather than the specific knowledge, experience, attitudes, beliefs, or values of participants. These efforts further support differences in mental models as being more dependent on the tacit rather than the explicit knowledge of the individual. Since mental models are inherently domain-specific and often contain the same base explicit knowledge, structural differences in the mental models of experts and novices are more indicative of the differences in performance.

Research in sports psychology has similarly focused on developing reliable means of differentiating the mental models of individuals in order to differentiate performance and diagnose performance problems. Distinct differences in the mental models of experts and novices have been documented across multiple action-oriented skills, including tennis (Schack & Mechsner, 2006), soccer (Lex et al., 2015), volleyball (Schack, 2012), and golf (Frank et al., 2013; Land et al., 2014). Schack and Mechsner (2006) demonstrated how differences in mental models of the tennis serve related to the level of expertise. Lex et al. (2015) evaluated differences in the mental models of team-specific tactics between players of varying levels of experience. Less experienced players, averaging 3.2 years of experience (n = 20, SD = 4.2), generated mental models viewing team tactics broadly as either offensive or defensive. More experienced players, averaging 17.3 years of experience (n = 18, SD = 3.3), further differentiated offensive and defensive tactics into smaller groups of related actions. For instance, more experienced players segmented defensive tactics into actions for pressing the offense and actions for returning to standard defense.

Focusing on the specific differences in the structure of mental models has not only proven effective in differentiating expert and novice performance, but has also provided insight into effective training regimens (Frank et al., 2013; Land et al., 2014; Weigelt, Ahlmeyer, Lex, & Schack, 2011). Frank et al. (2013) compared the models of novice performers to those of experts prior to and following a training intervention. The authors experimentally evaluated two randomly assigned groups of participants with no prior experience performing a golf putt. With the exception of an initial training video provided to all participants, none received any training or feedback. The experimental group participated in self-directed practice over a three-day period, while the control group did not practice at all. Frank et al. found the mental models of participants who practiced evolved, becoming more similar to expert mental models than those of participants in the control group. Since the formal knowledge of all participants remained the same, the outcome further suggests the structure of individual mental models depends on the experience and tacit knowledge of the individual.
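For readers curious how such structural comparisons are computed, methods in this literature (e.g., Schack’s SDA-M) typically derive a mental model’s structure by hierarchically clustering pairwise concept similarities. The sketch below uses invented similarity ratings over six hypothetical soccer-tactic concepts; it is an illustration of the general approach, not the cited studies’ procedure or data. The expert-like matrix splits cleanly into defensive and offensive groups, while the flat novice-like matrix yields no meaningful grouping.

```python
# Illustrative sketch (invented data): recovering mental model structure by
# clustering pairwise concept similarities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

concepts = ["press", "mark", "cover", "counter", "wing play", "overlap"]

# Hypothetical mean similarity ratings (1 = used identically, 0 = unrelated).
# The expert sharply separates defensive (first 3) from offensive (last 3) tactics.
expert_sim = np.array([
    [1.0, 0.8, 0.7, 0.2, 0.1, 0.1],
    [0.8, 1.0, 0.8, 0.1, 0.2, 0.1],
    [0.7, 0.8, 1.0, 0.2, 0.1, 0.2],
    [0.2, 0.1, 0.2, 1.0, 0.7, 0.8],
    [0.1, 0.2, 0.1, 0.7, 1.0, 0.8],
    [0.1, 0.1, 0.2, 0.8, 0.8, 1.0],
])
# The novice rates everything as moderately similar: no usable structure.
novice_sim = np.full((6, 6), 0.5) + 0.5 * np.eye(6)

def model_groups(sim, n_groups=2):
    """Cluster concepts from a similarity matrix; return a group label per concept."""
    distances = squareform(1.0 - sim, checks=False)  # similarity -> distance
    tree = linkage(distances, method="average")
    return fcluster(tree, t=n_groups, criterion="maxclust")

print("expert:", dict(zip(concepts, model_groups(expert_sim))))
print("novice:", dict(zip(concepts, model_groups(novice_sim))))  # arbitrary split
```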

The Opportunity

The use of mental models to identify expertise shows great promise. Variations in mental model construction differentiate clearly between expert and novice performers across numerous domains of knowledge.  Furthermore, methodologies highlighting the structural differences between the mental models of experts and novices show promise in the development and evaluation of training regimens.  As a result, the development of human capital assessments based on the measurement of the structural differences between mental models represents a strategic opportunity for organizations to improve the quality of human capital selection as well as the development and assessment of existing human capital.

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Ericsson, K. A. (2006). An introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Frank, C., Land, W., & Schack, T. (2013). Mental representation and learning: The influence of practice on the development of mental representation structure in complex action. Psychology of Sport and Exercise, 14(3), 353–361. http://doi.org/10.1016/j.psychsport.2012.12.001

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Jafari, M., Akhavan, P., & Nourizadeh, M. (2013). Classification of human resources based on measurement of tacit knowledge. The Journal of Management Development, 32(4), 376–403. http://doi.org/10.1108/02621711311326374

Land, W. M., Frank, C., & Schack, T. (2014). The influence of attentional focus on the development of skill representation in a complex action. Psychology of Sport and Exercise, 15(1), 30–38. http://doi.org/10.1016/j.psychsport.2013.09.006

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

Schack, T. (2012). Measuring mental representations. In G. Tenenbaum, R. Eklund, & A. Kamata (Eds.), Measurement in sport and exercise psychology (pp. 203–214). Champaign, IL: Human Kinetics.

Schack, T., Essig, K., Frank, C., & Koester, D. (2014). Mental representation and motor imagery training. Frontiers in Human Neuroscience, 8(May), 328. http://doi.org/10.3389/fnhum.2014.00328

Schack, T., & Mechsner, F. (2006). Representation of motor skills in human long-term memory. Neuroscience Letters, 391(3), 77–81. http://doi.org/10.1016/j.neulet.2005.10.009

Weigelt, M., Ahlmeyer, T., Lex, H., & Schack, T. (2011). The cognitive representation of a throwing technique in judo experts – Technological ways for individual skill diagnostics in high-performance sports. Psychology of Sport and Exercise, 12(3), 231–235. http://doi.org/10.1016/j.psychsport.2010.11.001

Misconceptions about Certification

There seem to be widespread misconceptions about what “certification” and “licensure” are all about. Some see certification as just the final part of an educational regimen. Others see it as a hurdle imposed by greedy organizations restricting access to some benefit. These misconceptions and skewed perspectives lead people to make demands affecting the very heart of what certification is about, minimizing the value of the very certification they are working to achieve. First:

What Certification IS …

Certification is a legally defined classification stating that a certifying body stands behind the capability of a certified individual to perform at some specific level. Certification is a very simple construct: it is the definition of a standard of performance, and a program assessing individuals against that standard. That’s it, nothing more and nothing less.

Setting a standard is the first part of any certification. Ideally, this standard is defined with the help of people who perform the job. In addition, a standard implies that everyone, no matter whether they have done the actual job for decades or participated in the development of the standard itself, must objectively demonstrate their capability in exactly the same manner. Unless the standard is objectively applied equally to all individuals, it is not a standard. Maintaining that standard is the foundation of certification value; it is what the certification stands for and how it should be used.

Creating an assessment of that standard is the second part of any certification. This includes not only a mechanism for assessing current capability, but also a program or process to ensure future capability. Despite what many believe, creating an assessment is not a simple, ad hoc process where someone writes a test and makes people take it to prove their ability. It is, in fact, a highly rigorous process backed by decades of scientific research on measuring cognitive ability (psychometrics), the sole purpose of which is ensuring the validity of the decisions made based on assessment performance. It involves designing the assessment, job-task analysis of the job being assessed, formalized standards of item (question) writing, public beta-testing of items, psychometric evaluation of item performance, assessment construction, and evaluation of appropriate scoring. Done properly, it is time-consuming and costly (which is why some organizations don’t do it properly).

Finally, because knowledge and competence are perishable commodities, mechanisms must be put in place to ensure certified individuals remain capable in the future. This is frequently done by limiting the length of time a certification is valid and requiring periodic re-certification (re-validation) of the individual’s capability. Other methods may include demonstrating continued education and practice of the knowledge. Regardless, the ongoing evaluation of certified individuals must adhere to the original standard with the same validity, or the standard no longer has value.

There is no hidden agenda to certification. There is no conspiracy. Its purpose is simply to establish a standard and assess individuals against that standard.

Misconceptions about Certification

Really, any belief beyond the design of a standard and the assessment to that standard is a misconception about certification. However, there are a number of misconceptions that often drive changes detrimental to the rationale of certification. The most common relate to what an assessment is for, training, and re-certification requirements.

Most people mistakenly assume the items within an assessment must represent the end-all-be-all of what someone should know. They don’t understand that a psychometric assessment is not about the answers themselves, but about the inferences we can make about performance based on those answers. There is no way to develop an exhaustive exam of all the knowledge necessary to be competent and deliver that exam efficiently. However, we can survey a person’s knowledge and, through statistical analysis, infer whether they have the necessary knowledge or not. The answer to any specific question is less important than how answering that question correlates with competence. Even a poorly written, incomplete, or inaccurate item can give us information about real-world performance; in fact, evaluating how a candidate responds to such an item can be highly informative (although this is not a standard, intentional practice). This focus on the items themselves, rather than on the knowledge and competence the answers suggest, is what leads people to incorrectly question the validity of the certification.
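A small simulation makes the point; the data are entirely invented (a simple Rasch-style response model), not drawn from any real certification program. Each item’s value is summarized by how responses to it correlate with performance on the rest of the exam, the point-biserial discrimination index psychometricians routinely compute.

```python
# Invented simulation: an item's value lies in how responses correlate with
# overall competence (point-biserial item discrimination).
import numpy as np

rng = np.random.default_rng(0)
n_candidates, n_items = 500, 40
ability = rng.normal(size=n_candidates)  # latent competence
difficulty = rng.normal(size=n_items)

# Simple Rasch-style model: P(correct) grows with (ability - difficulty).
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = rng.random((n_candidates, n_items)) < p_correct

total = responses.sum(axis=1)
for item in range(3):  # a few example items
    rest_score = total - responses[:, item]  # exclude the item itself
    r_pb = np.corrcoef(responses[:, item], rest_score)[0, 1]
    print(f"item {item}: discrimination r_pb = {r_pb:.2f}")
```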

Similarly, many people think a certification should be something that can be specifically taught; they believe a training course should be all that is necessary to achieve certification. However, this does not align with what we know about the development of human competence. There is a big difference between knowing something and knowing how to apply that knowledge competently. Certification is an assessment of performance, not knowledge, and as such cannot be taught directly. If someone can take a class and immediately achieve certification, then either (a) the assessment does not evaluate actual performance, or (b) the course simply teaches the answers to the questions on the exam rather than the full domain of knowledge. In either case, you have biased the inferences made by the assessment. Competence begins with knowledge, but must also include experience and practice. This cannot be gained through a class, only through concerted effort; you cannot buy competence.

Finally, many people believe that once a certification has been achieved, it should never need to be evaluated again, or that taking a course instead of an assessment should suffice. The former belief simply ignores the fact that performance capability is a perishable commodity: if you don’t use it, you lose it. The latter once again confuses knowledge with performance. How frequently re-evaluation needs to happen, and whether continued education is sufficient to demonstrate continued performance, depends entirely on the knowledge domain the certification assesses. In highly dynamic environments, re-evaluation may need to occur much more frequently and rely more on assessments than in other domains; regardless, ongoing evidence of continued capability is a must if standards are to be maintained.

Leave Certification to the Professionals

The heart of the problem is that everyone seems to believe they are experts in the design of certification programs and assessments simply because they have participated in them. The reality is that certification is a rigorous, research-based, scientific endeavor. The minimum requirement to be considered a psychometrician is a PhD; that represents a great deal of specialized knowledge most people do not have. The decisions made are not arbitrary, nor are they made with any intention other than maintaining the standard and making valid assessments of individuals according to that standard.

At the end of the day, the value of a certification is whether the people who achieve it can perform according to the standard the certification sets forth.  If the certification cannot guarantee that, then it is not valid and has no value.  However, this requires people to actually understand what that standard is, what it means, and why it was created.  It requires people to accept that there is a rigorous process accounting for all of the decisions, and that those decisions all support validity.  Finally, it requires people to understand that just because they may be experts in their field, they are not experts in certification.

What Makes an Expert, an Expert?

Re-post from LinkedIn April 28, 2016

Human beings have likely been trying to understand expertise since the first cave dweller wondered why Grog was so much better at hunting, or why Norg always seemed to know where the best berries were.  Efforts to identify, and more precisely to predict, expertise have been ongoing ever since. It’s no wonder: a McKinsey report showed that, depending on their role, high performers could generate significantly greater productivity (40%), profit (49%), and revenue (67%) compared to even average performers (Cornet, Rowland, Axelrod, Handfield-Jones, & Welsh, 2001). While we are still not very good at predicting future expertise, or even at objectively quantifying it, we have learned a few things along the way. Expertise is not necessarily an innate ability. Nor is expertise necessarily a matter of what you know, but of how you know it.

Scientific assessment of individual differences seems to have hit critical mass in the mid- to late-19th century, culminating in the development of the general theory of intelligence (Spearman, 1904). Spearman was attempting to create a unified way of looking at and evaluating innate capability, sans training or experience. This idea that certain human beings were simply destined for greatness was the impetus for the intelligence testing that we still use today for assessing potential (e.g., IQ). While many people (including businesses) put a lot of stock into general measures of intelligence, it turns out that actual, real world performance is not simply a matter of innate ability. For instance, IQ measures proved to be useless in predicting the rankings of internationally ranked chess players. In fact, studies have shown intelligence measures account for only between 4% and 30% of real world performance (Sternberg, Grigorenko, & Bundy, 2001). Even at the high end of that range, more than two-thirds of an individual’s real world performance is unaccounted for by standard intelligence measures. Real world performance is not innate ability alone, but the product of ability informed by experiential knowledge and skills.
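
To spell out the arithmetic behind that two-thirds claim (a quick sketch, assuming the cited percentages refer to variance explained, i.e., the squared correlation between intelligence measures and performance):

\[
r^2 = 0.30 \quad\Longrightarrow\quad 1 - r^2 = 0.70
\]

Even at the very top of the reported range, 70% of the variance in real world performance, comfortably more than two-thirds, is left unexplained by intelligence alone.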

The mid- 20th century ushered in the idea that, perhaps, expert performance was the result of specialized knowledge developed over time. Michael Polanyi famously defined tacit knowledge by suggesting we know more than we can tell (Peck, 2006; Polanyi, 1966). As opposed to explicit knowledge, which can be written down, easily expressed and taught, tacit knowledge remains elusive even to those who have it (Mahroeian & Forozia, 2012). While explicit knowledge is what we know, tacit knowledge is the ability to apply that knowledge successfully; experts exhibit some form of meta-knowledge enabling them to better apply their knowledge. Experts achieve automaticity in both their thoughts and actions, making complex processes appear effortless and simple. Yet, experts are generally unable to explain how they do this. The result is that experts appear to solve problems intuitively, not because they specifically know more, but because they know better.

One explanation of where tacit knowledge originates is through the development of superior mental models of domain knowledge. Research comparing the mental models of expert and novice practitioners shows that experts organize their knowledge in ways uniquely different from novices (Chi, Glaser, & Rees, 1982; Gogus, 2013). This research substantiates that a principal difference between an expert and a novice is the structure of their mental models, not necessarily the contents of their knowledge. The mental models of expert practitioners appear to coalesce to a point of maximum efficiency regardless of how the skills develop (Schack, 2004). These efficient mental models allow experts immediate access to (more) knowledge and procedures relevant for efficient use in daily application (Feltovich, Prietula, & Ericsson, 2006). In short, experts generate the best solutions under time constraints, better perceive the relevant characteristics of problems, are more likely to apply appropriate problem-solving strategies, are better at self-monitoring to detect mistakes and judgment errors, and perform with greater automaticity and minimal cognitive effort (Chi, 2006). Experts perform faster and more accurately with less effort.

A recent study comparing more-experienced and less-experienced soccer players used eye-tracking technology to make this point exceptionally salient (Lex, Essig, Knoblauch, & Schack, 2015). The study determined that while more-experienced and less-experienced players fixated on visuals of game situations for the same amount of time per pixel, more-experienced players focused on four specific aspects of the visual while less-experienced players fixated on many areas irrelevant to the decision-making process; the result was that more-experienced players made effective decisions much faster than their less-experienced counterparts. The point here is that experts are capable of screening out extraneous information and focusing solely on the details that matter in order to make effective, efficient, and accurate choices. While all of the players had the same basic knowledge of the game, more-experienced players applied that knowledge more efficiently to make accurate decisions more quickly.

So, what makes an expert, an expert? Much like the number of licks it takes to reach the center of a Tootsie Pop, the world may never really know. Despite apocryphal notions, we don’t know how long it takes for someone to become an expert, or even whether all individuals are capable of becoming experts.  We don’t even have a universal means of determining if someone has truly become an expert, or of easily differentiating experts from novices objectively. What we do know is that expertise is not something you are born with, and it is not something achieved simply by obtaining knowledge or training.  It is a metamorphosis from knowing what to knowing how.

One might say that expertise is simply a state of mind.

References

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Cornet, A., Rowland, P. J., Axelrod, E. L., Handfield-Jones, H., & Welsh, T. A. (2001). War for talent, part two. McKinsey Quarterly, (2), 9–12. Retrieved from http://www.mckinsey.com/

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. doi:10.1007/s11423-012-9281-2

Mahroeian, H., & Forozia, A. (2012). Challenges in managing tacit knowledge: A study on difficulties in diffusion of tacit knowledge in organizations. International Journal of Business and Social Science, 3(19), 303–308. Retrieved from http://ijbssnet.com/

Peck, D. A. (2006). Tacit knowledge and practical action: Polanyi, Hacking, Heidegger and the tacit dimension. ProQuest Dissertations and Theses. University of Guelph (Canada), Ann Arbor. Retrieved from http://search.proquest.com.library.capella.edu/docview/305337938?accountid=27965

Polanyi, M. (1966). The Tacit Dimension. Knowledge in Organizations. Butterworth-Heinemann. doi:10.1016/B978-0-7506-9718-7.50010-X

Spearman, C. (1904). “General intelligence,” objectively determined and measured. The American Journal of Psychology, 15(2), 201–292. doi:10.2307/1412107

Sternberg, R. J., Grigorenko, E. L., & Bundy, D. A. (2001). The predictive value of IQ. Merrill-Palmer Quarterly, 47(1), 1. doi:10.1353/mpq.2001.0005

Why you want to, but won’t, hire a Versatilist

The quality of an organization’s human capital is more important today than at any time before.  Global, dynamic markets eradicate the competitive advantages of capital, equipment, and land (Drucker, 1992; Friedman, 2006; Hayton, 2005; Teece, 2011).  Today, differentiation comes from combining undifferentiated inputs and resources in unique ways (Dutta, 2012; Reeves & Deimler, 2011; Teece, 2007, 2011, 2012; Teece, Pisano, & Shuen, 1997). As such, the source of competitive differentiation and strategic value is not having superior resources, but the skill and knowledge necessary to innovate.  One way to describe this organizational ability is dynamic capabilities (Teece, 2012). Dynamic capabilities characterize the organizational ability to sense and seize new opportunities and transform the organization, maintaining a competitive position.  Organizations with strong dynamic capabilities change and adapt to dynamic markets, are strong innovators, and build lasting strategic differentiation.  The only place this knowledge and skill resides is within the individuals working for the organization: human capital (Blair, 2002; Ployhart, Nyberg, Reilly, & Maltarich, 2014).

If you take the notion of dynamic capabilities and apply it to a person instead of an organization, you get a versatilist.  Versatilists are wired to sense and seize new opportunities, leverage new skills and abilities, and innovate who and what they are.  They are always changing and adapting to the world around them to become experts in new areas.  They don’t have access to different knowledge or methods of learning than other people, but they combine them in new ways to create new versions of themselves.  If organizations need dynamic capabilities to innovate and be successful, who better than versatilists to drive that effort?  This is why organizations should identify and recruit versatilists as employees.

Unfortunately, current recruiting and hiring strategies are poorly aligned to this goal. Just look at your average senior-level job description: 5–7 years doing one thing, with 10+ years in the same industry with the same focus; or another: 10 years in this job role, plus 5 years in a specific industry. The job descriptions go on to list several dozen areas of knowledge and experience necessary to be considered a good fit.  These descriptions use terms like “successful track record of”, “expertise in”, and “demonstrated experience with”. While this likely doesn’t sound out of place to many, especially those in HR and recruiting, it puts the job in a nice little box tied with a bow.  Versatilists will rarely look twice, for a couple of reasons.

First off, after 5–7 years doing the same thing, most versatilists are ready for the next challenge, not the next opportunity to do the same thing. The industry experience is less of an issue (although it’s still a bad way to get new ideas into your organization).  Versatilists don’t just adapt and change because of external forces; they are not forced down a different path. They choose to do new things in new ways. There is an internal drive to know more, to do more, and to do it better.  Once a versatilist has become an expert in a role, they see little opportunity for growth, either personal or professional, and are naturally attracted to the next opportunity.

Second, unlike a generalist, who tends to oversell their experience, versatilists, having become experts, generally undersell theirs.  This is the Dunning-Kruger effect in action (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999).  According to this research, people tend to estimate their knowledge of any topic as at, or slightly above, average.  Those with the least actual knowledge grossly overestimate what they know (and don’t realize they are doing it).  However, this works with experts as well, who underestimate their knowledge by assuming it, too, is just at, or slightly above, average (sometimes referred to as imposter syndrome).  Because versatilists become experts in each of their chosen areas, even if you ask for “expertise” in that specific area, they will not generally feel qualified. This is further compounded when the job description suggests the candidate should be competent in dozens of areas.

Consequently, organizations limit their ability to hire versatilists the minute they draft a job description, making themselves unattractive to the very human capital they should want most.  Organizations cannot become innovative or develop dynamic capabilities while hiring based on check boxes and job descriptions of what the job has always been.  Instead, organizations should be hiring the people who can adapt and change the job into what it needs to be tomorrow.  Unless you change the way you recruit and hire, you’re more likely to hire someone without the skills you thought you needed and with no capacity to develop the skills you really need.

References

Blair, D. C. (2002). Knowledge management: Hype, hope, or help? Journal of the American Society for Information Science & Technology, 53(12), 1019–1028.

Drucker, P. F. (1992). The post-capitalist world. Public Interest, 109(Fall 1992), 89–101. Retrieved from http://www.nationalaffairs.com/

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Dutta, S. K. (2012). Dynamic capabilities: Fostering ambidexterity. SCMS Journal of Indian Management, 9(2), 81–91. Retrieved from http://search.proquest.com/

Friedman, T. L. (2006). The world is flat: A brief history of the twenty-first century. New York, NY: Farrar, Straus and Giroux.

Hayton, J. C. (2005). Competing in the new economy: the effect of intellectual capital on corporate entrepreneurship in high-technology new ventures. R&D Management, 35(2), 137–155. http://doi.org/10.1111/j.1467-9310.2005.00379.x

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

Ployhart, R. E., Nyberg, A. J., Reilly, G., & Maltarich, M. A. (2014). Human capital is dead; long live human capital resources! Journal of Management, 40(2), 371–398. http://doi.org/10.1177/0149206313512152

Reeves, M., & Deimler, M. (2011). Adaptability: The new competitive advantage. Harvard Business Review, 89(7/8), 134–141. Retrieved from http://hbr.org/

Teece, D. J. (2007). Explicating dynamic capabilities: the nature and microfoundations of (sustainable) enterprise performance. Strategic Management Journal, 28(13), 1319–1350. http://doi.org/10.1002/smj.640

Teece, D. J. (2011). Dynamic capabilities: A guide for managers. Ivey Business Journal Online, 1. Retrieved from http://search.proquest.com/

Teece, D. J. (2012). Dynamic capabilities: Routines versus entrepreneurial action. Journal of Management Studies, 49(8), 1395–1401. http://doi.org/10.1111/j.1467-6486.2012.01080.x

Teece, D. J., Pisano, G., & Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533. http://doi.org/10.1016/b978-0-7506-7088-3.50009-7

Three Reasons Certification is Better than College.

Re-post from LinkedIn – April 8, 2016

Much of the national debate around the value of a college education seems to revolve around the cost of college, particularly around finding ways of making college more affordable and accessible to more people. This frame of reference assumes that a college education is the only means of post-secondary training and education that has value; this is simply not true. According to an analysis of employment during the Great Recession, 4 out of every 5 jobs lost were held by those without any formal education beyond high school; those without post-secondary training were more than three times as likely to lose their jobs as those with even “some college” (Carnevale, Jayasundera, & Cheah, 2012). This, by itself, suggests that even minimal post-secondary training can garner significant benefit; having a job is preferable to not having one. One way to accomplish this is through industry-based certifications (IBCs).

IBCs offer a number of advantages for improving an individual’s employability over simply making college more affordable and accessible. First, with regard to affordability and accessibility, IBCs offer a greater return on investment (ROI) than a traditional college education. In part because of their far lower time and money commitment, IBCs also provide a more flexible solution, either to replace college or to aid in preparation for later college education. Finally, the dynamic nature and industry relevancy of IBCs provide stronger signals to employers concerning an individual’s ability to actually do the job they need today, rather than months or years down the road. The strong case for IBCs begins with simple economics.

Direct ROI

Given the focus on the ROI of post-secondary education, a comparison of the ROI of certification versus a Bachelor’s degree seems relevant. According to the U.S. Census Bureau (Ewert & Kominski, 2014), with the exception of a Master’s degree, there is an earnings premium for achieving certification or licensure regardless of education level. This premium predominantly benefits those with less post-secondary educational investment (see Figure 1). While it is true the earnings premium for having a Bachelor’s degree is much greater (see Figure 2), this is not a measure of return on investment. Return on investment is a measure of what you get for what you put in; i.e., ROI is the amount you can expect to get back for every dollar spent. This is where the ROI of certification is substantially better.

Assuming a cost of $40,000 for a Bachelor’s degree (probably a low estimate) and a cost of $5,000 to achieve certification (probably a high estimate), the ROI of achieving certification for someone with only a high school education is 2.3 times that of achieving a Bachelor’s degree (see Figure 3). Furthermore, this is just a starting point, as it doesn’t account for differences in earnings while those achieving a Bachelor’s degree remain in school, or for the cost of interest on student loans for college tuition. The fact is, certification provides individuals an extremely efficient mechanism to improve their earnings potential and achieve the post-secondary credentials that improve their ability to get and keep a job, even during tough economic times. The fact that IBCs add value both to those without other post-secondary education and to those with it also demonstrates the greater flexibility of certifications.
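
For readers who want to see the mechanics, here is a minimal sketch of that ROI comparison in Python. The annual premium figures are hypothetical placeholders (the actual values come from the Census data cited above and are not reproduced here); they are chosen only to show how a roughly 2.3-to-1 ratio can arise when ROI is computed as the annual earnings premium per dollar invested:

    # Costs as stated in the text
    ba_cost = 40_000     # Bachelor's degree ("probably a low estimate")
    cert_cost = 5_000    # certification ("probably a high estimate")

    # Hypothetical annual earnings premiums (placeholders, not the Census figures)
    ba_premium = 17_500    # extra earnings per year from a Bachelor's degree
    cert_premium = 5_000   # extra earnings per year from certification alone

    # ROI here = annual premium returned per dollar invested
    roi_ba = ba_premium / ba_cost        # 0.44 per dollar
    roi_cert = cert_premium / cert_cost  # 1.00 per dollar

    print(f"Bachelor's ROI:    {roi_ba:.2f} per dollar invested")
    print(f"Certification ROI: {roi_cert:.2f} per dollar invested")
    print(f"Ratio: {roi_cert / roi_ba:.1f}x")  # ~2.3x in favor of certification

The point of the sketch is the shape of the calculation, not the specific numbers: a much smaller premium can still yield a much larger return when the required investment is an order of magnitude smaller.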

Flexibility

One of the challenges to simply making a college education more affordable (or free) is that cost is not the sole factor contributing to non-participation or non-completion. An analysis of college completion statistics showed a 7% difference in Bachelor’s degree attainment between students who complete high school with a 3.0 GPA and those with a 3.5 GPA (Rose, 2013). Rose also reported that family and work responsibilities significantly affect the chances of completing a degree program. In other words, while the cost of a college education might inhibit individuals from starting a degree program, individual preparedness and the time commitment necessary to complete a degree are significant contributors to whether individuals ever actually graduate and garner the benefits. This is likely to be particularly true for low-income or disadvantaged students. IBCs provide more flexibility to address these challenges.

Firstly, IBCs have significantly lower time commitments associated with their completion, making it that much easier for students who must also maintain family and work obligations to complete the requirements for certification. Many IBCs do not even require formal classes or specific training, allowing individuals to self-study as they are able or as life permits. Finally, most IBCs award credentials not based on having completed a regimented program of study, but upon passing competency-based exams. This means that students can take as much time, and as many attempts, as necessary without suffering negative consequences; it is not a one-time deal, and thus provides greater chances of ultimate success. This applies both to those without any other post-secondary training and to those with degrees who are simply looking for additional earnings potential.

Secondly, IBCs may provide students not yet ready for a formal degree program with the knowledge and skill to prepare them for a future degree. IBCs can give students exposure to a field of study without the time and financial commitment associated with a formal degree program, reducing the costs associated with choosing a career they ultimately find unsatisfying or unfulfilling. In addition, this added knowledge and skill may give students the confidence and ability necessary to complete degrees they would otherwise have been unprepared for.

Not everyone has the time or capability to commit to a formal degree program. This has a much larger effect on educational outcomes than cost; simply reducing the cost or providing universal access does not address either of these challenges. IBCs fill a gap between the demands of formalized post-secondary training and the real-world needs of students just trying to stay ahead in a highly competitive marketplace while simultaneously making ends meet (Claman, 2012). In addition, IBCs are increasingly valuable to employers.

Stronger Employability Signals

“The value of paper degrees lies in a common agreement to accept them as a proxy for competence and status, and that agreement is less rock solid than the higher education establishment would like to believe” (Staton, 2014, para. 3).

Despite the nearly $800 billion spent each year in the United States on human capital development beyond primary and secondary education, nearly 70% takes place outside of four-year colleges and universities; of that, U.S. employers spend almost $460 billion on formal and informal employee training alone (Carnevale, Jayasundera, & Hanson, 2012). According to The Economist, only 39% of hiring managers feel college graduates are ready to be productive members of the workforce (“Higher education: Is college worth it?,” 2014).  The Economist further points out that the skill gap between college-degreed applicants and the needs of employers has left 4 million jobs unfilled. It is no wonder employers are beginning to question whether degrees are appropriate proxies for real world competence; some are even seeing advanced degrees as a negative hiring signal, requiring more cost with little benefit (Staton, 2015).

“The world no longer cares about what you know; the world only cares about what you can do with what you know” (Tony Wagner, as quoted by Friedman, 2012, para. 11).

The hands-on, competency-based aspects of IBCs not only create value for individuals directly, but also indirectly, by providing stronger signals to employers about the actual competence of job candidates. The dynamic and flexible nature of IBCs makes them a better reflection of current industry standards and competence, even in rapidly changing industries (Carnevale, Jayasundera, & Hanson, 2012). Perhaps even more important, the standards and competency-based testing utilized in IBCs improve the ability to objectively compare applicants, something that has proven extremely unreliable for post-secondary metrics like GPA (Carnevale, Jayasundera, & Hanson, 2012; Swift, Moore, Sharek, & Gino, 2013). IBCs provide employers with highly credible evidence of an applicant’s ability to actually do something with their knowledge, not just their ability to know something.

Employers increasingly embrace IBCs as a more reliable and valid indicator of candidate competence, and they are questioning the value of traditional post-secondary indicators (Carnevale & Hanson, 2015). Because IBCs are, by definition, industry-based, applicants holding IBCs are more likely to have relevant, up-to-date skills that meet national or international standards. IBCs are not only easier to evaluate, but also provide strong indicators that a prospective applicant will not need additional employer-based training before becoming productive. This is likely why even holders of advanced professional degrees are paid premiums for also having IBCs (see Figure 1).

Conclusion

The debate about the current state of education in the United States is a worthwhile discussion, perhaps even a critical one in light of the challenges facing us. The problem is that a single means of post-secondary education (the four-year degree) dominates the debate, along with a singular focus on the cost of educating to that level. This debate fails to account for the many other factors affecting student outcomes, and for the actual needs of employers. The reality is that advanced economies are not dominated by high-volume, low-value production, but by low-volume, high-value production (Friedman, 2012), and the demand for “middle-education” jobs is growing and will continue to grow for many years (Carnevale, Jayasundera, & Hanson, 2012). Without addressing these realities, we are only perpetuating a divide between those with degrees and those without, while still failing to meet the needs of business. There will always be a need for formal degrees, but that does not make them the panacea for all people and all jobs.

At the end of the day, credentialing is an attractive option for anyone looking to improve their employment options.  IBCs provide a greater ROI, in a shorter amount of time, than formal degrees.  The flexibility and less structured design of IBCs make them easier to obtain, especially for students either unprepared for, or unable to commit to, formal programs.  Furthermore, IBCs send strong signals to potential employers about an individual’s ability to contribute on day one of employment.  In many cases, the ROI, the flexibility, and the strong employment signals attributed to IBCs may very well make them a better option than college; in other cases, IBCs may be an essential stepping-stone to that first degree by providing the skills, and the additional income, necessary to commit to obtaining a formal degree.  And if you already have a Bachelor’s degree, these same benefits await you compared to getting a graduate degree.  Certification may very well be better than college for many.

NOTE: Anyone interested in exploring how competency-based credentialing is a critical component of the future of higher education should investigate WorkCred (http://www.workcred.org/), a non-profit organization working to elevate the visibility of credentialing as an essential ingredient in the future of human capital development in the 21st century. The author is not affiliated with WorkCred.

References

Carnevale, A. P., & Hanson, A. R. (2015). Learn & earn: Career pathways for youth in the 21st century. E-Journal of International and Comparative Labour Studies, 4(1). Retrieved from https://cew.georgetown.edu

Carnevale, A. P., Jayasundera, T., & Cheah, B. (2012). The college advantage: Weathering the economic storm. Retrieved from https://cew.georgetown.edu/

Carnevale, A. P., Jayasundera, T., & Hanson, A. R. (2012). Career and technical education: Five ways that pay. Retrieved from https://cew.georgetown.edu/

Claman, P. (2012). The skills gap that’s slowing down your career. Harvard Business Review. Retrieved from http://hbr.org

Ewert, S., & Kominski, R. (2014). Measuring alternative educational credentials: 2012. U.S. Census Bureau. Retrieved from https://www.census.gov/

Friedman, T. L. (2012, November 17). If You’ve Got the Skills, She’s Got the Job. The New York Times. New York, NY. Retrieved from http://www.nytimes.com/

Higher education: Is college worth it? (2014, April). The Economist.

Rose, S. J. (2013). The value of a college degree. Retrieved from http://cew.georgetown.edu/

Staton, M. (2014). The degree is doomed. Harvard Business Review. Retrieved from https://hbr.org/

Staton, M. (2015). When a fancy degree scares employers away. Harvard Business Review. Retrieved from http://hbr.org/

Swift, S. A., Moore, D. A., Sharek, Z. S., & Gino, F. (2013). Inflated applicants: Attribution errors in performance evaluation by professionals. PLoS One, 8(7). doi:10.1371/journal.pone.0069258