You may not be familiar with the “Dunning-Kruger Effect”; or you may only have heard the colloquial explanation that “stupid people are too stupid to know they’re stupid,” most humorously delivered by Monty Python alum John Cleese here.
In reality, if you read the Wikipedia entry or the actual research, what Dunning and Kruger discovered is that human beings tend to rate their ability, at almost anything, as at or slightly above the average of all people. This is not only a statistical impossibility; it also has some interesting ramifications.
The first, oft-repeated, implication is that those with the least capability tend to overestimate their capability the most. That is to say, if we assume 50% is the “average ability” across the population, those with 0% actual capability will overestimate their ability by 50% (or more), while those with 40% only overestimate by 10%. One explanation for this is that the skills necessary to evaluate capability are exactly the same skills necessary to have the capability; i.e., if you don’t know what you are doing, it is difficult to recognize that you, or someone else, are doing it wrong. My favorite example of this would be something like English grammar or punctuation: if you don’t have a firm grasp of it, it is impossible for you to evaluate how well you, or someone else, are performing. You must know in order to evaluate. This is where the “too stupid to know” comes from.
The second, much less discussed, implication is that those with the most capability tend to underestimate their knowledge and competence. Back to the 50% scale: if someone actually performs at the 80 or 90% level, they tend to severely underestimate their performance. This is frequently cited as a contributing factor to imposter syndrome, where those with superior capability don’t necessarily believe they are superior. I attribute this to the colloquial definition of an expert as someone who knows more and more about less and less (purportedly coined by one of the Mayo brothers of Mayo Clinic fame). An extension of this says that an expert is someone who knows more and more about less and less, until they know absolutely everything about nothing. While this was likely meant to be more humorous than anything, there is a certain meta-philosophical element to it: the process of discovering more and more about an ever-narrower area of expertise also tends to make it obvious how little you really know about anything else. Experts, while becoming more knowledgeable about their area of expertise, become increasingly cognizant of how little they really know elsewhere.
In either of these situations, overestimating or underestimating, the challenge is that self-reported capability is a very poor predictor of actual ability; and, if you really need an expert because you aren’t one, it is very unlikely you will be able to determine if someone else is one or not.
Hedging Your Bets
Why am I going on about the Dunning-Kruger effect? I point out this well-known characteristic because it touches on my area of expertise … determining the best way to assess expertise, particularly when it comes to augmenting your organization’s capabilities; i.e., this is something we need to think about when we hire people. We need to take this effect into account and develop strategies to “hedge our bets”.
While resumes are useful, we all know that just because you’ve done something in the past doesn’t mean you are actually any good at it; and resumes, although not necessarily outright false, are generally overinflated. Some of this is smart marketing on the candidate’s part, but some may very well be that the candidate actually believes they are more adept than they are. On the flip side, that expert you’re looking for may be a lot less comfortable touting expertise they don’t feel they actually have. Resumes and interviews are useful, but woefully inadequate and imprecise.
One way to address this is to ensure that the screening/interview process involves some kind of valid psychometric assessment of ability (like respected certifications and licensure) and/or the direct involvement of someone who you know has the appropriate skills to assess the candidate’s ability (if you can find one). You can’t rely on self-reported capability, and you can’t expect someone without that capability to evaluate a candidate’s capability … even in the screening process.
Another, perhaps easier, way to hedge your bets is to broaden your horizons. When we post job opportunities, we frequently overestimate the skills required, producing a “wish list” that values “specific” experience over diversity of experience (as I’ve discussed here: Would You Hire Me?). However, if we limit ourselves to one dimension, it can be hard to determine what a candidate’s true capabilities are. If, instead, we look for people who have been successful in, or demonstrate knowledge of, multiple domains, backed by work experience, we may get a better estimation of their knowledge in specific domains. That is to say, a Versatilist, with a broader set of knowledge across multiple domains, is more likely to underestimate their specific domain knowledge than overestimate it. As long as this breadth doesn’t cause you to overlook these candidates, the only downside is that you may get more than you expected, not less.
Don’t be too stupid to know you’re stupid
The Dunning-Kruger effect is just another factor hindering employers from finding the best people. We all think we are better at everything, including evaluating prospective employees, than we generally are; and the very people we want are likely to be overlooked because they undersell their capabilities. Using other valid, objective criteria like certifications certainly helps, and including experts, instead of AI engines and unqualified HR personnel, in the screening and interview process would also be beneficial.
For my money, until I find a way to fund continued research into better methods, I’ll continue to look for those Versatilists out there who have knowledge and experience, and who likely undervalue their true capability.