Credentialing

Standardized Testing in the Context of Diversity, Equity, and Inclusion: We Need More, Not Less.

An article recently appeared in the New York Times concerning the ongoing debate over standardized testing, specifically the use of the SAT and ACT in the college admissions process. The use of these tests has been debated for years, but during the pandemic, when in-person testing became impossible, many institutions dropped the requirement and have simply never reinstated it.

The point of the article was that, despite many concerns that the exams themselves are biased in any number of ways, the use of standardized test scores at institutions requiring them has actually increased the diversity of the student population (across race, sex, socioeconomic status, etc.) more than virtually any other admissions criterion. In addition, the article points out that what many people see as bias in the tests themselves is likely misplaced: the tests accurately predict what they are intended to predict, regardless of race or economics, namely, whether the student will do well in college.

Herein lies, perhaps, one of the most misunderstood aspects of standardized tests … they can only reliably predict what they are intended to predict and nothing else. As a practicing academic who spends much of his time working on standardized testing programs in the technology industry, I am constantly confronted with these misconceptions.

What are Standardized Tests?

The first thing to understand is what, exactly, standardized testing is. In short, standardized tests are specifically built to predict some aspect of the individual taking the assessment. In the case of the ACT and SAT exams, they are designed to predict how well the individual will do in a university setting, and nothing more. In addition, by “predict”, I mean that they make a statistical inference, not an absolute determination, because they are based on statistical methods that describe a group, not any one individual. They do not specifically measure real-world capability. They do not measure overall intelligence. They only measure and predict what they are designed to.

Two key aspects of this are “Validity” and “Reliability”. Validity is a measure of how well an assessment does what it says it does. Does a high score on the exam actually predict what was intended, or more succinctly, “are we measuring what we said we were measuring?” Reliability is a measure of whether the same individual, taking the same assessment, consistently scores the same in the absence of any other changes (like preparation or training); i.e., does the test make the same prediction every time it is used, without other factors affecting the results.
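To make these two concepts concrete, here is a minimal sketch, with entirely hypothetical numbers, of how they are often quantified: test-retest reliability as the correlation between two attempts by the same people, and predictive (criterion) validity as the correlation between scores and the outcome the test claims to predict.

```python
import numpy as np

# Hypothetical scores for six test-takers; none of these numbers are real data.
attempt_1 = np.array([1180, 1320, 1050, 1440, 1210, 990])   # first sitting
attempt_2 = np.array([1160, 1335, 1070, 1425, 1200, 1010])  # same people, retest
first_year_gpa = np.array([2.9, 3.5, 2.6, 3.8, 3.1, 2.4])   # outcome the test claims to predict

# Test-retest reliability: do repeated attempts rank people the same way?
reliability = np.corrcoef(attempt_1, attempt_2)[0, 1]

# Predictive validity: do scores track the intended outcome (here, first-year GPA)?
validity = np.corrcoef(attempt_1, first_year_gpa)[0, 1]

print(f"test-retest reliability: {reliability:.2f}")
print(f"predictive validity:     {validity:.2f}")
```

Real testing programs estimate these with far larger samples and more sophisticated models, but the underlying logic is the same: reliability asks whether the instrument is consistent, and validity asks whether it predicts what it claims to.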

Despite what critics say, the SAT and ACT exams have both been shown to be valid predictors of what they measure, with high reliability. My score will accurately (within statistical deviations) predict my ability to be successful in college, and my score will be fairly consistent across multiple attempts unless I do something to change my underlying ability. As the NYT article points out, this remains true: the higher you score on these exams, the better your academic results in post-secondary institutions. The fact that there is a significant discrepancy in scores based on race, socioeconomic situation, or any other factor is, frankly, irrelevant to the validity and reliability of the exam. Using the results of these exams in any context other than the one they were designed for is an invalid use.

The Legacy of Mistrust

These basic misunderstandings of standardized testing breed mistrust and suspicion about what the tests do and how they are used. This is nothing new and likely stems from the development and use of assessments in the past. The original intelligence quotient (IQ) test, developed around the turn of the 20th century, is subject to the same issues, including suggestions of racial and socioeconomic bias. In part this is because the IQ test is not actually a valid predictor of intelligence or the ability to perform successfully; but, like the SAT and ACT exams, research has shown it is a predictor of success in primary and secondary educational environments. Unfortunately, this was not fully understood when the assessment was created, and IQ has been misused in ways that have actually contributed to societal bias. This is the legacy that still follows standardized testing.

It is bad design, the misuse of standardized testing results, and the misinterpretation of those results that cause such spirited debate. The original IQ test purported to determine innate intelligence, but was actually a predictor of primary/secondary educational success. Furthermore, research suggests that IQ is a poor predictor of virtually anything else, including an individual’s ability to succeed in life. This is a validity issue: the test did not measure what it purported to measure. Because of that validity issue, IQ testing was then misused to further propagate racial and socioeconomic inequity by suggesting that different races or different classes were simply “less intelligent” than others, prompting stereotypes and prejudice that were simply unfounded.

Given this legacy, it is easy to understand why many mistrust standardized tests and believe they are the problem, rather than a symptom of a larger problem.

The Real Issue is NOT Standardized Testing

The conversation around standardized testing has suggested that the reason for racial and socioeconomic disparity is bias within the tests themselves. However, if we accept that standardized tests (at least ones well designed for validity and reliability) simply make a prediction, and that the SAT and ACT, in particular, make accurate predictions of a student’s ability to succeed in post-secondary education, then we must ask why there is a significant disparity in results based on race and socioeconomic background. Similarly, why did the original IQ test accurately predict primary/secondary educational outcomes, yet suffer from the same disparity? The real question is: why can’t students from diverse backgrounds equally succeed in our education system?

The answer is rather simple, and voluminous SAT and ACT data clearly indicate it: there is racial and socioeconomic disparity built into the educational system itself. This is a clear issue of systemic bias; your chances of success within the system are greatly affected by race and socioeconomic background. Either what we are teaching, or how we are evaluating performance, is not equitable to all students. This is the issue about which we should be having conversations, conducting research, and taking action. Continuing the debate, or simply eliminating standardized testing, is not going to address the bigger issue. If anything, eliminating SAT and ACT testing will help hide the issue, because we will no longer have such clear, documented evidence of the disparity. I don’t want to start any conspiracy theories, but maybe this is one reason so few educational systems are willing to reinstate ACT and SAT testing as part of their admissions requirements, especially when the research suggests these tests are better criteria for improving diversity than other existing means. They may be imperfect, but it is not the assessment’s fault; it is the system’s fault.

How to Improve? 

First, I want to be clear: I don’t have any specific, research-based solutions. So, before I offer any suggestions based on my years of being in the educational system as a student, my years of raising children going through the educational system, and over a decade working with standardized test design and delivery, I want to emphasize that the best thing we can do to improve is simply to change the conversation away from the standardized tests and focus on the educational system itself. We need research to determine where the issue actually exists; is it what we teach, or how we measure performance?  That MUST be the first step.

That being said, when it comes to “how we measure performance”, based on my background, education, and experience, I’m going to make a radical suggestion: more standardized testing. I know, I know. Our students are already inundated with standardized testing, but hear me out. While standardized tests are frequently used in our education system, they are rarely used to measure an individual student’s performance when it comes to grades (the ultimate indicator of success within the system); instead, they serve as assessments of the overall school’s performance. My suggestion is that these standardized tests may be a more equitable way to evaluate performance for the individual as well.

From an equity standpoint, while there are some proven correlations between individuals’ scores on the US National Assessment of Educational Progress (NAEP) and those same individuals’ ACT/SAT scores, the correlations are not perfect. In addition, the correlations are weaker for racial/ethnic minorities and low-income students. NAEP scores have also shown positive correlations with post-secondary outcomes, although they were not the only factor. Finally, since the NAEP assessment began in 1990, the disparity in scores across racial and socioeconomic lines has significantly diminished. This suggests the NAEP assessment may actually be better at measuring a student’s capability, rather than merely predicting post-secondary success, while still retaining some predictive power. Yet NAEP assessments are not used in any way to actually grade a student’s performance. At the very least, NAEP results may be a viable way to augment current admissions criteria and similarly reduce racial and socioeconomic disparity. They may also be a better way to measure “success” in the primary/secondary educational system than current methods, leading me to my next point.

The reality is that well-constructed, standardized assessments with proven validity and reliability are NOT how most of our students are evaluated today. Across the primary, secondary, post-secondary, and graduate levels, our students are routinely evaluated using assessments developed by individual teachers and/or subjective performance criteria. Those teachers are inadequately trained in how to design, build, and validate psychometrically sound assessments (with validity and reliability); as such, the instruments used to gauge student performance routinely do not measure that performance. Without properly constructed assessments, our students are more likely to be measured on their English proficiency, their cultural background, or simply whether they can decipher what the instructor was trying to say, rather than on the knowledge they have about the topic. Subjective evaluations (like those used for essay responses) are routinely shown to be biased and rarely give credence to novel or innovative thought; even professional evaluators trained to remove bias, like those used in college admissions, routinely make systematic errors in evaluation. Subjective assessments, in my personal and professional opinion, are fraught with inequity and bias that cannot be effectively eliminated. Furthermore, I can personally report that educational systems do not care, if the reaction to my numerous criticisms is any indication. Standardized testing would address this issue and, as we have seen with the NAEP, likely do a much better job of making equitable and fair performance assessments across students.

On top of that, our students’ performance is also often judged on things like homework, attendance, and work products created in the process of learning, rather than on what they have actually learned or know. This misses the point and likely exacerbates the disparity in “success” in our educational system. Single-parent and low-income homes, which also tend to be more racially segmented, can be dramatically affected by these types of assessments. First, you are outsourcing the learning to an environment you cannot control, where some students can gain experiential knowledge and others cannot; second, you compound that by further penalizing those who cannot with poor grades. While some students and parents (regardless of situation) may still engage in learning outside the classroom, making it mandatory and grading on it likely contributes to the disparity by giving students in the best situations an unfair advantage. Finally, from my own research into the development of expertise, I know that not all students require the same amount of experiential learning to master knowledge. The development of expert knowledge is idiosyncratic; some require more, while some require less. As such, we should not measure performance based on how the knowledge is obtained, but focus instead on whether students have it or not.

I know the legacy of mistrust will make this a hard stance for people to support, but the use of standardized testing for assessing student performance would address a number of significant issues in current practices. It can be less biased, provide more consistent results across schools, and if used in place of subjective or other non-performance criteria, be a more accurate reflection of student capability.

Conclusion

Standardized testing, especially the behind-the-scenes work required to create tests properly, is a mystery to most people. When you add the historical misuse and abuse of standardized testing, it is easy to see why many demonize these tests and question their results. The reality, though, is that well-constructed assessments, used properly, can not only help us uncover issues in society, but also help us address those issues. The data on SAT/ACT scores, both their ability to predict academic performance and the disparity in scores across racial and socioeconomic backgrounds, point clearly to the real problem: the racial and socioeconomic bias built into the education system. The education system’s definition of “success”, or how it is determined, is clearly biased. As such, we should not push to eliminate standardized testing, but instead look at how we can improve our definition and measurement of success by doubling down on standardized testing rather than continuing to measure success the way we do today.

Google Does Not Obviate “Knowing”

There is a strange notion making the rounds of social media in various forms, used to argue against traditional learning and assessment standards. This recurring theme suggests that the ubiquitous ability to leverage Google search, Wikipedia, or other online resources to find answers obviates the need to learn anything for yourself. I.e., if we need to know something, we can just look it up in real time and don’t need to waste time learning that information before we need it. This theme has come up in discussions of our educational curriculum, the supposed uselessness of standardized testing, and even employee assessment criteria.

The Internet was never intended to be a replacement for independent knowledge.

Perhaps this is a special case of the Dunning-Kruger effect (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999), but there are at least two clear reasons why access to knowledge is not equivalent to actually knowing it. The first is a complete disconnect from the way human beings develop skill and competency. The second is the assumption that real-time knowledge, although ubiquitous, is accurate and will always be available.

Having Facts is Not “Knowing”

The most incongruous part of this idea is the assumption that knowledge is the result of just having a bunch of facts.  Thus, if you can just look up the facts, you have knowledge.  Unfortunately, unlike in the Matrix, human beings cannot simply download competence and expertise.

Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts

The study of experts and expert knowledge has well established that the difference between experts and novices is not in what they know (the facts), but in how they apply those facts: how each fact fits with other facts and other pieces of knowledge. Expertise is the result of a process of integrating facts, context, and experience, and of developing more refined and efficient mental models (Ericsson, 2006). Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts. This cannot be done without internalizing those facts.

In addition, returning to Dunning-Kruger, without building competence, individuals are incapable of discerning the veracity of individual facts. Our ability to understand whether information is accurate, or of any substance, comes from being able to reconcile new information with our existing mental models and knowledge. Those with less competence are the least able to evaluate this information, making them the most susceptible not only to accepting incorrect information as fact, but also to developing mental models that incorrectly reflect reality.

Limits of Ubiquitous Knowledge Access

Although those of us living in developed economies take ubiquitous access to knowledge for granted, this is not the case for all human beings, nor is it guaranteed to always exist. It is estimated that only about 50% of the world’s population is connected to the Internet, over two-thirds of whom are in developed economies. Even these figures bear further investigation, as those in developing countries with Internet access are far more likely to be connected by slower, less reliable means, keeping their access from being truly ubiquitous. Furthermore, while China contributes significantly to the world’s total Internet users, the Chinese government does not allow full, unrestricted access to the knowledge available via the Internet. This leaves the number of people with true, ubiquitous access well below 50% of the population.

Even for those of us fortunate enough to have nearly ubiquitous access to an unrestricted Internet of knowledge, access is fragile.  Power outages as a result of simple failure, natural events, or even direct malice, can immediately render information inaccessible.  Emergency situations where survival might rely on knowledge also often exist outside the bounds of this seemingly ubiquitous access. Without a charge, or cellular connection, many find themselves ill-equipped to manage.

Dumbing Down our Society

The idea that access to knowledge is the same as having knowledge portends a loss of intellectual capital. Whereas societies in the past have maintained control by limiting access to information, we are creating a future where control is maintained by delegitimizing and devaluing the accumulation of knowledge through full access to information. We are positioning society to fail in the future because people will not only have become dependent on being spoon-fed information instead of actually learning, but will also have lost the ability to differentiate fact from fiction.

Not only is the idea that access to knowledge equates to having knowledge founded on shaky foundations lacking any kind of empirical basis, it undermines the actual development of knowledge

Although it would be nice to assume this is merely a dystopian view of the future, we are already seeing the effects of this process. As social media increasingly becomes the way our society views the world, we can already see how ubiquitous access to information is affecting our perceptions. Without the ability to think critically about the real-time information we receive, an ability developed only through the accumulation of knowledge and experience, our society is being manipulated into perspectives not of our own choosing, but of the choosing of others. We are losing the ability to process the information we receive and find ourselves increasingly caught in echo chambers that present only information supporting potentially incorrect worldviews.

The Internet was never intended to be a replacement for independent knowledge. It was developed to expand our ability to access information in the pursuit of developing knowledge and capability. Not only does the idea that access to knowledge equates to having knowledge rest on shaky foundations lacking any kind of empirical basis, it undermines the actual development of knowledge.

 

 

Resources

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Ericsson, K. A. (2006). An introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

 

Improving Multiple-Choice Assessments by Limiting Time

Standardized, multiple-choice assessments frequently come under fire because they test rote skills rather than practical, real-world application. Although this is a gross over-generalization that fails to account for the cognitive complexity to which items (questions) are written, standardized assessments are designed to evaluate what a person knows, not how well they can apply it. If that were the end of the discussion, you could be forgiven for assuming standardized testing is poor at predicting real-world performance or differentiating between novices and more seasoned, experienced practitioners. However, there is another component that, when added to standardized testing, can raise assessments to a higher level: time. Time, or more precisely, control over the amount of time allowed to complete the exam, can be highly effective in differentiating between competence and non-competence.

The Science Bit

Research in the field of expertise and expert performance suggests experts not only have the capacity to know more, they also know in a way that differs from non-experts; experts exhibit different mental models than novices (Feltovich, Prietula, & Ericsson, 2006). Mental models represent how individuals organize and implement knowledge, rather than explicitly determining what that knowledge encompasses. Novice practitioners start with mental models representing the most basic elements of the knowledge required within a domain, and those models gradually gain complexity and refinement as the novice gains practical experience applying them in real-world performance (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).

While Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, Feltovich et al. (2006) suggested these changes allow experts to process more information faster and with less cognitive effort, contributing to greater performance. Feltovich et al. (2006) noted this effect is one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications.

For example, Chi et al. (1982) determined that novices and experts approach problem solving in advanced physics in significantly different ways, despite all subjects having the same knowledge necessary for the problem solution; novices focused on surface details, while experts approached problems from a deeper, theoretical perspective. Chi et al. also demonstrated that novices’ lack of experience and practical application contributed to errors in problem analysis that required more time and effort to overcome. While the base knowledge of experts and novices may not differ significantly, experts appear to approach problem solving from a differentiated perspective, allowing them more success in applying correct solutions the first time and recovering faster when initial solutions fail.

In that vein, Gogus (2013) demonstrated that expert models were highly interconnected and complex in nature, representing how experience allows experts to apply greater amounts of knowledge in problem solving. This ability to apply existing knowledge with greater efficiency complements the difference in problem-solving strategy demonstrated by Chi et al. (1982): whereas novices apply problem-solving approaches linearly, one at a time, experts evaluate multiple approaches simultaneously in determining the most appropriate course of action.

Achieving expertise is, therefore, not simply a matter of accumulating knowledge and skills, but a complex transformation of the way experts implement that knowledge and skill (Feltovich et al., 2006). This distinction provides clues into better implementing assessments to differentiate between expert and novice: the time it takes to complete an assessment.

Cool Real-World Example Using Football (Sorry. Soccer)

In an interesting twist on typical mental model assessment studies, Lex, Essig, Knoblauch, and Schack (2015) asked novice and experienced soccer players to quickly and accurately decide the best choice of tactics (either “a” or “b”) given a video image of a simulated game situation.  Lex et al. used eye-tracking systems to measure how the participants reviewed the image, as well as measuring their accuracy and response time.  As one would expect, the more experienced players were both more accurate in their responses, as well as quicker. Somewhat surprising was the reason experienced players performed faster.

While Lex et al. (2015) determined both sets of players fixated on individual pixels in the image for nearly the same amount of time, experienced players had fewer fixations and observed fewer pixels overall. Less experienced players needed to review more of the image before deciding, and were still more likely to make incorrect decisions. On the other hand, more experienced players, although not perfect, made more accurate decisions based on less information. The difference in performance was not attributable to differences in basic understanding of tactics or of playing soccer, but to the ability of experienced players to make better decisions with less information in less time.

The Takeaway

Multiple-choice, standardized assessments are principally designed to differentiate what people know, with limited ability to differentiate how well they can apply that knowledge in the real world.  Yet, it is also well-established that competent performers have numerous advantages leading to better performance in less time.    If time constraints are actively and responsibly constructed as an integral component of these assessments, they may well achieve better predictive performance; they could do a much better job of evaluating not just what someone knows, but how well they can apply it.

 

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

Misconceptions about Certification

There seem to be widespread misconceptions about what “Certification” and “Licensure” are all about. Some see certification as just the final part of an educational regimen. Others see it as some kind of hurdle imposed by greedy organizations restricting access to some benefit. These misconceptions and skewed perspectives lead people to make demands that strike at the very heart of what certification is about and minimize the value of the very certification they are working to achieve. First:

What Certification IS …

Certification is a legally defined classification stating that a certifying body stands behind the ability of a certified individual to perform at some specified level. Certification is a very simple construct: it is the definition of a standard of performance, and a program assessing individuals against that standard. That’s it, nothing more and nothing less.

Setting a standard is the first part of any certification. Ideally, this standard is defined with the help of people who actually perform the job. In addition, a standard implies that everyone, no matter whether they have done the actual job for decades or participated in the development of the standard itself, must objectively demonstrate their capability in exactly the same manner. Unless the standard is applied objectively and equally to all individuals, it is not a standard. Maintaining that standard is the foundation of certification value; it is what the certification stands for and how it should be used.

Creating an assessment of that standard is the second part of any certification. This includes not only a mechanism for assessing current capability, but also a program or process to ensure future capability. Despite what many believe, creating an assessment is not a simple, ad hoc process where someone writes a test and makes people take it to prove their ability. It is, in fact, a highly rigorous process backed by decades of scientific research on measuring cognitive ability (psychometrics), the sole purpose of which is ensuring the validity of the decisions made based on assessment performance. It involves designing the assessment, job-task analysis of the job being assessed, formalized standards of item (question) writing, public beta-testing of items, psychometric evaluation of item performance, assessment construction, and evaluation of appropriate scoring. Done properly, it is time-consuming and costly (which is why some organizations don’t do it properly).
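As an illustration of what the “psychometric evaluation of item performance” step can look like in practice, here is a minimal sketch of classical item analysis on a made-up beta-test response matrix; the data and the specific statistics chosen (item difficulty and a corrected item-total discrimination) are assumptions for illustration, not a description of any particular certification program.

```python
import numpy as np

# Hypothetical beta-test results: rows are candidates, columns are items,
# 1 = answered correctly, 0 = answered incorrectly.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
])

total_scores = responses.sum(axis=1)

# Item difficulty: the proportion of candidates answering each item correctly.
difficulty = responses.mean(axis=0)

def discrimination(item: int) -> float:
    """Corrected item-total correlation: does doing well on this item
    go along with doing well on the rest of the exam?"""
    item_scores = responses[:, item]
    rest_scores = total_scores - item_scores
    return float(np.corrcoef(item_scores, rest_scores)[0, 1])

for i in range(responses.shape[1]):
    print(f"item {i}: difficulty={difficulty[i]:.2f}, discrimination={discrimination(i):.2f}")
```

Items that nearly everyone gets right (or wrong), or that correlate poorly with overall performance, are candidates for revision or removal before the live exam is constructed.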

Finally, because knowledge and competence are perishable commodities, mechanisms must be put in place to ensure certified individuals remain capable in the future. This is frequently done by limiting the length of time a certification is valid and requiring periodic re-certification (re-validation) of the individual’s capability. Other methods may include proof of continuing education and continued practice of the knowledge. Regardless, the ongoing evaluation of certified individuals must adhere to the original standard with the same validity, or the standard no longer has value.

There is no hidden agenda to certification.  There is no conspiracy.  It is simply to establish a standard and assess individuals compared to that standard.

Misconceptions about Certification

Really, any belief beyond the design of a standard and the assessment to that standard is a misconception about certification. However, there are a number of misconceptions that often drive changes detrimental to the rationale of certification. The most common relate to what an assessment is for, the role of training, and re-certification requirements.

Most people mistakenly assume the items within an assessment must represent the be-all and end-all of what someone should know. They don’t understand that a psychometric assessment is not about the answers themselves, but about the inferences we can make about performance based on those answers. There is no way to develop an exhaustive exam of all the knowledge necessary to be competent and still deliver that exam efficiently. However, we can survey a person’s knowledge and, through statistical analysis, infer whether they have all the knowledge necessary or not. The answer to any specific question is less important than how answering that question correlates with competence. Even a poorly written, incomplete, or inaccurate item can give us information about real-world performance; in fact, evaluating how a candidate responds to such an item can be highly informative (although this is not a standard, intentional practice). This focus on the items themselves, rather than on the knowledge and competence the answers suggest, is what makes people incorrectly question the validity of the certification.
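To make the idea of “surveying” knowledge concrete, here is a deliberately simplified sketch: treat the exam as a sample of items from a much larger domain and ask whether the candidate’s true proportion of domain knowledge plausibly clears a cut score. Real programs use far more sophisticated models (e.g., item response theory), and the function name, cut score, and numbers below are all assumptions for illustration.

```python
import math

def infers_mastery(items_answered: int, items_correct: int,
                   cut_score: float = 0.75, z: float = 1.645) -> bool:
    """Infer whether a candidate's true proportion of domain knowledge exceeds
    the cut score, using a one-sided normal approximation on the sampled items."""
    p_hat = items_correct / items_answered
    standard_error = math.sqrt(p_hat * (1 - p_hat) / items_answered)
    return (p_hat - z * standard_error) >= cut_score

# Two hypothetical candidates on a 60-item exam sampled from the domain.
print(infers_mastery(items_answered=60, items_correct=52))  # ~87% correct -> True
print(infers_mastery(items_answered=60, items_correct=47))  # ~78% correct -> False
```

The point is not the specific statistic, but that the pass/fail decision rests on an inference about the whole domain from a sample of it, which is why no single item needs to be the definitive test of competence.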

Similarly, many people think a certification should be something that can be specifically taught. As such, they believe a training course should be all that is necessary to achieve a certification. However, this does not align with what we know about the development of human competence. There is a big difference between knowing something and knowing how to apply that knowledge competently. Certification is an assessment of performance, not knowledge; and, as such, it cannot be taught directly. If someone can take a class and immediately achieve certification, then either: A) the assessment does not evaluate actual performance; or B) the course simply teaches the answers to the questions on the exam, rather than the full domain of knowledge. In either case, you have biased the inferences made by the assessment. Competence begins with knowledge, but also requires experience and practice. It cannot be gained through a class, only through concerted effort; you cannot buy competence.

Finally, many people also believe that once a certification has been achieved, it shouldn’t ever need to be evaluated again; or that taking a course instead of an assessment should suffice. The former belief simply ignores the fact that performance capability is a perishable commodity: if you don’t use it, you lose it. The latter once again confuses knowledge with performance. How frequently re-evaluation needs to happen, or whether continuing education is sufficient to demonstrate continued performance, is entirely dependent on the knowledge domain the certification attempts to assess. In highly dynamic environments, it may need to happen much more frequently and rely more on assessments than in other domains; however, ongoing evidence of continued capability is a must if standards are to be maintained.

Leave Certification to the Professionals

The heart of the problem is that everyone seems to believe they are experts in the design of certification programs and assessments simply because they have participated in them.  The reality is that certification is a rigorous, research-based, and scientific endeavor.  The minimum requirement to be considered a psychometrician is a PhD; that’s a great deal of specialized knowledge most people do not have.  The decisions made are not arbitrary, nor are they made with the intention of anything other than maintaining the standard and making valid assessments of individuals according to that standard.

At the end of the day, the value of a certification is whether the people who achieve it can perform according to the standard the certification set forth. If the certification cannot guarantee that, then it is not valid and has no value. However, this requires people to actually understand what that standard is, what it means, and why it was created. It requires people to accept that there is a rigorous process behind all of the decisions and that those decisions all support validity. Finally, it requires people to understand that just because they may be experts in their field, they are not experts in certification.

 

 

 

 

 

Three Reasons Certification is Better than College.

Re-post from LinkedIn – April 8, 2016

Much of the national debate around the value of a college education seems to revolve around the cost of college, particularly around finding ways of making college more affordable and accessible to more people. This frame of reference assumes that a college education is the only means of post-secondary training and education that has value; this is simply not true. According to an analysis of employment during the Great Recession, 4 out of every 5 jobs lost were held by those without any formal education beyond high school; those without post-secondary training were more than three times as likely to lose their jobs as those with even “some college” (Carnevale, Jayasundera, & Cheah, 2012). This, by itself, suggests that even minimal post-secondary training can garner significant benefit; having a job is preferable to not having one. One way to accomplish this is through industry-based certifications (IBCs).

IBCs offer a number of advantages for improving an individual’s employability over simply making college more affordable and accessible. First, with regard to affordability and accessibility, IBCs offer a greater return on investment (ROI) than a traditional college education. In part because of their far lower time and money commitment, IBCs also provide a more flexible solution, either to replace college or to aid in preparation for later college education. Finally, the dynamic nature and industry relevancy of IBCs provide stronger signals to employers concerning an individual’s ability to actually do the job they need today, rather than months or years down the road. The strong case for IBCs begins with simple economics.

Direct ROI

Given the focus on the ROI of post-secondary education, a comparison of the ROI of certification versus a Bachelor’s degree seems relevant. According to the U.S. Census Bureau (Ewert & Kominski, 2014), with the exception of a Master’s degree, there is an earnings premium for achieving certification or licensure regardless of education level. This premium predominantly benefits those with less post-secondary educational investment (see Figure 1, Certification Earnings Premium). While it is true the earnings premium for having a Bachelor’s degree is much greater (see Figure 2, Education Earnings Premium), this is not a measure of return on investment. Return on investment is a measure of what you get for what you put in; i.e., ROI is the amount you can expect to get back for every dollar spent. This is where the ROI of certification is substantially better.

Assuming a cost of $40,000 for a Bachelor’s degree (probably a low estimate) and a cost of $5,000 to achieve certification (probably a high estimate), the ROI of achieving certification for someone with only a high school education is 2.3 times that of achieving a Bachelor’s degree (see Figure 3, ROI of Certification). Furthermore, this is just a starting point, as it doesn’t account for differences in earnings while those pursuing a Bachelor’s degree remain in school, or the cost of interest on student loans for college tuition. The fact is, certification provides individuals an extremely efficient mechanism to improve their earnings potential and achieve the post-secondary credentials that improve their ability to get and keep a job, even during tough economic times. The fact that IBCs add value both to those without other post-secondary education and to those with it also demonstrates the greater flexibility of certifications.
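For readers who want to see the arithmetic behind an ROI comparison like this, here is a minimal sketch. Only the $40,000 and $5,000 cost assumptions come from the text; the annual earnings premiums below are placeholders chosen for illustration, not the Census Bureau figures shown in the figures.

```python
def roi(annual_premium: float, cost: float) -> float:
    """Annual earnings premium gained per dollar invested in the credential."""
    return annual_premium / cost

# Placeholder premiums (illustrative only); costs are the text's assumptions.
bachelors_roi = roi(annual_premium=12_000, cost=40_000)
certification_roi = roi(annual_premium=3_500, cost=5_000)

print(f"Bachelor's ROI:    {bachelors_roi:.2f} per dollar per year")
print(f"Certification ROI: {certification_roi:.2f} per dollar per year")
print(f"Certification vs. Bachelor's: {certification_roi / bachelors_roi:.1f}x")
```

With these placeholder numbers the ratio happens to land near the 2.3x cited above, but the real comparison depends on the measured premiums in Figures 1–3 and on the additional costs (forgone earnings, loan interest) noted in the paragraph.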

Flexibility

One of the challenges with simply making a college education more affordable (or free) is that cost is not the sole factor contributing to non-participation or non-completion. An analysis of college completion statistics showed a 7% difference in the rate of achieving a Bachelor’s degree between students who complete high school with a 3.0 GPA and those with a 3.5 GPA (Rose, 2013). Rose also reported that family and work responsibilities significantly affect the chances of completing a degree program. In other words, while the cost of a college education might inhibit individuals from starting a degree program, individual preparedness and the time commitment necessary to complete a degree are significant contributors to whether individuals ever actually graduate and garner the benefits. This is likely to be particularly true for low-income or disadvantaged students. IBCs provide more flexibility to address these challenges.

Firstly, IBCs have significantly lower time commitments associated with their completion, making it that much easier for students who must also maintain family and work obligations to complete the requirements for certification. Many IBCs do not even require formalized classes or specific training, allowing individuals to self-study as they are capable or as life permits. Finally, most IBCs award credentials, not based on having completed a regimented program of study, but upon the passing of competency-based exams. This means that students can take as much time, and as many attempts, as necessary without suffering negative consequences; it is not a one-time deal, thus providing greater chances of ultimate success. This applies to both those without any other post-secondary training as well as those with degrees who are simply looking for additional earnings potential.

Secondly, IBCs may provide students not ready for a formal degree program with the knowledge and skill to prepare them for a future degree. IBCs can provide students with exposure to a field of study without the time and financial commitment associated with a formal degree program, reducing the costs associated with choosing a career they ultimately find unsatisfying or unfulfilling. In addition, this knowledge and skill may give students the confidence and ability necessary to complete degrees they would otherwise have been unprepared for.

Not everyone has the time or capability to commit to formal degree programs. This has a much larger effect on educational outcomes than cost; simply reducing the cost or providing universal access does not address either of these challenges. IBCs fill a gap between the demands of formalized post-secondary training and the real-world needs of students just trying to stay ahead in a highly competitive marketplace while simultaneously making ends meet (Claman, 2012). In addition, IBCs are increasingly valuable to employers.

Stronger Employability Signals

“The value of paper degrees lies in a common agreement to accept them as a proxy for competence and status, and that agreement is less rock solid than the higher education establishment would like to believe” (Staton, 2014, para. 3).

Despite the nearly $800 billion spent each year in the United States on human capital development beyond primary and secondary education, nearly 70% of it takes place outside of four-year colleges and universities; of that, U.S. employers spend almost $460 billion on formal and informal employee training alone (Carnevale, Jayasundera, & Hanson, 2012). According to the Economist, only 39% of hiring managers feel college graduates are ready to be productive members of the workforce (“Higher education: Is college worth it?,” 2014). The Economist further points out that the skill gap between college-degreed applicants and the needs of employers has left 4 million jobs unfilled. It is no wonder employers are beginning to question whether degrees are appropriate proxies for real-world competence; some even see advanced degrees as a negative hiring signal, requiring more cost with little benefit (Staton, 2015).

“The world no longer cares about what you know; the world only cares about what you can do with what you know” (Tony Wagner, as quoted by Friedman, 2012, para. 11).

The hands-on, competency-based aspects of IBCs not only create value for individuals directly, but also indirectly, by providing stronger signals to employers about the actual competence of job candidates. The dynamic and flexible nature of IBCs makes them a better reflection of current industry standards and competence, even in rapidly changing industries (Carnevale, Jayasundera, & Hanson, 2012). Perhaps even more important, the standards and competency-based testing utilized in IBCs improve the ability to objectively compare applicants, something that has proven extremely unreliable for post-secondary metrics like GPA (Carnevale, Jayasundera, & Hanson, 2012; Swift, Moore, Sharek, & Gino, 2013). IBCs provide employers with highly credible evidence of an applicant’s ability to actually do something with their knowledge, not just their ability to know something.

Employers are increasingly embracing IBCs as a more reliable and valid indicator of candidate competence, while questioning the value of traditional post-secondary indicators (Carnevale & Hanson, 2015). Because IBCs are, by definition, industry-based, applicants holding IBCs are more likely to have relevant, up-to-date skills meeting national or international standards. IBCs are not only easier to evaluate, but also provide strong indicators that a prospective applicant will not need additional employer-based training before becoming productive. This is likely why even holders of advanced professional degrees are paid premiums for also having IBCs (Figure 1).

Conclusion

The debate about the current state of education in the United States is a worthwhile discussion, perhaps even a critical one in light of the challenges facing us. The problem is that a single means of post-secondary education (the four-year degree) dominates the debate, along with a singular focus on the cost of educating to that level. This debate fails to account for the many other factors affecting student outcomes, and for the actual needs of employers. The reality is that advanced economies are not dominated by high-volume, low-value production, but by low-volume, high-value production (Friedman, 2012), and the demand for “middle-education” jobs is growing and will continue to grow for many years (Carnevale, Jayasundera, & Hanson, 2012). Without addressing these realities, we are only perpetuating a divide between those with degrees and those without, while still failing to meet the needs of business. There will always be a need for formal degrees, but that does not make them the panacea for all people and all jobs.

At the end of the day, credentialing is an attractive option for anyone looking to improve their employment options. IBCs provide a greater ROI, in a shorter amount of time, than formal degrees. The flexibility and less structured design of IBCs make them easier to obtain, especially for students either unprepared for, or unable to commit to, formal programs. Furthermore, IBCs send strong signals to potential employers about an individual’s ability to contribute on day one of employment. In many cases, the ROI, the flexibility, and the strong employment signals attributed to IBCs may very well make them a better option than college; in other cases, IBCs may be an essential stepping-stone to that first degree by providing the skills, and the additional income, necessary to commit to obtaining a formal degree. AND, if you already have a bachelor’s degree, these same benefits await you compared to getting a graduate degree. Certification may very well be better than college for many.

NOTE: Anyone interested in exploring how competency-based credentialing is a critical component of the future of higher education should investigate WorkCred (http://www.workcred.org/), a non-profit organization working to elevate the visibility of credentialing as an essential ingredient in the future of human capital development in the 21st century. The author is not affiliated with WorkCred.

References

Carnevale, A. P., & Hanson, A. R. (2015). Learn & earn: Career pathways for youth in the 21st century. E-Journal of International and Comparative Labour Studies, 4(1). Retrieved from https://cew.georgetown.edu

Carnevale, A. P., Jayasundera, T., & Cheah, B. (2012). The college advantage: Weathering the economic storm. Retrieved from https://cew.georgetown.edu/

Carnevale, A. P., Jayasundera, T., & Hanson, A. R. (2012). Career and Technical Education: Five Ways that Pay. Retrieved from https://cew.georgetown.edu/

Claman, P. (2012). The skills gap that’s slowing down your career. Harvard Business Review. Retrieved from http://hbr.org

Ewert, S., & Kominski, R. (2014). Measuring alternative educational credentials: 2012. Retrieved from https://www.census.gov/

Friedman, T. L. (2012, November 17). If You’ve Got the Skills, She’s Got the Job. The New York Times. New York, NY. Retrieved from http://www.nytimes.com/

Higher education: Is college worth it? (2014, April). The Economist.

Rose, S. J. (2013). The Value of a college degree. Retrieved from http://cew.georgetown.edu/

Staton, M. (2014). The degree is doomed. Harvard Business Review. Retrieved from https://hbr.org/

Staton, M. (2015). When a fancy degree scares employers away. Harvard Business Review. Retrieved from http://hbr.org/

Swift, S. A., Moore, D. A., Sharek, Z. S., & Gino, F. (2013). Inflated applicants: Attribution errors in performance evaluation by professionals. PLoS One, 8(7). doi:http://dx.doi.org/10.1371/journal.pone.0069258