
How to Put the “Strategic” Back Into Strategy

Today, we’re going to dust off your high school algebra. In the 1996 Harvard Business Review article “What Is Strategy?”, business visionary Michael Porter presented the preeminent definition of strategy. In short, Porter defined two ways in which organizations compete: on operational efficiency and on strategy. Broadly defined, operational efficiency is doing the same things everyone else does, only doing them better. It is the basis for many business practices around process improvement and total quality, as well as for the implementation of business software to optimize and improve the efficiency of the organization. Conversely, strategy is the intentional decision to do things differently, or to do different things, to create value.

If we want to be strategically competitive, we need to do something different from everyone else. Doing something different means being innovative. Here’s where your algebra prowess comes into play: if Strategy = Different, and Different = Innovation, then by the transitive property, Strategy = Innovation. See? Algebra can be useful!

Why is this equation important? There are any number of reasons, but the most significant is how it relates to what most businesses call their “strategy”. By this definition, most organizations have an operational plan rather than a strategy; i.e., their “strategy” is not strategic.

So, how can you put strategy back into your plan? First, you have to have a plan, so let’s start there.

Getting Started

Before we get too far down the road, we need to understand that building a strategic plan begins with important precursors. First, you must have a mission and a vision of an end state, which includes both a destination and a reason for going there. Second, you must have some values, or guidelines, that help you choose how you are going to make that journey and the path to getting there. These not only serve as the initial vector of your journey; they also start to limit and influence the decisions you make. You don’t need to consider anything that doesn’t take you towards your goal, and the “why” will also help you make choices about which paths should and shouldn’t be followed.

When I mentioned “mission and vision”, you probably had an immediate reaction and thought of the many organizational “mission” and “vision” statements you’ve either been part of developing or had shoved down your throat from on high. Frankly, I can’t blame you. I have a particular aversion to mission statements because, at least in private-sector, for-profit companies, they are largely a bunch of hogwash. The mission is to make money for the organization, its shareholders, and its employees; there’s nothing bad or wrong about that, but anything beyond it is disingenuous at best. In this case, however, I want you to think of mission and vision simply as “where are we going” and “why are we going there”.

Let’s imagine you are planning a trip to, say, California because you want to see all the movie stars. You now have a mission and vision; you know where you want to go and why you want to go there. You no longer need to consider anything that takes you to Denver. In addition, if your “why” is to see movie stars, your plan doesn’t need to include anything about seeing the redwoods or going to Disneyland. Your goal is constrained to a specific destination with a specific intent.

For business planning, this is more likely going to be things like “expand into new markets to increase our addressable market”, or “drive sales of product x to achieve 50% of total revenues”. Ultimately, these boil down to “make more money so our shareholders are happy”, but you need something a bit more specific to build a plan. They don’t have to be grandiose, but they do need to have specific targets and a rationale for why you want to achieve them.

Now, let’s talk about values. Again, you probably have a really bad taste in your mouth and instantly thought of things like “honesty”, “integrity”, and how they contribute to those meaningless mission and vision statements. Those are certainly what most people consider “values”; however, in relation to strategy, it might be easier to consider values as “ways of working” or “guidelines” to how you want to achieve your mission and vision. They help define what you will and will not consider as part of your plan.

Returning to our trip to California, we could imagine that our values might include “don’t like to fly”, “only want to stay at hotels less than $100/night”, or any number of parameters that will constrain our choices. If you don’t like to fly, then your plan will likely include buses, trains, or simply driving. If you only want to stay at cheaper hotels, you may have to find more remote lodgings on secondary roads, affecting both your travel time and your access to other services.

For business, yes, things like “honesty” and “integrity” are certainly values that contribute to ways of working and would limit your choices; however, so are things like “sustainability”, or “commitment to Open-Source software”, or any number of things that further provide guideposts to the choices you make. Not stating these values can not only lead to a lot of churn and difficulty, it can also lead to decision making that hurts the organization. Be clear about the values you will not sacrifice in your plan. You may even want to stack-rank these values based on your commitment to them as it may help later when we get to strategy.

We now know where we are going, why we want to go there, and have some basic constraints on what we are willing (and not willing) to do to get there. The next step is to collect all the relevant data you need to construct a plan.

Collecting Your Data

Now that you know where you want to go, why you want to go there, and know what things you will and will not do to get there, you must collect your data. You must estimate what you think it will take to get to your destination, inventory what you have to get you there, and figure out what you may need to acquire.

For our trip to California, this part of the planning will involve budgeting, determining if we have a vehicle we trust to drive, estimating how much time the trip will take, and how much time we have to make it. What would it cost to drive, versus take the bus, or fly? How much time does each of those require? Will you need hotels, food, etc. along the way and for how many days? How much vacation time do we have? Who else is going on the trip, and do they have any different values than you do that must be considered? Basically, you need all the facts to help determine the best choices based on your situation.

This is a little more complicated for business planning, but the idea is the same. In a business plan, your journey involves getting others to take the journey with you, so the first thing you must do is understand who your customers are (customer segmentation) and what makes each segment unique; i.e., market research about the others going on the journey with you. You should also look at what competitive organizations with similar visions are offering. Are you looking to expand into new markets? Should you build or buy? You should make an inventory of the competencies you currently have, as well as what you don’t have or need (in equipment, people, etc.). All of this is just data at this point: you are collecting the pieces that will be part of your plan and gathering the information necessary to make good decisions.

Building a Plan

Now that you know where you want to go and why you want to go there, have a list of value parameters, and have all of the data necessary to make your plan, you need to lay it all out.

For example, with our California trip, let’s say we’ve determined that our vehicle is not reliable enough and we must look at other forms of travel, like bus or train. Because buses only follow specific routes, and trains have even more limited routes, this decision greatly reduces our choices. If this limitation is too onerous, then we have to look at other alternatives, like renting a more reliable vehicle, which restores our route choices but likely at a higher cost than the other options. Typically, we will come up with multiple plans that have various time commitments, costs, and other advantages or disadvantages. Choosing any one plan over another comes down to which one makes the most sense for our situation.

Building an operational plan is similarly an iterative process, starting with the basic parameters (where are we going, why are we going there, and what constraints do our values apply to the trip). Then you have to model each plan, with each choice further refining and limiting your next choices. For me, this is one of the most exciting things about planning, as it is like a logic puzzle where you have to keep trying a piece at a time until you can “solve” it; unlike most puzzles, though, there may be numerous solutions. This is where balanced scorecards can help: you score each plan based on how well it achieves your goals in terms of time, cost, effect, etc.
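To make the scorecard step concrete, here is a minimal sketch in Python; the plans, criteria, weights, and scores are all hypothetical, and a real scorecard would use whatever criteria your goals and values dictate:

```python
# A minimal balanced-scorecard sketch: each plan is scored 1-10 against
# each criterion, and weights reflect how much each criterion matters
# (ideally echoing your stack-ranked values).
plans = {
    "drive own car":  {"cost": 8, "time": 5, "flexibility": 9, "risk": 3},
    "rent a car":     {"cost": 5, "time": 6, "flexibility": 9, "risk": 7},
    "take the train": {"cost": 6, "time": 4, "flexibility": 3, "risk": 8},
}
weights = {"cost": 0.3, "time": 0.2, "flexibility": 0.3, "risk": 0.2}

def scorecard(scores: dict[str, int]) -> float:
    """Weighted sum of criterion scores for one plan."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank the plans; near-ties are a cue to keep multiple options alive.
for name, scores in sorted(plans.items(), key=lambda kv: -scorecard(kv[1])):
    print(f"{name}: {scorecard(scores):.1f}")
```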

One challenge when going through this process is the tendency for many organizations to plan themselves into a corner. There is an old military adage that “no plan survives contact with the enemy”. Having multiple potential solutions is a bonus because your plan should also account for potential risks and risk mitigation. A friend of mine rented a car for a long-distance trip because they didn’t trust their own, only to have the rental car break down during the trip. Having multiple options is a valid means of managing risk. So, when you decide on your plan, don’t throw out the other plans, as much of that work could be useful down the road. Another point here is that a plan shouldn’t be so specific as to further limit how you achieve your goal. There are significant differences between saying you are going to get to California “by public transit”, “by bus”, and “by Greyhound”; each level of clarification limits your flexibility and your options. Your plan should be more about direction and guidance, and less about the specific actions you will take.

Taking all this into account, most organizations make a decision about which plan will be their “strategy” and they celebrate a job well done. But … is it?

Making your Plan Strategic

You’ve created multiple plans and have debated which one makes the most sense based on your needs, but is it a strategic plan, or is it simply an operational plan of action? Going back to our original equation, Strategy = Innovation: is your plan unique and differentiated from what other organizations or competitors might do? Is it innovative, or is it simply a plan for operational efficiency?

Going back to our California trip, let’s say that instead of just a trip to California to seek out movie stars, it is actually a competition: the winner is the one who meets the most movie stars within a given time frame. How is your plan different from all the other competitors’ plans? If it isn’t different, then it isn’t strategic; it is simply an operational plan, and the only way to beat the others is to execute it faster and cheaper, i.e., through operational efficiency. If we look at the “rules” of the competition, they don’t mention anything about California, just the number of movie stars you can meet. Did you base your plan on a false premise? It may make sense, but it would also make sense to all the other competitors. Perhaps there is someplace different you could go to meet movie stars; maybe you can find a way to get movie stars to come to you. Instead of traveling via traditional means, could you make the trip using unconventional means (like roller skates?) to create publicity about your mission and its goal, getting movie stars to support your cause and seek you out?

This is also a good place to look at the elements of discarded plans and scrutinize whether you artificially limited yourself by being too specific in your planning. Did your plan call for traveling on Greyhound buses because that was the only form of public transport you could think of? Are there, perhaps, less well-known or less conventional means of public transit that would prove more innovative in solving your challenge? Creating a strategy will test all of your plan, right down to the original mission and vision.

The same holds true for a business plan. Far too often, a company’s “strategic” plan is to do exactly the same thing as everyone else, and its competitive analysis is solely an examination of what others are having success doing. Adopting industry best practices is certainly a plan, but it isn’t innovative; it isn’t differentiating; and it isn’t strategy. It is the highest expression of operational efficiency. To make it strategic, your plan must be different from everyone else’s in a fundamental way. That requires a full examination of the entire plan, including the original mission/vision, the values you believed constrained your decisions, and the actions you plan to undertake. Were your mission/vision and rationale innovative? Were your values truly things you believed in, or were they artifacts of the status quo? When you examined the possible paths to achieving your goal, did you just look at the “normal” options (cars, buses, trains), or did you explore the unexpected or unusual? Did you overplan and try to dictate specific actions, artificially limiting your options? If you can’t answer these questions, or your answers default to the status quo, then your plan is unlikely to be strategic.

If you want your plan to be strategic, you have to put in the extra, final work. If you want to win, be it in a hypothetical contest to meet movie stars or in the real world of business, you have to be strategic; and to be truly strategic, you need an innovative approach.

Final Thoughts

Still don’t think algebra is fun or useful? Try this: if Strategy = Innovation, then Innovation = Strategy (the symmetric property). That is to say, there is a simpler way to be more strategic: be innovative. Don’t get me wrong, I absolutely love a good strategy planning workshop, balanced scorecard, SWOT and PEST analysis, etc.; however, I’ve found that it is frequently much easier to do less planning than more. You still need to know where you are going and why. You still need some basic ways of working (values) to guide you, and (for business) you should probably have a decent idea of who your customers are (or could be) and what they value. Anything beyond that tends to take a lot more time and create a lot less value.

Instead, once you have the basics down, I focus on creating innovation. I ask my teams to treat everything as an experiment. Everyone comes up with ideas they believe will get us closer to our goal and proposes an action to pursue. Each proposal must name a specific action, what the action is expected to accomplish, and a means of measuring whether it was accomplished. We evaluate the proposals, implement as many as we can, and measure their success (or, often, failure). Things that fail (or can’t be fully determined) we stop doing; things that succeed, we do more of. While not all of these ideas are necessarily truly innovative, the approach keeps us rejecting the status quo and constantly evolving. By focusing on being effective innovators, we inherently become strategic.
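A minimal sketch of what such an experiment record might look like, with entirely hypothetical proposals, metrics, and thresholds:

```python
# Each proposal names a specific action, an expectation, and a metric;
# the sketch assumes higher metric values are better. Python 3.10+.
from dataclasses import dataclass

@dataclass
class Experiment:
    action: str                   # the specific action to take
    expectation: str              # what the action should accomplish
    metric: str                   # how success will be measured
    target: float                 # threshold the metric must reach
    result: float | None = None   # measured outcome, once known

    def verdict(self) -> str:
        if self.result is None:   # can't be fully determined
            return "inconclusive: stop doing it"
        if self.result >= self.target:
            return "succeeded: do more"
        return "failed: stop doing it"

backlog = [
    Experiment("offer live chat on pricing page", "raise trial signups",
               "signups/week", target=120.0, result=132.0),
    Experiment("run a weekly demo webinar", "raise qualified leads",
               "qualified leads/week", target=25.0),
]

for e in backlog:
    print(f"{e.action} -> {e.verdict()}")
```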

The Versatilist on Dunning-Kruger

You may not be familiar with the “Dunning-Kruger Effect”; or, you may have only heard the colloquial explanation that “stupid people are too stupid to know they’re stupid”, most humorously explained by Monty Python alum John Cleese here.

In reality, if you read the Wikipedia entry or the actual research, what Dunning and Kruger discovered is that human beings have the tendency to rate their ability, in almost anything, to be at, or slightly above, the average of all people. This is not only an obvious impossibility, it also has some interesting ramifications.

The first, oft-repeated, implication is that those with the least capability tend to overestimate their capability the most. That is to say, if we assume 50% is the “average ability” across the population, those with 0% actual capability will overestimate their ability by 50% (or more), while those with 40% only overestimate by 10%. One explanation for this is that the skills necessary to evaluate capability are exactly the same skills necessary to have the capability; i.e., if you don’t know what you are doing, it is difficult to recognize that you, or someone else, is doing it wrong. My favorite example of this would be something like English grammar or punctuation: if you don’t have a firm grasp of it, it is impossible for you to evaluate how well you, or someone else, is performing. You must know in order to evaluate. This is where the “too stupid to know” comes from.

The second, much less discussed, implication is that those with the most capability tend to underestimate their knowledge and competence. Back to the 50% scale: if someone actually performs at the 80 or 90% level, they tend to severely underestimate their performance. This is frequently cited as a contributing factor to imposter syndrome, where those with superior capability don’t necessarily believe they are superior. I attribute this to the colloquial definition of an expert as someone who knows more and more about less and less (purportedly coined by one of the Mayo brothers of Mayo Clinic fame). An extension of this says that an expert is someone who knows more and more about less and less, until they know absolutely everything about nothing. While this was likely meant to be more humorous than anything, there is a certain meta, philosophical element to it: the process of discovering more and more about an ever-smaller area of expertise also has the tendency to make it obvious how little you really know about anything else. Experts, while becoming more knowledgeable about their area of expertise, become increasingly cognizant of how little they really know elsewhere.
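Both implications fall out of a single toy model. This is purely my own illustration, not Dunning and Kruger’s data; the 50% anchor and the small optimism bump are assumptions:

```python
# Toy model: everyone self-estimates near the population average (plus a
# little optimism), so self-assessment error shrinks as ability rises and
# flips sign for high performers.
ASSUMED_AVERAGE = 50.0  # hypothetical "average ability" on a 0-100 scale

def self_estimate(actual: float, optimism: float = 5.0) -> float:
    """Report roughly the average; actual ability is ignored by design."""
    return ASSUMED_AVERAGE + optimism

for actual in (0, 20, 40, 60, 80, 95):
    est = self_estimate(actual)
    print(f"actual {actual:3d} -> estimate {est:.0f} (error {est - actual:+.0f})")
# actual 0 overestimates by ~55; actual 95 underestimates by ~40.
```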

In either of these situations, overestimating or underestimating, the challenge is that self-reported capability is a very poor predictor of actual ability; and, if you really need an expert because you aren’t one, it is very unlikely you will be able to determine if someone else is one or not.

Hedging Your Bets

Why am I going on about the Dunning-Kruger effect? I point out this well-known characteristic because it touches on my area of expertise … determining the best way to assess expertise, particularly when it comes to augmenting your organization’s capabilities; i.e. this is something we need to think about when we hire people. We need to take this into account and develop strategies to “hedge our bets”.

While resumes are useful, we all know that just because you’ve done something in the past doesn’t mean you are actually any good at it; and resumes, although not necessarily outright false, are generally inflated. Some of this is smart marketing on behalf of the candidate, but some may very well be that the candidate actually believes they are more adept than they are. On the flip side, that expert you’re looking for may be a lot less comfortable touting expertise they don’t feel they actually have. Resumes and interviews are useful, but woefully inadequate and imprecise.

One way to address this is to ensure that the screening/interview process involves some kind of valid psychometric assessment of ability (like respected certifications and licensure) and/or the direct involvement of someone who you know has the appropriate skills to assess the candidate’s ability (if you can find one). You can’t rely on self-reported capability, and you can’t expect someone without a capability to evaluate a candidate’s capability … even in the screening process.

Another, perhaps easier, way to hedge your bets is to broaden your horizons. When we post job opportunities, we frequently overestimate the skills required, producing a “wish list” that values “specific” experience over diversity of experience (as I’ve discussed here: Would You Hire Me?). However, if we limit ourselves to one dimension, it can be hard to determine what a candidate’s true capabilities are. If, instead, we look for people who have been successful in, or demonstrate knowledge of, multiple domains, backed by work experience, we may get a better estimation of their knowledge of specific domains. That is to say, a Versatilist, with a broader set of knowledge in multiple domains, is more likely to underestimate their specific domain knowledge than to overestimate it. If this doesn’t cause you to overlook these candidates, the only downside is that you may get more capability than you expected, not less.

Don’t be too stupid to know you’re stupid

The Dunning-Kruger effect is just another factor hindering employers from finding the best people. We all think we are better at everything, including evaluating prospective employees, than we generally are; and the very people we want are likely to be overlooked because they undersell their capabilities. Using other valid, objective criteria like certifications certainly helps, and including experts, instead of AI engines and unqualified HR personnel, in the screening and interview process would also be beneficial.

For my money, until I find a way to fund continued research into better ways, I’ll continue to look for those Versatilists out there who have knowledge and experience, and likely undervalue their true capability.

Standardized Testing in Context of Diversity, Equity, and Inclusion: We need more, not less.

There was recently an article in the New York Times concerning the ongoing debate over standardized testing, specifically the use of SAT and ACT testing in the college admissions process. The use of these tests has been debated for years, but during the pandemic, when in-person testing became impossible, many educational systems decided to remove the requirement and have simply not reinstated it.

The point of the article was that, despite many concerns that the exams themselves are biased in any number of ways, the use of standardized test scores has actually increased the diversity of the student population (across all factors: race, sex, socioeconomic status, etc.) in the institutions requiring them, more than virtually any other admissions standard. In addition, the article points out that what many people see as bias in the tests themselves is likely misplaced: the tests accurately predict what they are intended to predict regardless of race and economics, namely, whether the student will do well in college.

Herein lies, perhaps, one of the most misunderstood aspects of standardized testing … such tests can only reliably predict what they are intended to predict and nothing else. As a practicing academic who spends much of his time working on standardized testing programs in the technology industry, I am constantly confronted with these misconceptions.

What are Standardized Tests?

The first thing to understand is what, exactly, standardized testing is. In short, standardized tests are specifically built to predict some aspect of the individual taking the assessment. In the case of the ACT and SAT exams, they are designed to predict how well the individual will do in the university setting, and nothing more. In addition, by “predict”, I mean that they make a statistical inference, not an absolute determination, as they are based on statistical science, which describes a group, not any one individual. They do not specifically measure real-world capability. They do not measure overall intelligence. They only measure and predict what they are designed to.

Two key aspects of this are “validity” and “reliability”. Validity is a measure of how well an assessment does what it says it does. Does a high score on the exam actually predict what was intended? Or, more succinctly, “are we measuring what we said we were?” Reliability is a measure of whether the same individual, taking the same assessment, consistently scores the same without any other changes (like preparation, training, etc.); i.e., does the test make the same prediction every time it is used, absent other factors affecting the results?
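For those who like to see the two concepts in operation, here is a minimal sketch with made-up scores; the data and the bare Pearson correlation are illustrative only, not how a real psychometric validation study is run:

```python
# Reliability: do repeated attempts by the same people agree?
# Validity: do scores predict the outcome the test claims to predict?
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical scores for six test-takers.
attempt_1 = [1180, 1320, 990, 1450, 1100, 1260]
attempt_2 = [1200, 1300, 1010, 1430, 1090, 1280]  # same people, retested
college_gpa = [3.1, 3.5, 2.6, 3.8, 2.9, 3.4]      # later outcome

reliability = correlation(attempt_1, attempt_2)   # test-retest agreement
validity = correlation(attempt_1, college_gpa)    # predictive validity

print(f"reliability (test-retest r): {reliability:.2f}")
print(f"validity (score vs. GPA r):  {validity:.2f}")
```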

Despite what critics say, the SAT and ACT exams have both been proven to be valid predictors of what they measure, with high reliability. My score will accurately (within statistical deviations) predict my ability to be successful in college, and my score will be fairly consistent across multiple attempts unless I do something to change my innate ability. As the NYT article points out, this remains true: the higher you score on these exams, the better your academic results in post-secondary institutions. The fact that there is a significant discrepancy in scores based on race, socioeconomic situation, or any other factor is, frankly, irrelevant to the validity and reliability of the exam. Using the results of these exams in any context other than the one they were designed for is an invalid use.

The Legacy of Mistrust

These basic misunderstandings of standardized testing breed mistrust and suspicion of what the tests do and how they are used. This is nothing new and likely stems from the development and use of assessments in the past. The original intelligence quotient (IQ) test, developed around the turn of the 20th century, is subject to the same issues, including suggestions of racial and socioeconomic bias. In part, this is because the IQ test is not actually a valid predictor of intelligence or the ability to perform successfully; like the SAT and ACT exams, research has shown it is a predictor of success in primary and secondary educational environments. Unfortunately, this was not fully understood when the assessment was created, and IQ has been misused in ways that have actually contributed to societal bias. This is the legacy that still follows standardized testing.

It is bad design, the misuse of standardized testing results, and the misinterpretation of those results that cause such spirited debate. The original IQ test purported to determine innate intelligence, but was actually a predictor of primary/secondary educational success. Furthermore, research suggests that IQ is a poor predictor of virtually anything else, including an individual’s ability to succeed in life. This is a validity issue, meaning the test did not measure what it purported to measure. Due to that validity issue, IQ testing was then misused to further propagate racial and socioeconomic inequity, by suggesting that different races or classes were just “less intelligent” than others, prompting stereotypes and prejudice that simply weren’t founded.

Given this legacy, it is easy to understand why many mistrust standardized tests and believe they are the problem, rather than a symptom of a larger problem.

The Real Issue is NOT Standardized Testing

The conversation around standardized testing has suggested the reason for racial and socioeconomic disparity is bias within the testing. However, if we can accept that standardized tests (at least ones that are well designed, with validity and reliability) simply make a prediction, and that the SAT and ACT, in particular, make accurate predictions of a student’s ability to succeed in post-secondary education, then the question becomes: why is there a significant disparity in results based on race and socioeconomic background? Similarly, why did the original IQ testing accurately predict primary/secondary educational outcomes, yet suffer from the same disparity? The real question is: why can’t students from diverse backgrounds equally succeed in our education system?

The answer is rather simple, and the voluminous SAT and ACT data clearly indicate it: there is racial and socioeconomic disparity built into the educational systems. This is a clear issue of systemic bias; your chances of success within the system are greatly affected by race and socioeconomic background. Either what we are teaching, or how we are evaluating performance, is not equitable for all students. This is the issue about which we should be having conversations, conducting research, and taking action. Continuing the debate, or simply eliminating standardized testing, is not going to affect the bigger issue. If anything, eliminating SAT and ACT testing will help hide the issue, because we will no longer have such clear, documented evidence of the disparity. I don’t want to start any conspiracy theories, but maybe this is one reason so few educational systems are willing to reinstate ACT and SAT testing as part of their admissions requirements, especially when the research suggests they are better criteria for improving diversity than other existing means. They may be imperfect, but it is not the assessment’s fault; it is the system’s fault.

How to Improve? 

First, I want to be clear: I don’t have any specific, research-based solutions. So, before I offer any suggestions based on my years of being in the educational system as a student, my years of raising children going through the educational system, and over a decade working with standardized test design and delivery, I want to emphasize that the best thing we can do to improve is simply to change the conversation away from the standardized tests and focus on the educational system itself. We need research to determine where the issue actually exists; is it what we teach, or how we measure performance?  That MUST be the first step.

That being said, when it comes to “how we measure performance”, based on my background, education, and experience, I’m going to make a radical suggestion: more standardized testing. I know, I know. Our students are already inundated with standardized testing, but hear me out. While standardized tests are frequently used in our education system, they are rarely used to measure an individual student’s performance when it comes to grades (the ultimate indicator of success within the system); they are instead used to assess the overall school’s performance. My suggestion is that these standardized tests may be a more equitable way to evaluate performance for the individual as well.

From an equity standpoint, while there are some proven correlations between individual scores on the US National Assessment of Educational Progress (NAEP) and those individuals’ ACT/SAT scores, the correlations were not perfect. In addition, the correlations were weaker across racial/ethnic minorities and low-income students. NAEP scores have also shown positive correlation with post-secondary outcomes, although they were not the only factor. Finally, since the NAEP assessment began in 1990, the disparity in scores across racial and socioeconomic lines has significantly diminished. This suggests the NAEP assessment may actually be better at measuring a student’s capability, rather than just predicting post-secondary success, while still retaining some predictive power. Yet NAEP assessments are not used in any way to actually grade the student’s performance. At the very least, NAEP results may be a viable way to augment current admissions criteria and similarly reduce the racial and socioeconomic disparity. They may also be a better way to measure “success” in the primary/secondary educational system than current methods, leading me to my next point.

The reality is that well-constructed, standardized assessments with proven validity and reliability are NOT how most of our students are evaluated today. Across the primary, secondary, post-secondary, and graduate levels, our students are routinely evaluated with individual, teacher-developed assessments and/or subjective performance criteria. Those teachers are inadequately trained in how to design, build, and validate psychometrically sound assessments (with validity and reliability); and, as such, the instruments used to gauge student performance routinely do not measure that performance. Without properly constructed assessments, our students are more likely to be measured on their English proficiency, cultural background, or simply whether they can decipher what the instructor was trying to say, rather than on the knowledge they have of the topic. Subjective evaluations (like those used for essay responses) are routinely shown to be biased and rarely give credence to novel or innovative thought; even professional evaluators trained to remove bias, like those used in college admissions, routinely make systematic errors in evaluation. Subjective assessments, in my personal and professional opinion, are fraught with inequity and bias that cannot be effectively eliminated. Furthermore, I can personally report that educational systems do not care, if the reaction to my numerous criticisms is any indication. Standardized testing would address this issue and, as we’ve seen with the NAEP, likely do a much better job of making equitable and fair performance assessments across students.

On top of that, our students’ performance is also often judged on things like homework, attendance, and work products created in the process of learning, rather than on what they have learned or know. This misses the point, and likely exacerbates the disparity in “success” in our educational system. Single-parent and low-income homes, which also tend to be more racially segmented, can have dramatic effects on these types of assessments. First, you are outsourcing the learning to an environment you cannot control, where some students may gain experiential knowledge growth but others cannot; second, you compound that by further penalizing those who cannot with poor grades. While some students and parents (regardless of situation) may still engage in learning outside the classroom, making it mandatory and grading on it likely contributes to the disparity, giving those students in the best situations an unfair advantage. Finally, from my own research into the development of expertise, I know that not all students require as much experiential learning to master the knowledge. The development of expert knowledge is idiosyncratic; some require more while others require less. As such, we should not be measuring performance by how the knowledge is obtained, but by whether students have it or not.

I know the legacy of mistrust will make this a hard stance for people to support, but the use of standardized testing for assessing student performance would address a number of significant issues in current practices. It can be less biased, provide more consistent results across schools, and if used in place of subjective or other non-performance criteria, be a more accurate reflection of student capability.

Conclusion

Standardized testing, especially the behind-the-scenes work done to properly create the tests, is a mystery to most people. When you add the historical misuse and abuse of standardized testing, it is easy to see why many demonize the tests and question the results. The reality, though, is that well-constructed assessments, used properly, can not only help us uncover issues in society, but also help us address those issues. The data on SAT/ACT scores, both their ability to predict academic performance and the disparity in scores across racial and socioeconomic backgrounds, are a clear signal of the real problem: the racial and socioeconomic bias built into the education system. The education system’s definition of “success”, or how it is determined, is clearly biased. As such, we should not push to eliminate standardized testing; we should double down on it, improving how we define and measure success rather than continuing to do it the way we do today.

Would You Hire Me?

An increasingly common complaint in the business world today is the “tightness” of the talent pool. Organizations routinely report they simply cannot find job candidates with the appropriate skills to fill the jobs they have. Although I have no data to counter this assessment, I do suspect the way businesses select and hire employees is a more significant factor in their belief that “the right people aren’t out there”. Not only that, but these same biases are also likely a factor in the lack of diversity within many roles in corporate America. Let me explain . . .

Job Posting Bias

The most significant factor contributing to the belief that there is a lack of talent is the innate bias in the hiring process, starting with the job posting. By this, I mean organizations go to great lengths to detail and explicate the exact skills, knowledge, and experience required to even be considered for a position. However, this list of requirements can be flawed in a number of ways.

First, the list of skills believed necessary to fulfill the role is normally based not on explicit job-task analysis or empirical study, but on the beliefs and assumptions of those already in the job. Although these “experts” are likely to have valuable insights, they are biased in several ways. Foremost, experts are notoriously unable to accurately identify the characteristics necessary for their own success, let alone someone else’s. Expertise is derived through the application of tacit knowledge, which by its very definition cannot be articulated. Second, successful practitioners may be biased towards the skills that helped them be successful, while discounting skills and knowledge they do not have. Finally, this list of skills often becomes an ultimate “wish list” rather than a set of minimum requirements, presenting capabilities far beyond what many people would achieve in a lifetime. This has been shown to be a barrier for applicants, particularly for women and minorities, who are less cavalier in representing or exaggerating their past experience compared to white males. In other words, the jobs being posted are often skewed towards maintaining the status quo, leading to another job posting bias.

The second bias in job postings is that they are almost always created for the job as it has always been, rather than for the job the company really needs; they are based on the status quo rather than the future. This is particularly dangerous in a time when organizations are going through significant change, from organizational models all the way down to basic business models; and this doesn’t even begin to account for the significant changes in society or employment patterns. The skills and knowledge required to adjust to new ways of doing business, new social standards, and new technologies are unlikely to be the same as those needed previously. Even the velocity of change in today’s business environment necessitates new perspectives on organizational culture, risk tolerance, and business strategy. Hiring new talent based on the old models, rather than looking for the new skills and knowledge needed, not only limits the candidate pool (both in size and in diversity), it is also detrimental to the business in the long run.

From the get-go, the hiring process can be significantly biased, creating the illusion that candidates simply don’t exist when there are plenty of talented people who are simply never engaged.

The Hiring Bias

During the hiring process, these same biases continue to pervade. While many applicants have already been dissuaded from even applying, those who do apply are rejected either because they don’t meet the staid job requirements, or because of biases prevalent within the actual hiring process. These biases come from many angles, from misconceptions to the inability to accurately evaluate subjective experience.

One common bias is the persistent belief that “management” and “leadership” are not actual jobs, but personal capabilities or characteristics. A seasoned engineer with no management experience or education is often hired to manage other engineers because it is believed the engineering capability is more important than the management/leadership capability; yet it is also well known that expertise in one domain is not directly transferable to another. Being an expert engineer no more makes you a competent leader than being a competent leader makes you an expert engineer; they are two unique and differing roles. Yes, research does suggest that managers are more effective when they understand the work of the people they manage, but experienced and effective leaders are adept at overcoming these challenges and would likely be a better choice regardless of where their experience came from. These misconceptions, based on instinct rather than actual evidence, further constrain the perceived talent pool.

While expertise in one domain may not be directly transferable to another, unrelated domain, there can also be a bias towards believing that no experiential knowledge is transferable between domains. This often arises when organizations show significant indifference to skills and knowledge obtained outside their specific industry or context. For instance, research suggests great similarity in process between highly technical troubleshooting and retail customer support; while the two differ in context and discrete knowledge, the basic process of assessment, identification, and resolution of issues is nearly identical. Thus, while not completely transferable, the underlying capabilities are not without value. This hidden value is not only present in many basic capabilities, but can also be a great source of innovation, bringing new perspectives.

There are also subjective evaluation biases. A common one plays out in terms of organizational titles. Someone who currently has a title of “Sr. Director” is more likely to get an interview for a VP role than someone with simply a “Director” title, and certainly more likely than someone with a “Manager” title. While these roles are often viewed as quantitative rankings, organizational titles and promotion requirements are rarely equivalent across organizations, making them more subjective in nature. As a result, without context, organizational titles can be arbitrary and reflect less about the title holder’s actual capability than about the environment in which the title was achieved. This has been empirically demonstrated with GPA rankings, promotion decision-making, etc. Furthermore, since it has been shown that even expert evaluators are subject to these biases, it is unlikely that HR recruiters and/or hiring managers are any less susceptible to falling prey to them.

Again, not only do these biases create artificial limits on the “talent pool”, they perpetuate the existing biases of other organizations and their hiring, promotion, and recognition practices.

The Versatilist Perspective

As a practitioner of strategy and innovation, I would be reckless to suggest that continued investment in developing and growing the talent pool is without merit. Since knowledge and skill are the single greatest organizational asset, it is absolutely imperative we continue to find new ways to expand and develop our workforce in every possible way. At the same time, until the boardrooms, executive offices, management ranks, and rank-and-file positions in our organizations reflect the same diversity as the society from which we hire, it is also reasonable to suggest we may have a problem in our recruitment and hiring practices. Until every employee is utilized to their fullest potential, we cannot simply suggest there isn’t enough “talent” to go around.

Instead of evaluating prospective hires based on what required skills they might not possess, we should also evaluate them based on the skills they do possess that could bring great value to the organization.  Does the candidate who has spent years working in social services bring valuable perspective to a technology company?  What about the candidate who has worked in retail for 20 years; might they not have value to a digital transformation initiative?  Is it easier to teach basic finance skills and understanding or is it easier to teach leadership skills?  Perhaps a seasoned leader, without years of finance experience, could bring new ideas to a finance leadership role.  Perhaps someone who has spent multiple years in various roles like engineering, sales, marketing, and product management might be more valuable in a leadership role in any of those areas (or an entirely different one) than someone who has simply spent an entire career in one alone.

An Exercise

Let’s do an exercise; a thought experiment if you will.  Would you hire me?  What would you hire me for? I am not suggesting that I should be hired for any particular role, but the point of this exercise is to start looking beyond the way we do things today in an effort to find the hidden talent that might be sitting right in front of us.

Look at the various jobs available in your organization and ask yourself if you would reasonably consider me for those positions (feel free to check out my LinkedIn profile).  If not, I challenge you to ask yourself “why not?” Do I not have the specific experience required?  Do I not have the educational requirements? List specifically the reasons I would not be a good fit.

Then I challenge you to look beyond the specific accomplishments and roles, and try to imagine how any of these experiences could bring new insight and value to those positions. Would my technical background bring new insight to the role? Would social media, marketing, or product management experience be a boon to that job responsibility? What about leadership, personnel development, assessment, data science, strategic innovation? I challenge you to look for the reasons you would hire me in that role rather than the reasons you would not. Look for the positives instead of the negatives, and then compare the two lists.

Now, go back to those job postings and, whether you are the hiring manager or not, think about the people you work with, both internally as well as externally.  Think about the people you know socially.   If you are the hiring manager, go back to the prospective candidate resumes (all of them, not just the ones curated by recruiting) and look through them again. How many of those people have knowledge, skills, abilities, or experience that could bring interesting value to those jobs?  Does their potential value exceed the missing requirements?  Are you potentially missing great employees?

As long as we keep throwing away the stones that don’t have gold in them, we will never realize how many of them contained diamonds, or platinum (a precious metal that was once simply discarded).  Maybe you are looking for gold, but that doesn’t diminish the value of everything else; value that might exceed what you were looking for in the first place.  We don’t lack for talent or diversity in our candidates; we lack the ability to identify and see them for what they are.

 

Google Does Not Obviate “Knowing”

There is a strange notion making the rounds of social media in various forms, used to argue against traditional learning and assessment standards. This recurring theme suggests the ubiquitous ability to leverage Google search, Wikipedia, or other online resources to find answers obviates the need to learn anything for yourself; i.e., if we need to know something, we can just look it up in real time and don’t need to waste time learning it beforehand. This theme has come up in discussions of our educational curriculum, the supposed uselessness of standardized testing, and even employee assessment criteria.

Perhaps this is a special case of the Dunning-Kruger effect (Dunning, Johnson, Ehrlinger, & Kruger, 2003; Kruger & Dunning, 1999), but there are at least two clear reasons why access to knowledge is not equivalent to actually knowing it.  The first is a complete disconnect from the way human beings develop skill and competency.  The second is the assumption real-time knowledge, although ubiquitous, is accurate and will always be available.

Having Facts is Not “Knowing”

The most incongruous part of this idea is the assumption that knowledge is the result of just having a bunch of facts.  Thus, if you can just look up the facts, you have knowledge.  Unfortunately, unlike in the Matrix, human beings cannot simply download competence and expertise.

The study of experts and expert knowledge has well established that the difference between experts and novices is not in what they know (the facts), but in how they apply those facts: how each fact fits with other facts and other pieces of knowledge. Expertise is the result of a process of integrating facts, context, and experience and defining ever more refined and efficient mental models (Ericsson, 2006). Learning something, and becoming good at it, is a process of building mental models on top of the foundation of rote facts. This cannot be done without internalizing those facts.

In addition, returning to Dunning-Kruger, without building competence, individuals are incapable of discerning the veracity of individual facts. Our ability to understand whether information is accurate, or of any substance, results from being able to reconcile new information with our existing mental models and knowledge. Those with less competence are the least able to evaluate this information, making them the most susceptible not only to accepting incorrect information as fact, but also to developing mental models that incorrectly reflect reality.

Limits of Ubiquitous Knowledge Access

Although those of us living in developed economies take ubiquitous access to knowledge for granted, this is not the case for all human beings, nor is it guaranteed to always exist. It is estimated that only about 50% of the world’s population is connected to the Internet, over two-thirds of whom are in developed economies. Even these figures bear further investigation, as those in developing countries with Internet access are far more likely to be connected by slower, less reliable means, keeping their access from being truly ubiquitous. Furthermore, while China contributes significantly to the world’s total Internet users, the Chinese government does not allow full, unrestricted access to the knowledge available via the Internet. This leaves the number of people with true, ubiquitous access well below 50% of the population.

Even for those of us fortunate enough to have nearly ubiquitous access to an unrestricted Internet of knowledge, access is fragile.  Power outages as a result of simple failure, natural events, or even direct malice, can immediately render information inaccessible.  Emergency situations where survival might rely on knowledge also often exist outside the bounds of this seemingly ubiquitous access. Without a charge, or cellular connection, many find themselves ill-equipped to manage.

Dumbing Down our Society

The idea that access to knowledge is the same as having knowledge portends a loss of intellectual capital. Whereas societies in the past maintained control by limiting access to information, we are creating a future where control is maintained by delegitimizing and devaluing the accumulation of knowledge despite full access to information. We are positioning society to fail because people will not only have become dependent on being spoon-fed information instead of actually learning, but will also have lost the ability to differentiate fact from fiction.

Although it would be nice to assume this is merely a dystopian view of the future, we are already seeing the effects. As social media increasingly becomes the lens through which our society views the world, we can already see how ubiquitous access to information is affecting our perceptions. Critical thinking, something developed only through the accumulation of knowledge and experience, is what lets us evaluate the real-time information we receive; without it, our society is being manipulated into perspectives not of our own choosing, but of the choosing of others. We are losing the ability to process the information we receive and find ourselves increasingly caught in echo chambers that present only information supporting potentially incorrect world-views.

The Internet was never intended to be a replacement for independent knowledge. It was developed to expand our ability to access information in the pursuit of developing knowledge and capability. The idea that access to knowledge equates to having knowledge is not only built on shaky foundations, lacking any kind of empirical basis; it also undermines the actual development of knowledge.

Resources

Dunning, D., Johnson, K., Ehrlinger, J., & Kruger, J. (2003). Why people fail to recognize their own incompetence. Current Directions in Psychological Science, 12(3), 83–87. http://doi.org/10.1111/1467-8721.01235

Ericsson, K. A. (2006). An introduction to Cambridge handbook of expertise and expert performance: Its development, organization, and content. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Kruger, J., & Dunning, D. (1999). Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6), 1121–1134. http://doi.org/10.1037/0022-3514.77.6.1121

Ecosystems Thinking for Social Change

Ecosystem, or “systems”, thinking is not necessarily about ecology; it uses an ecological metaphor to explore the interconnectedness of the various parts of any system (Mars, Bronstein, & Lusch, 2014). It is a critical skill for business organizations, aiding both strategy and innovation. It is also an area where versatilists often shine, because versatilists are uniquely adept at taking deep knowledge from one system and applying it to their understanding of new systems, often leading to unique insights. However, that is not what this blog is about. This is about how a lack of systems thinking is trapping our society into facing the same issues over and over. This is about how electing people who comprehend systems thinking might be a better means of bringing about social change.

The Heart of Systems Thinking

At the heart of systems thinking is the recognition that no problem, no solution, no individual exists in a vacuum. The whole world is a set of interrelated systems that influence and affect one another; changes in one system ripple throughout our entire society. Systems thinking involves attempting to understand and evaluate any problem or solution within the context of the bigger picture.

For instance, take constraint theory (Tulasi & Rao, 2012). Constraint theory suggests that any system or process is constrained by the least capable or least efficient step in the system; this is often equated with the “weakest link” idea that a chain is only as strong as its weakest link. The insight of constraint theory, however, is that if you fortify the weakest link (solving that problem), you have simultaneously created a new “weakest link” (formerly, the next-to-weakest link). In addition, the newer, stronger link may have other unintended consequences (maybe by making it stronger, you have also made it bigger, which affects some other function). In essence, the process of creating a stronger chain is a never-ending task, as each solution has ramifications.
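The dynamic is easy to see in a few lines of code. This is a minimal sketch with hypothetical link strengths; real constraints, of course, interact in ways a toy loop cannot capture:

```python
# Strengthening the current constraint simply promotes a new one;
# the loop never runs out of weakest links.
chain = {"supplier": 4, "factory": 7, "shipping": 5, "sales": 9}

for round_number in range(1, 4):
    weakest = min(chain, key=chain.get)  # the current constraint
    print(f"round {round_number}: weakest link is {weakest} ({chain[weakest]})")
    chain[weakest] += 3                  # "fortify" it...
    # ...which may also carry side effects elsewhere in the system
    # (not modeled here), and a new weakest link takes its place.
```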

In systems thinking, you must evaluate how a solution to one problem may create a new problem or change the dynamics of another system.  This potential new problem must also be evaluated to determine if it is a bigger problem than the one you are attempting to solve, or makes the solution you have proposed untenable.  Problem solving, like creating a stronger chain, is a never-ending process.  However, the intended result is improving the overall whole, ensuring one solution doesn’t create a bigger problem somewhere else.

Unfortunately, without systems thinking, we are failing to create an overall better society and are remaining mostly stationary. Solutions examined and evaluated in a vacuum create ripples that, instead of moving us forward, keep us where we are.

Examples of Non-Systems Thinking Challenges

The worst failure to apply systems thinking to the problems of our society comes when the same group of people argues for two independent solutions that are counterproductive in combination; i.e., when one solution aggravates the very problem the other is trying to solve, and vice versa.  It is important to understand that, in and of themselves, the proposed solutions may be perfectly good; it is only when you combine their system effects that the issues become apparent.  It is also important to note that this is not an analysis of the merits of any particular solution or point of view.  There is no intent to endorse or oppose any of the individual solutions, only to illustrate their systems effects.

Immigration Reform and Free Trade Agreements

By itself, building a border wall, while economically questionable, is a perfectly legitimate solution for preventing illegal immigration across the southern US border.  It is not the only solution, of course, but it is a possible one.  We can debate it one way or the other, but even opponents must admit it is a solution, whether or not they agree it is the right one.

Similarly, eliminating or significantly reducing free trade, particularly with low-cost-labor countries like Mexico, is a legitimate solution for reducing US job loss through offshore outsourcing and for keeping US investment at home.  Again, it is not the only solution, but it is certainly one way to address the issue.  We can once again debate it, but it must be accepted as a solution.

When put together, however, these solutions are counterproductive.  Eliminating Mexico’s ability to continue developing and building its economic capability (an ability gained through easy access to US markets and US investment) will likely cause its standard of living to decline.  A decline in standard of living (the loss of jobs and the income from them) perpetuates the growth of illicit enterprises (e.g., drugs) and makes illegally emigrating to the US more attractive.  From a systems perspective, then, reversing free trade agreements will likely compound the problems of illegal immigration, drug smuggling, and related issues, placing even bigger demands on border protection and immigration control.  These are misaligned solutions from a systems perspective.

Welfare Reform and Birth Control

Another counterproductive combination is arguing for reducing the US welfare system while simultaneously arguing for eliminating birth control options, including sex education and access to safe abortions.  Again, each of these arguments is perfectly reasonable on its own; without applying personal judgments, each can be understood and respected.  From a systems perspective, however, these are not isolated issues or goals; they have complex interactions that make arguing for both far less reasonable.  The only logical result of limiting sex education and access to birth control is an increase in the number of women and children within the welfare system.  It is illogical to argue for both actions, even if either one in isolation can be recognized as reasonable.

Cyber Security and Encryption Strength

Just this last week, two articles were published.  The first detailed how Russian hackers have been targeting the personal (non-government) cell phones of NATO soldiers to track, intimidate, and spy on them.  The second detailed the Department of Justice (DoJ) pushing US technology firms to make it easier for law enforcement to access the encrypted (personal) devices of accused criminals.  Arguing for improved safety and security of our personal information, particularly through strong encryption, is a reasonable response to rampant cybercrime.  It is also reasonable to argue that law enforcement should be able to access the information it needs to convict criminals.  Unfortunately, you cannot reasonably argue for both, because one comes at the cost of the other.  Arguing that we need to better protect our personal information from thieves while simultaneously arguing to hobble encryption for government access is to pursue mutually exclusive goals.

Understanding the Bigger Picture is Essential

Systems thinking requires us to look at proposed solutions and understand their ultimate effects.  It asks us to better understand how seemingly separate systems interact and how changes in one create ripple effects in others.  Besides allowing us to mediate between counterproductive arguments, systems thinking also provides an opportunity to discover new solutions.

By broadening our thinking, systems thinking allows us to uncover new solutions to old problems.  If we see how changes in one system ripple into others, we can harness those ripples for positive change in our society.  It asks us to look at why things happen, at root causes, rather than at the ramifications or symptoms of those problems.  It lets us explore how numerous problems in our society may be linked by the ripple effects of common issues we haven’t yet imagined.  For instance, could the rising cost of US health care (and its effect on the treatment of mental health issues) be a progenitor of rising violence and of the recruitment of disaffected youth by terrorist organizations?  Could the antiquated US tax system be a progenitor of immigration challenges, job loss through outsourcing, and widening income divisions?  Could US foreign policy be a bigger source of terrorist threats than religious extremism?  Systems thinking helps us see how solving one challenge may also have positive benefits for others.

Unfortunately, we do not look at problems as components in a unified system of systems; we tend to look at individual problems and argue solutions without thinking about the ramifications of those arguments.  We frequently miss the forest for the trees.  The effect is to leave us in a perpetual state of uncertainty, never moving society fully forward no matter how many problems we try to solve.  We never address the true source of a problem, only applying patches that don’t align and don’t solve the underlying issue.  We would all do better to take a more holistic view of the problems we face, rather than reactively addressing symptoms.

 

References

Mars, M., Bronstein, J., & Lusch, R. (2014). Organizations as ecosystems: Probing the value of a metaphor. Rotman Management, 73–77.

Tulasi, C. L., & Rao, A. R. (2012). Review on theory of constraints. International Journal of Advances in Engineering & Technology, 3(1), 334–344. http://doi.org/10.2307/25148735

 

Improving Multiple-Choice Assessments by Limiting Time

Standardized, multiple-choice assessments frequently come under fire for testing rote skills rather than practical, real-world application.  Although this is a gross overgeneralization that fails to account for the cognitive complexity items (questions) can be written to, standardized assessments are designed to evaluate what a person knows, not how well they can apply it.  If that were the end of the discussion, you could be forgiven for assuming standardized testing is poor at predicting real-world performance or at differentiating novices from more seasoned, experienced practitioners.  However, there is another component that, when added to standardized testing, can raise assessments to a higher level: time.  Time, or more precisely, control over the amount of time allowed for the exam, can be highly effective in differentiating competence from non-competence.

The Science Bit

Research in the field of expertise and expert performance suggests experts not only know more, they also know differently than non-experts; experts exhibit different mental models than novices (Feltovich, Prietula, & Ericsson, 2006).  Mental models represent how individuals organize and implement knowledge, rather than explicitly determining what that knowledge encompasses.  Novice practitioners start with mental models representing the most basic elements of the knowledge required within a domain, and those models gradually gain complexity and refinement as the novice gains practical experience applying them in real-world performance (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).

While Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, Feltovich et al. (2006) suggested these changes allow experts to process more information, faster, and with less cognitive effort, contributing to greater performance. Feltovich et al. noted this effect is one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medical applications.

For example, Chi et al. (1982) determined that novices and experts approach problem solving in advanced physics in significantly different ways, despite all subjects having the same knowledge necessary for the solution; novices focused on surface details, while experts approached problems from a deeper, theoretical perspective.  Chi et al. also demonstrated that the novices’ lack of experience and practical application contributed to errors in problem analysis that required more time and effort to overcome. While the base knowledge of experts and novices may not differ significantly, experts appear to approach problem solving from a differentiated perspective, allowing them more success in applying correct solutions the first time and faster recovery when initial solutions fail.

In that vein, Gogus (2013) demonstrated that expert models are highly interconnected and complex, reflecting how experience allows experts to apply greater amounts of knowledge in problem solving.  This ability to apply existing knowledge with greater efficiency augments the difference in problem-solving strategy demonstrated by Chi et al. (1982): whereas novices apply problem-solving approaches linearly, one at a time, experts evaluate multiple approaches simultaneously in determining the most appropriate course of action.

Achieving expertise is, therefore, not simply a matter of accumulating knowledge and skills, but a complex transformation of the way experts implement that knowledge and skill (Feltovich et al., 2006). This distinction provides a clue to better designing assessments that differentiate expert from novice: the time it takes to complete the assessment.

Cool Real-World Example Using Football (Sorry. Soccer)

In an interesting twist on typical mental model assessment studies, Lex, Essig, Knoblauch, and Schack (2015) asked novice and experienced soccer players to quickly and accurately decide the best choice of tactics (either “a” or “b”) given a video image of a simulated game situation.  Lex et al. used eye-tracking systems to measure how the participants reviewed the image, as well as measuring their accuracy and response time.  As one would expect, the more experienced players were both more accurate and quicker in their responses. Somewhat surprising was the reason experienced players performed faster.

While Lex et al. (2015) determined both sets of players fixated on individual pixels for nearly the same amount of time, experienced players had fewer fixations and observed fewer pixels overall.  Less experienced players needed to review more of the image before deciding, and were still more likely to make incorrect decisions.  More experienced players, although not perfect, made more accurate decisions based on less information.  The difference in performance was not attributable to differences in basic understanding of tactics or of playing soccer, but to the ability of experienced players to make better decisions with less information in less time.

The Takeaway

Multiple-choice, standardized assessments are principally designed to differentiate what people know, with limited ability to differentiate how well they can apply that knowledge in the real world.  Yet it is also well established that competent performers have numerous advantages leading to better performance in less time.  If time constraints are actively and responsibly constructed as an integral component of these assessments, they may well achieve better predictive performance; they could do a much better job of evaluating not just what someone knows, but how well they can apply it.
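As a thought experiment, here is a minimal sketch of how a time component might be folded into a multiple-choice score. The 90-second budget, the 0.5 floor for correct-but-slow answers, and the linear time credit are my own illustrative assumptions, not an established psychometric model.

```python
# Hypothetical speed-aware scoring for a multiple-choice exam. Each
# response records correctness and the seconds taken; correct answers
# earn more credit the further under the time budget they arrive.
# The budget and weights are invented assumptions for illustration.

def item_score(correct: bool, seconds: float, budget: float = 90.0) -> float:
    if not correct:
        return 0.0
    # Full point for an instant answer, fading linearly to a floor of
    # 0.5 at (or beyond) the time budget.
    time_credit = max(0.0, 1.0 - seconds / budget)
    return 0.5 + 0.5 * time_credit

# (correct?, seconds) per item -- toy data only
novice = [(True, 85), (False, 90), (True, 70)]
expert = [(True, 30), (True, 25), (True, 45)]

for label, responses in (("novice", novice), ("expert", expert)):
    total = sum(item_score(c, s) for c, s in responses)
    print(f"{label}: {total:.2f} out of {len(responses)}")
```

Under raw counts the two candidates look close (two correct versus three); with the time credit, the expert’s faster, fully correct responses separate the two far more cleanly.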

 

References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In Visual Information Processing (pp. 215–281). New York, NY: Academic Press, Inc. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale: Lawrence Erlbaum Associates.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In The Cambridge handbook of expertise and expert …. New York, NY: Cambridge University Press.

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit Knowledge: A Refinement and Empirical Test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive Representations and Cognitive Processing of Team-Specific Tactics in Soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478

Fear Not the AI Overlords! Humans Have Intrinsic Value.

There is significant hype about Artificial Intelligence (AI) and its potential to take over many jobs thought safe from automation.  It has been suggested AI could replace accountants, lawyers, doctors, and even general management.  While advances in AI will certainly change many jobs, as so often happens, the fear is exaggerated.  First, there is no evidence that automation has ever eliminated more jobs than it has created.  Second, and more importantly, humans have intrinsic value that is unlikely ever to be replicated or replaced.

The Fear of Losing Jobs

Before anyone gets too excited, a recent Wall Street Journal article highlights the facts of mass automation in the past.  Technology from the cotton gin through AI has always eliminated some jobs, but historically it has also created far more, and better paying, jobs as a result.  Sure, surrey drivers were put out of work by the advent of the automobile, but the auto industry created millions of jobs supporting US GDP for decades.  AI is simply the latest in a long line of technological advances feared to bring about the end of our society.  It has never happened before and is unlikely to happen anytime soon.  Some jobs may cease to exist, but their loss will be accompanied by a growth of new jobs supporting the AI industry.  Even more remarkable will be the new jobs that don’t even exist today.

A recent report from the Institute for the Future estimates 85% of the jobs today’s students will perform in 2030 haven’t been invented yet.  This is a difficult prospect for today’s workers to imagine, but it is not without precedent.  Students graduating high school in the 1990s could not have imagined careers in web design, social media, or, for that matter, artificial intelligence, machine learning, and big data.  Another recent article from MIT Sloan Management Review hints at some of the new jobs AI technology may create.

On top of all that, it is unlikely many of the jobs predicted to succumb to AI will actually go away.  It is much more likely they will be augmented and changed than disappear entirely.  The reason is simple: humans have innate value in performing jobs in a human society.

Humans Have Intrinsic Value

Although AI is redefining what is considered automation by allowing more variation in performance, it is still not human.  Human beings are defined by the irrational and emotional more than by cold, calculated precision.  While this may seem a negative aspect of humans, it is also the source of the innovation, creativity, and passion that simply cannot be replicated.  For the sake of argument, let’s examine just one of the jobs proposed for future replacement by AI: management.

Business management is an oft-misunderstood discipline, one that does not benefit from the HR moniker “people manager”.  You manage objects, but you lead people.  Objects are managed to gain efficiency, but they have finite limitations.  You cannot encourage a robot to be more productive.  You cannot ignite passion in your inventory-tracking software to go above and beyond.  Yet human beings have a nearly limitless capability to “reach for a goal”, “put in extra effort”, or “embrace shared visions”.  While this can also work to reduce human performance (as discussed in this article from MIT Sloan Management Review), it is a critical distinction when looking at the effects of AI in particular.

Management, in its truest sense, is absolutely ripe for AI replacement.  Eliminating the idiosyncrasies of human performance can have significant value to organizations.  AI is simply better able to gather, process, and act on vast amounts of data where human input is less vital (although not necessarily irrelevant).  By offloading these tedious and taxing responsibilities, while also improving how they are performed, humans can spend more time doing the things where they have intrinsic and irreplaceable value (see this article from Swiss Cognitive).

Leadership, on the other hand, will no longer need to take a backseat to management.  By focusing on leadership, organizations will gain not only the advantages of AI-based management efficiency, but also the benefits of stronger human performance.  In essence, organizational leaders will be able to offload the tasks they don’t do very well anyway and focus on the actions that lead to truly superior performance.

Fear Not!!

While the example above focuses on my area of expertise, the same can be said for many other jobs ripe for AI augmentation.  AI, like the cotton gin and the automobile before it, is a tool that will augment and improve the way we work.  Yes, some jobs may be significantly reduced or eliminated; however, they will be replaced by newer and better jobs.  The jobs augmented by AI will simply change, putting more focus on the human aspect.  It is not the end of the world.

 

 

The Effects of Positive Psychology on Organizational Success

Intent is a powerful mediator of outcomes.  An organization’s mission, vision, and values set the direction of future success simply by codifying the organization’s intent.  This intent flows through every aspect of the organization, affecting the choices people make and the outcomes of those actions.  (Read Simon Sinek’s “Start with Why”.)

For instance, suppose you start a company to develop a better mousetrap.  You might start this company simply because we all know that if you build a better mousetrap, the world will beat a path to your door and generate huge profits.  On the other hand, you might start your company because you have a passion for eradicating the scourge of mice and their negative effects on human society.   These are both valid rationales for starting a new company and developing a better mousetrap.  If you succeed in building a better mousetrap, both intents are likely to be successful, at least initially.  Intent, however, will begin to show over time.  The company whose intent is purely profit driven will make choices supporting that mission, while the company whose intent is based on a passion for improving the world through mouse eradication will make different choices supporting that mission.  These choices affect the long-term viability of the business.

Not only will these differences in intent affect pricing, marketing, customer experience, and other traditional business decisions, they will also affect the people in the organization.  A positive intent keeps employees engaged and helps them grow as people and as employees.  These are the effects of positive psychology, and they should not be discounted when an organization considers why it does what it does.

Understanding Positive Psychology

Positive psychology, in short, is simply a focus on understanding and investigating the positive capabilities and achievements of the individual, the community/organization, and society as a whole (Fredrickson, 2001; Quinn, Dutton, & Cameron, 2003; Seligman & Csikszentmihalyi, 2000; Sheldon & King, 2001).  Seligman and Csikszentmihalyi (2000) suggest that, prior to WWII, psychology was normally associated with three purposes: treating mental illness, encouraging the development and growth of all people, and identifying and developing exceptional capabilities. Since WWII, however, the focus of psychology has been almost solely on mental illness, deficit, and pathology; the other, more positive purposes of psychology withered. While positive psychology is not intended to supplant existing research and theories on mental illness (Sheldon & King, 2001), it suggests that the deficit models practiced for over half a century fail to adequately describe the realities of the population as a whole, and it serves as a reminder that psychology’s original tenets include the good, the exceptional, and the positive.

This positive approach has spawned a number of new approaches to understanding human and organizational development by examining the positive instead of the negative.  For instance, in human development, theories have evolved demonstrating that positive emotions or moments not only broaden an individual’s perception of the world, but also build capabilities and personal resources that help mitigate the effects of negative experiences (Fredrickson & Losada, 2005).  Additional research suggests this broadening is not constrained solely to mental perception, but extends to physiological perception as well, such as visual attention and field of view (Fredrickson, 2013). The resources accumulated from positive moments may be not only abstract resources related to resilience and adaptability, but also more discrete resources like attention to detail and capacity to learn.

In organizational development, positive psychological approaches have generated new ways of looking at the process of creating exceptional organizations: not by fixing what is wrong, but by amplifying what is right.  Appreciative Inquiry, for example, is a change model built on the assumption that every organization has an essentially positive capability to succeed.  By examining past moments of peak performance, achievement, and success, the organization can create a vision of the future based on those positive aspects (Quinn et al., 2003).

The common element in all of these ideas is that there is benefit in focusing on what is good instead of what is bad.  Focusing on the positive creates an upward spiral of reinforcement (Quinn et al., 2003) that is self-perpetuating under normal circumstances (Fredrickson, 2013).  The corollary is that a focus on the bad promotes a downward, equally self-perpetuating cycle.  Thus, attempting to make change by focusing on the negative, on what not to do, while one legitimate response to challenges, creates a self-defeating process promoting blame, low self-worth, and incompetence.  Positive psychology suggests that focusing on what to do, on what has been successful, is the better alternative, as it creates a self-fulfilling cycle promoting excellence, success, and achievement.
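The spiral metaphor is easy to make concrete with a toy feedback model. The 5% step size and the compounding rule below are invented purely to show the self-perpetuating dynamic; they are not drawn from the cited research.

```python
# Toy feedback model of the "spiral": personal resources compound a
# little with each positive-focus step and erode with each negative-
# focus step. The 5% rate is an invented illustration, not a finding.

def spiral(positive_focus: bool, steps: int = 10, resources: float = 1.0) -> float:
    rate = 0.05 if positive_focus else -0.05
    for _ in range(steps):
        resources *= (1.0 + rate)   # each step builds on the last
    return resources

print(f"after 10 positive-focus steps: {spiral(True):.2f}")   # ~1.63
print(f"after 10 negative-focus steps: {spiral(False):.2f}")  # ~0.60
```

The mechanism, not the numbers, is the point: the same compounding that lifts performance under a positive focus drags it down under a negative one.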

Bringing it Back to Intent

One could argue that a profit intent is not inherently bad; without making money, companies cannot sustain themselves.  Yet without a positive focus beyond profit, without an intent that can inspire and create positive feelings, the organization is likely to diminish in productivity, innovation, and, ultimately, profit.  The concepts of positive psychology push us to focus on the existence of the extraordinary and exceptional instead of simply on what is broken and dysfunctional (Seligman & Csikszentmihalyi, 2000).  They ask us to look at the positive potential of the future rather than examining the future merely as an attempt to overcome deficits.  They tell us that our intent has significant influence on our long-term outcomes.

References

Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden-and-build theory of positive emotions. American Psychologist, 56(3), 218–226.

Fredrickson, B. L. (2013). Updated thinking on positivity ratios. American Psychologist, 68(9), 814–822. http://doi.org/10.1037/a0033584

Fredrickson, B. L., & Losada, M. F. (2005). Positive affect and the complex dynamics of human flourishing. American Psychologist, 60(7), 678–686. http://doi.org/10.1037/0003-066X.60.7.678

Quinn, R. E., Dutton, J. E., & Cameron, K. S. (2003). Positive organizational scholarship: Foundations of a new discipline. San Francisco, CA: Berrett-Koehler.

Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology: An introduction. American Psychologist, 55(1), 5–14. http://doi.org/10.1037/0003-066X.55.1.5

Sheldon, K. M., & King, L. (2001). Why positive psychology is necessary. American Psychologist, 56(3), 216–217. http://doi.org/10.1037/0003-066X.56.3.216

The Versatilist Vs. the Peter Principle

It is surprising how few people are familiar with the Peter Principle.  This is most disturbing in business and organizational psychology, as the principle speaks directly to the source of innumerable challenges to organizational success.  It should be treated as a risk factor in talent management, succession planning, and organizational compensation systems.  Most of all, organizations should look for ways to circumvent the process it describes; most notably, organizations should consider how versatilists can thwart the Peter Principle.

The Peter Principle

“In a Hierarchy Every Employee Tends to Rise to His [Her] Level of Incompetence” (Peter & Hull, 1969, p. 25).

In action, the Peter Principle states a simple inevitability.  If you are good at your job, you get promoted.  If you are good at the new job, you get promoted again.  This continues until you take on a job for which you are not well suited; i.e., one at which you are incompetent.  Having reached your level of incompetence, you no longer get promoted, but remain in the job you are least capable of performing well.  Taken to its ultimate conclusion, organizations eventually become dominated by leaders with the least capability to do their jobs.
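The inevitability is easy to see in a toy simulation. The sketch below assumes competence at each newly assumed level is an independent coin flip; the hierarchy depth and probability are invented, but the independence assumption is precisely the Peter premise that skill at one level says little about the next.

```python
import random

# Toy Peter Principle simulation: everyone is hired into a job they can
# do, competence earns promotion, and competence at each new level is an
# independent draw. A career freezes at its first level of incompetence.
# LEVELS and P_COMPETENT are invented parameters for illustration.

random.seed(42)
LEVELS = 6          # depth of the hierarchy
P_COMPETENT = 0.5   # chance of being competent at any newly assumed level

def final_level() -> int:
    level = 0
    while level < LEVELS - 1:
        level += 1                          # competence earned a promotion...
        if random.random() >= P_COMPETENT:  # ...but the new job may not suit
            break                           # stuck at the level of incompetence
    return level

counts = [0] * LEVELS
for _ in range(10_000):
    counts[final_level()] += 1

for level, n in enumerate(counts):
    print(f"level {level}: {n:5d} careers end here")
```

Except for the few who run out of rungs at the top, every simulated career ends in a job its holder is, by construction, incompetent at; the hierarchy fills with people stuck at their level of incompetence.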

Although originally published in 1969 as a tongue-in-cheek exposition on incompetence within human organizations, supported by fictitious research, the Peter Principle continues to be debated among practitioners and academics.  It has been lambasted as unscientific (something it never purported to be) and as crass overgeneralization, as well as praised as an insightful source of legitimate inquiry.  The staying power of the Peter Principle may be that its simplicity and succinctness align with human experience and explain why so many organizations manage to do such stupid things.

The Value of the Peter Principle Perspective

Despite its limited academic basis, the Peter Principle highlights a particular problem: why do we use promotion as a reward for competent performance (Fairburn & Malcomson, 2001)?  Doing so fails to consider two fundamental truths: competence is domain specific, and management is a very specific skill. These truths conflict with the way most organizations reward and promote people.  Failing to acknowledge them breeds inefficiency and turmoil, and perpetuates the validity of the Peter Principle.

First, few organizations design their job families to reward and promote people simply for getting better and more efficient at the job they do.  Moving from job-specialist level 1 to job-specialist level 2 often requires doing different things, rather than doing the same things better.  We reward people not for being good at their job, but for taking on new roles they have never done before and have not proven they are capable of, promoting constant change rather than long-term competence.  As soon as people demonstrate competence, we move them.

Second, organizations fail to recognize management and leadership as a unique job in its own right.  Being a good engineer says nothing about your ability to be a good engineering manager; conversely, good leadership skills are a boon regardless of function or industry. While understanding the jobs people perform is beneficial, leaders do not have to be competent in every function they lead.  Promoting people who are competent in their job, but have shown no competence for leadership, into positions of leadership once again breeds inefficiency and disruption.  Not only do you lose a competent performer in the prior role, you may very well install incompetent leadership.

Versatilists to the Rescue

Versatilists rarely run afoul of the Peter Principle.  First, versatilists are rarely promoted very high within most organizations, because they do not stay within any specific domain very long (something HR departments seem to think predicts success).  Second, because versatilists are deeply knowledgeable about many domains, they are keenly aware of what they are, and more importantly are not, capable of doing.  As such, versatilists without the desire or capability to lead will not pursue those opportunities.  Versatilists could be the saviors of organizations looking to thwart the Peter Principle, but it will require HR to change its perspective on talent acquisition and development.

In terms of talent acquisition, HR and recruiting need to look beyond the experience requirements they believe a job demands and begin looking at the actual skills.  Far too often, organizations demand years of single-domain experience (like engineering or software development) for roles that don’t necessarily require it (like leading engineering and software development teams).  The skills themselves are more important than the domain in which they were developed.  This matters especially for strategic innovation, where new perspectives brought to the job can be highly valuable.  A versatilist with leadership capability can quickly adapt to new industries and environments while also bringing a host of new skills.

HR and recruiting should also consider the quantity and quality of performance, rather than simply its length, when evaluating promotions or new hires.  Comparing two candidates for a position, one who has shown success in multiple assignments and multiple environments over numerous years should be preferred to one who has shown success in a single domain over the same period.  The candidate with multiple, differentiated successes is much more likely to succeed in the new job as well; the single-domain candidate is ripe to reach their level of incompetence.  Success in adapting to new environments is a skill companies should value, but don’t.

In terms of talent development, HR needs to create ways of rewarding specialists who do their jobs increasingly well over years of dedication without resorting to promotion, while appreciating the versatilists who thrive on taking on new roles.  Promotions should not be the only means of rewarding top performers; bonuses and incentives should drive continued competence building.  Promotions should be reserved for expanding and diversifying the experiences of those who have already proven their ability to adapt and succeed in new roles.  HR needs to look beyond narrow definitions to find the people most likely to succeed, not merely those who have been doing something longer.

As companies continue to struggle with market volatility, disruptive innovation, and dramatic shifts in business models, versatilism should become the new standard of performance.  What good is ten years’ experience in business models and practices that no longer hold true? Perhaps a new principle, the Versatilist Veracity, should succeed the Peter Principle:

“Without Versatilists, a Hierarchy Tends to Become Incompetent”

 

References

Fairburn, J. A., & Malcomson, J. M. (2001). Performance, promotion, and the Peter Principle. Review of Economic Studies, 68(1), 45–66. http://doi.org/10.1111/1467-937X.00159

Peter, L. J., & Hull, R. (1969). The Peter Principle. Cutchogue, N.Y.: William Morrow & Co., Inc.