Improving Multiple-Choice Assessments by Limiting Time

Standardized, multiple-choice assessments frequently come under fire for testing rote knowledge rather than practical, real-world application. Although this criticism is a gross over-generalization that ignores the level of cognitive complexity items (questions) can be written to, standardized assessments are designed to evaluate what a person knows, not how well they can apply it. If that were the end of the discussion, you could be forgiven for assuming standardized testing is poor at predicting real-world performance or differentiating novices from more seasoned, experienced practitioners. However, there is another component that, when added to standardized testing, can raise assessments to a higher level: time. Time, or more precisely, control over the amount of time allowed to complete the exam, can be highly effective in differentiating competence from non-competence.

The Science Bit

Research in the field of expertise and expert performance suggests experts not only know more than non-experts, they also know it differently: experts exhibit different mental models than novices (Feltovich, Prietula, & Ericsson, 2006). Mental models describe how individuals organize and apply knowledge, rather than what that knowledge encompasses. Novice practitioners start with mental models representing the most basic elements of a domain, and those models gradually gain complexity and refinement as the novice accumulates practical experience applying them in real-world performance (Chase & Simon, 1973; Chi, Glaser, & Rees, 1982; Gogus, 2013; Insch, McIntyre, & Dawley, 2008; Schack, 2004).

While Chase and Simon (1973) first theorized that the way experts chunk and sequence information mediated their superior performance, Feltovich et al. (2006) suggested these structural changes allow experts to process more information faster and with less cognitive effort, contributing to greater performance. Feltovich et al. described this effect as one of the best-established characteristics of expertise, demonstrated in numerous knowledge domains including chess, bridge, electronics, physics problem solving, and medicine.

For example, Chi et al. (1982) determined that novices and experts approached problem solving in advanced physics in significantly different ways despite all subjects possessing the knowledge necessary to solve the problems: novices focused on surface details, while experts approached problems from a deeper, theoretical perspective. Chi et al. also demonstrated that the novices' lack of experience and practical application contributed to errors in problem analysis that required more time and effort to overcome. So while the base knowledge of experts and novices may not differ significantly, experts approach problem solving from a differentiated perspective that allows them more success in applying correct solutions the first time and faster recovery when initial solutions fail.

In the same vein, Gogus (2013) demonstrated that experts' mental models are highly interconnected and complex, reflecting how experience allows experts to bring greater amounts of knowledge to bear in problem solving. This ability to apply existing knowledge more efficiently compounds the difference in problem-solving strategy demonstrated by Chi et al. (1982): whereas novices apply problem-solving approaches linearly, one at a time, experts evaluate multiple approaches simultaneously in determining the most appropriate course of action.

Achieving expertise is, therefore, not simply a matter of accumulating knowledge and skills, but a complex transformation of the way experts implement that knowledge and skill (Feltovich et al., 2006). This distinction provides a clue for making assessments better at differentiating expert from novice: the time it takes to complete them.

Cool Real-World Example Using Football (Sorry. Soccer)

In an interesting twist on typical mental model assessment studies, Lex, Essig, Knoblauch, and Schack (2015) asked novice and experienced soccer players to quickly and accurately decide the best choice of tactics (either “a” or “b”) given a video image of a simulated game situation. Lex et al. used eye-tracking systems to measure how the participants reviewed the image, as well as their accuracy and response time. As one would expect, the more experienced players were both more accurate and quicker in their responses. Somewhat surprising was the reason the experienced players performed faster.

While Lex et al. (2015) determined that both sets of players held individual fixations for nearly the same amount of time, experienced players made fewer fixations and observed less of the image overall. Less experienced players needed to review more of the image before deciding, and were still more likely to make incorrect decisions. More experienced players, although not perfect, made more accurate decisions based on less information. The difference in performance was not attributable to differences in basic understanding of tactics or of playing soccer, but to the ability of experienced players to make better decisions with less information in less time.
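To make that speed-accuracy pattern concrete, here is a minimal sketch in Python of how trial logs from a study like this might be summarized per group. The numbers are invented purely for illustration; they are not Lex et al.'s data.

    # Hypothetical trial logs, invented for illustration only:
    # (experience group, answered correctly?, response time in seconds)
    trials = [
        ("novice", True, 4.1), ("novice", False, 5.3), ("novice", False, 4.8),
        ("novice", True, 3.9), ("expert", True, 2.2), ("expert", True, 1.9),
        ("expert", False, 2.5), ("expert", True, 2.1),
    ]

    for group in ("novice", "expert"):
        rows = [(ok, rt) for g, ok, rt in trials if g == group]
        accuracy = sum(ok for ok, _ in rows) / len(rows)  # proportion correct
        mean_rt = sum(rt for _, rt in rows) / len(rows)   # mean response time
        print(f"{group}: accuracy={accuracy:.0%}, mean response time={mean_rt:.1f}s")

Even in this toy form, the expert rows show the signature pattern: higher accuracy and lower response time together.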

The Takeaway

Multiple-choice, standardized assessments are principally designed to differentiate what people know, with limited ability to differentiate how well they can apply that knowledge in the real world. Yet it is also well established that competent performers have numerous advantages leading to better performance in less time. If time constraints are actively and responsibly constructed as an integral component of these assessments, they may well achieve better predictive performance; they could do a much better job of evaluating not just what someone knows, but how well they can apply it.
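As a thought experiment, here is a minimal sketch of one way a time constraint could be folded into scoring. The scoring rule, target times, and numbers are all hypothetical assumptions for illustration, not a method drawn from the studies above.

    # A hypothetical time-aware scoring rule: a correct answer earns full
    # credit within a target time; slower correct answers earn partial
    # credit, fading to zero by twice the target time.
    def item_score(correct: bool, seconds: float, target: float = 60.0) -> float:
        """Score one item on a 0-1 scale, discounting slow correct answers."""
        if not correct:
            return 0.0
        if seconds <= target:
            return 1.0
        return max(0.0, 1.0 - (seconds - target) / target)

    # Two candidates answer the same five items correctly, at different speeds.
    expert_times = [35, 42, 50, 38, 55]
    novice_times = [70, 95, 110, 65, 120]

    expert_total = sum(item_score(True, t) for t in expert_times)
    novice_total = sum(item_score(True, t) for t in novice_times)
    print(f"expert: {expert_total:.2f}/5, novice: {novice_total:.2f}/5")

A pure accuracy score would report 5/5 for both candidates; the time-aware rule separates them. Whether a linear discount is the right shape is an open design question; the point is simply that time, once recorded, can be scored rather than ignored.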


References

Chase, W. G., & Simon, H. A. (1973). The mind’s eye in chess. In W. G. Chase (Ed.), Visual information processing (pp. 215–281). New York, NY: Academic Press. http://doi.org/10.1016/B978-0-12-170150-5.50011-1

Chi, M. T. H., Glaser, R., & Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence (Vol. 1, pp. 7–75). Hillsdale, NJ: Lawrence Erlbaum Associates.

Feltovich, P. J., Prietula, M. J., & Ericsson, K. A. (2006). Studies of expertise from psychological perspectives. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance. New York, NY: Cambridge University Press.

Gogus, A. (2013). Evaluating mental models in mathematics: A comparison of methods. Educational Technology Research and Development, 61(2), 171–195. http://doi.org/10.1007/s11423-012-9281-2

Insch, G. S., McIntyre, N., & Dawley, D. (2008). Tacit knowledge: A refinement and empirical test of the Academic Tacit Knowledge Scale. The Journal of Psychology, 142(6), 561–579. http://doi.org/10.3200/jrlp.142.6.561-580

Lex, H., Essig, K., Knoblauch, A., & Schack, T. (2015). Cognitive representations and cognitive processing of team-specific tactics in soccer. PLoS ONE, 10(2), 1–19. http://doi.org/10.1371/journal.pone.0118219

Schack, T. (2004). Knowledge and performance in action. Journal of Knowledge Management, 8(4), 38–53. http://doi.org/10.1108/13673270410548478
