Everyone knows how to compute an exam average. We use them all the time; along with passing rates, they are the most common measure of learning effectiveness. But while averages and passing rates are easy to compute, interpreting them is not always so obvious.
Let’s explore two cases to understand how computing and reporting on average scores and passing rates may obscure serious training issues. For our examples, we will assume an exam with a passing score of 85% delivered to two different classes.
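To make this concrete, here is a minimal Python sketch. The individual scores are hypothetical (my own illustrative numbers, not real exam data, though they are chosen to be consistent with the figures discussed below):

```python
PASSING_SCORE = 85

# Hypothetical first-try scores (illustrative only).
# Class A: everyone scores right at the 85% passing mark.
class_a = [85] * 10

# Class B: half the class aces the exam, half fails badly.
class_b = [100, 100, 100, 100, 100, 66, 70, 70, 72, 72]

def average(scores):
    return sum(scores) / len(scores)

def passing_rate(scores):
    # True counts as 1, so summing the comparisons counts the passers.
    return sum(score >= PASSING_SCORE for score in scores) / len(scores)

for name, scores in [("Class A", class_a), ("Class B", class_b)]:
    print(f"{name}: average = {average(scores):.1f}%, "
          f"passing rate = {passing_rate(scores):.0%}")

# Class A: average = 85.0%, passing rate = 100%
# Class B: average = 85.0%, passing rate = 50%
```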
Two classes, the same average score, but two different results and two very different implications for the training program. Clearly, something has gone very wrong with Class B. You need to dig deeper into the results, and probably the training, to figure out what happened. Why did half the learners perform so poorly? Reporting only the average score is not sufficient.
Of course, the testing of Class B does not end here. Most companies provide multiple tries to pass a high-stakes exam. I generally recommend three tries, but this policy varies from company to company.
Let’s assume three tries.
Class A reporting is straightforward: 100% passed with an average of 85%.
But what about Class B? If we just report the Class B passing rate, it is 100%. The same (but not really the same) as Class A.
And what about the average score? Misleadingly, Class B has a higher average score than Class A! Whether we report only the passing tries of Class B or all tries, Class B's average is above 85%. If we report the average of all scores for Class B, their average is 88.5%. If we report only the passing or final tries of Class B, their average is 100%.
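Here is a sketch of that arithmetic. The attempt histories below are hypothetical (chosen to reproduce the averages just quoted): five learners pass on the first try, four pass on the second and one needs all three.

```python
# Hypothetical attempt histories for Class B (illustrative only):
# each inner list holds one learner's scores in the order attempted.
class_b_attempts = [
    [100], [100], [100], [100], [100],            # passed on the first try
    [70, 100], [70, 100], [72, 100], [72, 100],   # passed on the second try
    [66, 66, 100],                                # needed all three tries
]

all_tries   = [s for tries in class_b_attempts for s in tries]
first_tries = [tries[0]  for tries in class_b_attempts]
final_tries = [tries[-1] for tries in class_b_attempts]

print(f"All tries:   {sum(all_tries) / len(all_tries):.1f}%")      # 88.5%
print(f"First tries: {sum(first_tries) / len(first_tries):.1f}%")  # 85.0%
print(f"Final tries: {sum(final_tries) / len(final_tries):.1f}%")  # 100.0%
```

Three defensible-sounding averages, three different stories, and only the first-try figure hints at the problem.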
So, what can we do to report our results accurately? The solution is to break out the tests and retests (for Class B) and resist the temptation to group them all together into averages. Something like this:
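Continuing with the same hypothetical Class B attempt histories, a sketch of a per-try breakdown might look like this:

```python
PASSING_SCORE = 85

# Hypothetical Class B attempt histories (same illustrative data as above).
class_b_attempts = [
    [100], [100], [100], [100], [100],
    [70, 100], [70, 100], [72, 100], [72, 100],
    [66, 66, 100],
]

print(f"{'Try':<5}{'Tested':>8}{'Passed':>8}{'Average':>10}")
for try_number in range(3):
    # Scores from learners who took this try (earlier failers only).
    scores = [t[try_number] for t in class_b_attempts if len(t) > try_number]
    passed = sum(s >= PASSING_SCORE for s in scores)
    print(f"{try_number + 1:<5}{len(scores):>8}{passed:>8}"
          f"{sum(scores) / len(scores):>9.1f}%")

# Try    Tested  Passed   Average
# 1          10       5     85.0%
# 2           5       4     93.2%
# 3           1       1    100.0%
```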
It's clear from the disaggregated results that, although everyone in both classes passed the exam (eventually), the underlying data tells a very different story.
So, the key reporting takeaway is: Don’t allow averages or overall statistics to obscure important results. You need to disaggregate the data to gain accurate and actionable insights.
Steven Just Ed.D. is CEO and principal consultant at Princeton Metrics. Email Steven at sjust@princetonmetrics.com or connect through LinkedIn at linkedin.com/in/steven-just-081b76.