METRICS & ANALYSIS
By Danielle Duran
We are driven to learn and succeed, and many of us also seek some proof that we have achieved, or are on our way there. In our image, we want our systems to demonstrate their success as well. In life sciences, we need this from our quality systems: we must provide evidence that we deliver a high-quality product that is safe and effective, and increasingly we may also need to show that it can be made efficiently.
We start our work, and evidence gathering, with training.
While a heavy reliance on reading standard operating procedures (SOPs) with an attestation of understanding is hardly what most businesses or regulators would consider ideal, it is not uncommon for some companies. In these situations, when certain SOPs are deemed higher in risk, criticality or cognitive complexity, a quiz may be added.
We naturally enjoy doing a job well, and it is motivating to be rewarded. We evolved this way to ensure from a young age and over our lifetimes that we would continue to adapt to an ever-changing environment.
Learning professionals know as well as anyone that the greatest motivation comes from a passion to learn, and the success that comes with it is a self-reinforcing pathway. Growing industry trends in gamification harness that energy and leverage it to drive behavior. Tools that support gamification typically include the capability to demonstrate quantitatively how much people engage with a particular activity, or how, specifically, behaviors are changing.
In these cases it is easy to answer a question like, “how do you know this tool is working?” or “how do you know behavior is changing?” For all your other training activities, how are you answering that question?
Show me, don’t tell me. One of the most powerful tools in the toolbelt of a manager, trainer, teacher or any leader, the performance assessment gives clear evidence of mastery, or the lack thereof. In the field of training, it is a critical way of evaluating readiness, as important for field sales reps as it is for manufacturing technicians.
In either case, the qualification standards are defined ahead of time, including a matrix for scoring, and a trainer is prepared to conduct the evaluation. The outcome is shared with supervisors and, in the best cases, results are analyzed for trends and training is adjusted.
But it takes time to do. Time out of the field or off the floor to conduct, to prepare, or for conversations with management. The alternative (not knowing if someone is qualified for work) cannot be accepted, however, and so this valuable time is wisely spent to ensure qualified personnel are prepared for success.
Trainers, managers or subject matter experts (SMEs) must take time from their core duties to define standards, design an evaluation matrix, and create training for the trainers or managers who will conduct the training and evaluation. The alternative (not having a standard to determine whether qualification is met, or not having materials or a method for evaluation) would prevent the qualification from happening, or from happening consistently, and so this wise investment is also made, often to widely varying degrees of robustness, which, in turn, gives varying degrees of effectiveness. It’s an example of the common adage, “You get what you pay for,” though it’s not always heeded.
As learning professionals, many of us would have plenty to contribute here about building out programs to prepare for and measure performance, and plenty to say about coaching or managing to increase performance based on these types of evaluations. Technology tools, some now touting artificial intelligence (AI), are advertised everywhere with claims of making this kind of training easier.
What about assessment of knowledge transfer? How effectively are we measuring the uptake of information before we get to performance? How effectively are we using the resulting data? There are plenty of tools, new and old, to quiz a learner, and plenty of tools to cull and analyze data.
To make the investment worth it, and get the most bang for your buck, there are three critical components that ensure a knowledge assessment (a quiz or test) is effective and worth the time of the personnel subjected to it:
1. Ensure there is a clear purpose.
2. Gather only what you need, and use all you gather.
3. Act on the insights.
Let’s look at each of those.
Foundational learning theory states that learners are more motivated when they can understand the purpose of an activity. It can also be argued that trainers, managers and leaders feel more invested in time allocated to training (and evaluation) activities when they, too, see a clear purpose. When designing a knowledge assessment, start by ensuring there is a clear purpose.
The goals of the assessment must be aligned to what the learner is expected to know as the result of some knowledge-transfer activity (at its most basic, this may be reading a document). We can acknowledge here that not all documents or presentation decks meant as “training” material are designed specifically for effective training, even if they are commonly used as such. We can also acknowledge that the purpose of the “training” document may not be clearly and succinctly distilled.
However, you may find yourself in a position to create a knowledge assessment (quiz or test) to evaluate the knowledge transfer after reading a document or presentation deck. If you want meaningful data, and want to ensure your colleague learners’ time is well spent, you must start here: Define what matters most and what new information the learners need to retain, interpret or apply after engaging with the document. In some cases this may be only a few key points, but it may be a couple of handfuls or more, depending on the risk and the robustness required.
After writing your list, you may want to refine it further and prioritize. Ideally, confirm your list with area leadership or a procedure owner to ensure you are capturing what truly matters, where they want evidence that knowledge has transferred, and where they think a health authority may want evidence.
Before writing questions (and answers if using multiple choice, or evaluation checklists if allowing long-form answers), keep in mind that each question must either reinforce a key point or provide evidence that knowledge has transferred. Writing questions that are purposefully tricky will only frustrate a learner and will likely not give reliable data later. It may also increase administrative burden if it requires a learner to request that a quiz be reissued or an LMS administrator to take additional system steps. Time is our most precious commodity; we must always respect our own and one another’s time.
Our lives today are such that we have near unlimited and often inexpensive access to information. We often unintentionally become information hoarders (How many unnecessary emails are in your inbox, even ads? How many photos, even ones that were meant to be temporary, are sitting on your phone?) and have acquired mounds of data we don’t (and maybe don’t intend to) use.
When it comes to knowledge assessments, we can exercise more discipline and gather only what we need and use all we gather.
Knowledge assessment data should be analyzed on at least two different levels: at the question level, and at the overall assessment level.
Reviewing at the question level is easiest to start by looking at the overall first-time pass rate of each question. It is standard to say that 80% or higher means good performance on an assessment, or on a question itself (based on risk or complexity, you may set a higher standard). Find the questions falling below the 80% mark, perhaps starting with the lowest, and look to see if there is a trend in which wrong answers are chosen.
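As a minimal sketch of what that first pass might look like, the Python snippet below computes per-question first-time pass rates from a hypothetical LMS export and flags anything below the 80% benchmark. The file name and the columns (learner_id, question_id, attempt, chosen_answer, is_correct) are assumptions for illustration, not any real system’s schema.

```python
# A minimal sketch, not a real LMS integration: per-question first-time
# pass rates from an exported responses file. File name and column names
# are assumptions about what your LMS can export.
import pandas as pd

THRESHOLD = 0.80  # standard benchmark; raise it for high-risk or complex content

responses = pd.read_csv("assessment_responses.csv")

# Keep only each learner's first attempt at each question
first_attempts = responses[responses["attempt"] == 1]

# First-time pass rate per question (is_correct assumed to be 0/1)
pass_rates = first_attempts.groupby("question_id")["is_correct"].mean()

# Questions falling below the benchmark, lowest first
flagged = pass_rates[pass_rates < THRESHOLD].sort_values()
print("Questions below the benchmark:")
print(flagged)

# For flagged questions, see whether one wrong answer dominates
wrong = first_attempts[
    first_attempts["question_id"].isin(flagged.index)
    & (first_attempts["is_correct"] == 0)
]
print(wrong.groupby(["question_id", "chosen_answer"]).size().sort_values(ascending=False))
```

A count of which wrong answer learners gravitate toward, as in the last step, is often enough to reveal whether a question is genuinely confusing or simply mis-keyed.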
Resist the urge to start “fixing” at this point; that will be difficult, because learning professionals inherently want to fix things. You may want to start rewriting questions or making changes to the document or training material. However, you have not yet finished your analysis to see all that you can see: you have not yet looked at the full assessment pass rate or conducted a root-cause analysis to see why a particular question might be suffering.
Without understanding the root cause, your solutions may not address what is actually causing the issue, and you may end up wasting your time or that of others. Protect your precious commodity.
After taking a look at questions, zoom out and look at the overall pass rate of your assessments. How many people had to retake an assessment to get a passing score? Are there certain departments with lower overall pass rates? Are there trends related to when the assessment was taken?
Pass rates may dip toward the due dates of training activities, when people are rushing, or at times when there is a heavy load of training at once. There may be cultural differences between departments, or some departments may have been given more context, or even rationale, for the training than others. Or the results may not shed much light.
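Continuing the same hypothetical export, a minimal sketch of the assessment-level view might look like the following: retake counts, first-time pass rate by department, and pass rate by month completed. Again, the file name and columns (learner_id, department, attempt, passed, completed_date) are illustrative assumptions.

```python
# A minimal sketch at the assessment level: retakes, departmental pass
# rates, and pass rates over time. Columns and file name are assumptions.
import pandas as pd

attempts = pd.read_csv("assessment_attempts.csv", parse_dates=["completed_date"])

# How many learners needed more than one attempt to pass?
max_attempts = attempts.groupby("learner_id")["attempt"].max()
print(f"Learners needing a retake: {(max_attempts > 1).sum()} of {max_attempts.size}")

# First-time pass rate by department
first = attempts[attempts["attempt"] == 1]
print(first.groupby("department")["passed"].mean().sort_values())

# First-time pass rate by month completed, to spot dips near due dates
by_month = first.groupby(first["completed_date"].dt.to_period("M"))["passed"].mean()
print(by_month)
```

Even a simple monthly grouping like this can show whether low scores cluster around training due dates or periods when many curricula came due at once.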
After your investigation, stop and consider what you are curious about, or what the new insights are making you want to do. Don’t take action yet, but be sure to take note, because it is in the next set of actions that you decide what to do.
If you are only gathering what you need, and using what you gather, you may be itching to act on the insights you have uncovered in your analysis. Before you write up a plan for what to do, consider the actions below. But also pause to be sure you know the cause of any gaps before creating solutions, so that your solutions can be aligned. Regardless of what solutions you decide are needed, the following four actions are cause-agnostic and are a great place to start making improvements:
During the analysis of items on the assessment, it may have become clear after a root-cause analysis (or feedback from learners) that a question was not clearly worded or was unintentionally tricky. This is a great opportunity to revise the question and ensure it is focused on emphasizing a key point or on gathering evidence that transfer of knowledge occurred. It may be helpful to engage an SME or area leadership, or even a subset of learners, to ensure the improvement is meaningful.
Information on assessment performance should be shared with anyone who is designing the training material or the documents (like SOPs or presentation decks) so that they can update wording for clarification or emphasis and more effectively transfer knowledge to learners.
Knowledge is power, and sometimes that power contains enough energy to cause a change simply by being shared. Managers need to know the results of assessments, especially trends, so that they can reinforce key points in their interactions with their people. Trainers need to know these trends so that they can adjust their interactions with learners.
Meeting these three components of effective assessments will ensure your learners are set up for success, and the next time you are asked, you will be able to show how you know that knowledge transfer occurred.
Remember that the assessment results are as much an evaluation of the training material (or “training” material) as they are of your learners’ performance. The best way to protect your precious resource of time is to invest on the front end.
Your learners and managers will appreciate it, your quality culture will benefit, and you will have meaningful data to provide to learners, management, procedure owners and health authorities.
Danielle Duran is director, GxP compliance and training, Aimmune Therapeutics, a Nestle Health Science company.