Technology
Sci-fi author and cyberpunk pioneer William Gibson once said, “The future is already here, it’s just not evenly distributed.” The meaning of his quote is clear: As technology meets the market, different users engage with and adopt it at very different rates.
Remember when early adopters with iPhones or Android phones were sending text messages to friends who were still on flip phones? The same is true of artificial intelligence (AI). Although virtually all training teams are leveraging AI in some fashion, they’re not all aligned on speed or direction.
In this article, we’ll share some of the lessons learned with companies such as Sanofi, Novartis, Regeneron, Bayer, Biogen, UCB, Otsuka, Astellas and EMD Serono, in the hopes that it helps other teams adopt AI successfully. Let’s start with simple lessons that will be useful to almost anyone, but also cover learnings for enterprise-grade AI programs.
The toolkit generative AI has delivered is clearly being widely adopted. Most people in the field now routinely use ChatGPT, Claude, Perplexity, Midjourney and other common tools.
Even a year ago, that wasn’t the case. But training teams tend to be forward-looking and are always seeking a competitive edge to help them better serve their learners. Nowhere is that more evident than in the broad adoption of AI.
If you’re using consumer-facing tools such as ChatGPT or Claude, get a paid subscription. Most tools offer one for as low as $20 per month. The paid versions offer you more control over your data. Remember, if the service is free, you are the product!
ChatGPT has a “temporary chat” mode that’s similar to your web browser’s private browsing mode. In a temporary chat, ChatGPT does not save the conversation to your history or use it to train the model. If you’re working with information you’d rather ensure is used only once, and never again, switch into temporary mode.
As trainers, we’re always excited to find learners getting value from the material we deliver. After all, that’s the business we’re in.
However, our charter is to train humans, not machines. The large language models (LLMs) that drive generative AI are hungry for training data and “learn” from our chats.
That may be fine for household uses of AI, but when you’re working with business-critical information, you’ll want to ensure that the information remains private. ChatGPT has two options for that, and other tools have similar functionality.
ChatGPT has a toggle under Settings called Memory that allows it to remember your chats so that it can, over time, deliver more relevant results. If you have any concerns about proprietary information in a chat, turn this setting off.
Also under Settings, there’s a toggle marked “Improve the model for everyone.” If you routinely work with sensitive information, switch this off to eliminate any chance of your information being used as training data.
Your IT and compliance teams keep everyone’s information secure. Everything from the rise of the internet to the adoption of phones with built-in cameras has made their job harder, but generative AI may be the biggest challenge yet.
So, get their guidance as you deploy new tools and processes. And consider that consumer-level AI tools may not be secure by default. For critical use cases, consider an enterprise-ready AI solution from a supplier with experience deploying solutions securely — and successfully — in your space.
A trend we’ve seen with large customers is the establishment of Responsible AI Committees (or similarly named committees focused on ethics). This is a great step, in my opinion. Generative AI gives us great power and, as Spider-Man’s Uncle Ben said, “with great power comes great responsibility.”
As you work to roll out AI solutions more broadly, find out if your organization has such a committee or group. Understand the committee and its role and, if possible, network to get to know the members — or, if it’s a cross-functional team as many are, even volunteer your time to help.
Generative AI enables scale and speed, but remember the key rule of scaling: “Nail it, then scale it.” Begin with a well-defined objective, audience and timeframe.
Sanofi, Novartis and UCB all began with that crisp definition of a pilot. For Sanofi, it was improving speed to market by streamlining certification for one specific immunization. At Novartis, it began with the simple idea of giving field team members concrete, actionable feedback on their interactions with health care professionals (HCPs).
Those may seem like very different pilots, but they are all well-defined, time-limited and focused on a specific objective — the factors that set up a successful organization-wide rollout.
LTEN’s members may be trainers, but we’re learners too. I’m sharing these hard-earned lessons in the hopes that they’ll shorten your learning curve on deploying AI successfully and driving change, and success, across your organization.
Noah Zandan is CEO and founder of Quantified. Email Noah at noah@quantified.ai or connect through linkedin.com/in/noahzandan.