Artificial Intelligence
Imagine a world where your next job interview is conducted by artificial intelligence (AI), analyzing your every blink, twitch and vocal inflection. Now picture that same AI deciding whether you’re fit for a promotion based on your learning style in corporate training sessions.
Does this sound like science fiction? Not quite.
Welcome to the brave new world of AI in the training industry, where the line between innovation and ethical quagmire is as thin as a microchip.
From Silicon Valley to Wall Street, AI is making headlines – and not always for the right reasons. These stories aren’t just sensational; they’re alarm bells echoing through the corridors of tech giants and startups alike.
As AI seeps into every corner of our lives, from the apps that predict our weather to the algorithms that shape our social media feeds, one industry stands at a critical crossroads – learning and development (L&D). Here, the promise of personalized learning experiences collides with concerns about privacy, bias and the very nature of human learning.
Are we on the brink of an educational revolution, or are we unknowingly coding the biases of today into the learners of tomorrow?
Let’s walk the ethical tightrope of AI in the training industry, where every step forward could be a leap toward progress or a stumble into an ethical abyss.
AI systems are only as unbiased as the data they’re trained on and the humans who design them. Let’s talk about HSBC’s adventure with AI-powered virtual reality (VR) training. Back in 2019, they partnered on a VR program for soft skills training, but it hit some bumps when they rolled it out globally. The AI, primarily trained on Western expression datasets, consistently misinterpreted common nonverbal cues:
In Hong Kong, the AI got confused by subtle Chinese communication styles. It thought people were being shy when they were just being polite!
The AI often misinterpreted the Chinese practice of “saving face” as indecisiveness. For instance, when a Chinese employee said, “We might want to consider another approach,” the AI read it as uncertainty when it was a polite way of disagreeing.
Also, the use of silence for reflection was sometimes flagged as disengagement by the AI when it’s often a sign of thoughtful consideration in Chinese culture.
Over in the Middle East, it missed the boat on local gestures and greetings.
The AI didn’t recognize the importance of the right hand in greetings and gestures. Using the left hand, which is considered impolite in many Middle Eastern cultures, wasn’t flagged as a faux pas.
The system didn’t account for the closer physical proximity common in Middle Eastern business interactions, marking it as “invading personal space” based on Western norms.
Even in the UK, it struggled with British understatement when the Brits were just being British.
When a British employee said, “That’s not bad,” meaning it was quite good, the AI interpreted it as lukewarm approval rather than positive feedback.
Phrases like “I might suggest” or “Perhaps we could” were interpreted by the AI as a lack of confidence when they’re often used by Brits to politely but firmly make recommendations.
It got to the point where the VR scores didn’t match up with real-world performance. Imagine acing your job but failing in VR!
HSBC didn’t just shrug it off, though. They brought in cultural experts, added cultural settings to the VR and threw in some extra training on cross-cultural communication. They also made sure humans were keeping an eye on things, just in case the AI missed some cultural nuances.
HSBC’s story shows how tricky it can be to use AI for soft skills training across different cultures. But it also proves that with some tweaks and a willingness to learn, you can turn those challenges into some valuable insights for global business.
In the training industry, AI systems often collect vast amounts of data on learners’ behavior, preferences and performance. While this data can be used to improve learning outcomes, it also poses significant privacy risks if not handled properly.
To address these privacy concerns:
Strict data protection measures must be put in place to safeguard learners’ privacy. This includes being transparent about what data is collected and how it’s used.
Organizations should give learners control over their data, including the right to access, correct and delete their information (a minimal sketch of these controls appears after this list).
Regular audits should be conducted to ensure compliance with data protection regulations and best practices.
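To make learner data rights concrete, here is a minimal sketch of what access, correction and deletion controls might look like in code. It is illustrative only: the LearnerDataStore class and its methods are hypothetical, and a real system would add authentication, audit logging and durable storage.

```python
from dataclasses import dataclass, field

@dataclass
class LearnerDataStore:
    """Hypothetical store giving learners access, correction and deletion rights."""
    records: dict = field(default_factory=dict)  # learner_id -> profile data

    def access(self, learner_id: str) -> dict:
        # Right of access: return a copy of everything held about the learner.
        return dict(self.records.get(learner_id, {}))

    def correct(self, learner_id: str, fixes: dict) -> None:
        # Right to rectification: apply learner-supplied corrections.
        self.records.setdefault(learner_id, {}).update(fixes)

    def delete(self, learner_id: str) -> None:
        # Right to erasure: remove the learner's data entirely.
        self.records.pop(learner_id, None)

store = LearnerDataStore()
store.correct("learner-42", {"preferred_pace": "self-paced"})
print(store.access("learner-42"))  # {'preferred_pace': 'self-paced'}
store.delete("learner-42")
print(store.access("learner-42"))  # {} -- nothing retained after erasure
```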
Many AI algorithms, particularly those using deep learning, operate as “black boxes,” making it difficult to understand how they arrive at their decisions. In a training context, this lack of transparency can be problematic.
If an AI system recommends a certain learning path after assessing a student’s abilities, both educators and learners should be able to understand the reasoning behind that decision.
To improve transparency:
AI developers should prioritize creating interpretable models that can provide clear explanations for their decisions (see the sketch after this list).
Organizations using AI in training should provide clear documentation on how their AI systems work and make decisions.
Regular reviews and audits of AI systems should be conducted to ensure they’re functioning as intended and to identify any unintended consequences.
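As one illustration of the first point, an inherently interpretable model such as a shallow decision tree can print the exact rules behind a recommendation. The sketch below uses scikit-learn with invented learner features and labels; it is a toy, not a real recommendation engine.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented learner features: [quiz_score, hours_practiced, sessions_completed]
X = [
    [45, 2, 1],
    [55, 4, 3],
    [80, 10, 8],
    [90, 12, 9],
    [60, 5, 4],
    [85, 9, 7],
]
# Hypothetical labels: 0 = "review fundamentals", 1 = "advance to next module"
y = [0, 0, 1, 1, 0, 1]

# Keeping the tree shallow keeps it human-readable; depth is the interpretability knob.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text prints the decision rules so educators and learners can audit each path.
print(export_text(model, feature_names=["quiz_score", "hours_practiced", "sessions_completed"]))
```

Unlike a deep neural network, every recommendation from a model like this can be traced to a handful of readable if-then rules.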
In addition to addressing privacy and transparency, other steps can be taken to improve AI in training:
AI development teams must include greater diversity to bring in a wider range of perspectives, potentially reducing bias in AI systems.
Rigorous testing of AI systems for bias should be conducted before they’re deployed in educational settings. This includes testing with diverse data sets and involving a wide range of stakeholders in the testing process (one simple check is sketched after this list).
Ongoing monitoring and evaluation of AI systems should be implemented to identify and address any biases or issues that emerge over time.
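One simple check that works both before deployment and as ongoing monitoring is the “four-fifths” disparate impact ratio: compare how often each group of learners receives a favorable outcome, such as an “advance to next module” recommendation. The sketch below uses invented data and is a starting point, not a complete fairness audit.

```python
from collections import defaultdict

def disparate_impact(outcomes):
    """outcomes: iterable of (group, favorable) pairs.
    Returns per-group selection rates and the min/max rate ratio."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += ok
    rates = {g: favorable[g] / totals[g] for g in totals}
    return rates, min(rates.values()) / max(rates.values())

# Invented example: recommendation outcomes by region of a training cohort.
sample = [("APAC", True), ("APAC", False), ("APAC", True),
          ("EMEA", True), ("EMEA", True), ("EMEA", True),
          ("AMER", False), ("AMER", True), ("AMER", True)]
rates, ratio = disparate_impact(sample)
print(rates)
# A common rule of thumb flags ratios below 0.8 for human review.
print("flag for review" if ratio < 0.8 else "within threshold", round(ratio, 2))
```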
By addressing these concerns, we can harness AI’s power to improve training outcomes while protecting learners’ rights and ensuring fairness. As educators, technologists and lifelong learners, we have a collective responsibility to shape an AI-driven future where technology augments human intelligence.
Ultimately, this journey isn’t just about smarter machines, but about nurturing smarter, more capable humans. Let’s embrace this challenge with open minds and an unwavering commitment to ethical progress.
Nirmala Ravi is associate vice president, solutioning, for ELB Learning. Email Nirmala at nirmala@elblearning.com or connect through www.linkedin.com/in/nirmala-ravi-5761332/.