Artificial Intelligence
In the October 2024 issue of LTEN Focus on Training magazine, we discussed generative artificial intelligence (AI) and how it differs from traditional AI. In this issue, let's look at implementation.
Organizations must tread carefully when integrating generative AI into their workflows. Before doing so, several key tasks need to be tackled to put guardrails in place for the generative AI users in your organization and to protect your business and customer data.
The top four tasks, in order of importance, are: crafting an AI policy, deciding which generative AI tools to allow, keeping humans in the loop and clarifying responsibility for AI-generated content.
An AI policy is a critical component for any organization planning to implement AI technologies. Without a well-defined AI policy, the deployment of AI can lead to ethical dilemmas, data privacy issues and a lack of accountability.
Here’s why an AI policy is important:
Guides ethical use: An AI policy outlines the ethical guidelines for AI deployment, ensuring that the technology is used in a manner consistent with the organization’s values. This includes preventing misuse and ensuring that AI applications do not cause harm.
Ensures data privacy and security: AI systems rely heavily on data. An AI policy defines how data will be collected, stored and used, ensuring compliance with data protection regulations and safeguarding user privacy. This is crucial for maintaining the trust of stakeholders and preventing data breaches.
Promotes transparency: Transparency in AI operations is vital for trust, and an AI policy mandates that the decision-making processes of AI systems are clear and understandable. This helps in building trust with users and stakeholders by showing how and why certain decisions are made.
Regulates AI usage: Without a policy, there could be unregulated and inconsistent use of AI across the organization, leading to potential risks and inefficiencies. An AI policy standardizes AI usage, ensuring that all AI initiatives are aligned with organizational goals and regulatory requirements.
Facilitates accountability: An AI policy assigns responsibility for the development, deployment and monitoring of AI. This accountability is essential for addressing any issues that arise and for continuous improvement of AI applications.
Organizations that implement AI without a clear policy risk facing ethical controversies, legal challenges and potential harm to their reputation. Therefore, developing a comprehensive AI policy is a foundational step in the responsible implementation of AI technologies.
The next key task is to decide which generative AI tools your organization will allow employees to use, along with their approved use cases. Choosing the right AI tools is critical to ensure that the technology is effective and aligned with your organization's needs. You should evaluate generative AI tools like you would evaluate a new car. Here's a list of key factors to consider when evaluating AI tools:
Compliance with policies and regulations: Does the AI tool you select comply with both your organization’s AI policy and external regulations? This includes adherence to data privacy laws, ethical guidelines and industry standards.
Accuracy and reliability: How accurate and reliable is the output of the AI tool? Look for tools that have a proven track record and have been rigorously tested. Reliable AI tools reduce the risk of errors and increase the credibility of AI-generated outputs.
Transparency and explainability: Does the AI tool offer transparency in its decision-making processes? Explainability is crucial, especially in fields like life sciences, where understanding the rationale behind AI decisions is important for trust and credibility.
Scalability and flexibility: Can the AI tool scale according to your organization's needs, and is it flexible enough to be adapted to different tasks and use cases? This includes the ability to integrate with existing systems.
Supplier reputation and support: Consider the reputation of the AI tool supplier and the support they offer. Reputable suppliers with strong customer support can provide valuable assistance in implementation.
Bias mitigation: What measures does the AI tool supplier have in place for bias mitigation? AI tools should have mechanisms to identify and correct biases to ensure fairness and inclusivity in their outputs.
User-friendliness: Is the AI tool user-friendly and easy to integrate into your existing workflows? Tools that require extensive training or complex integration processes can hinder adoption and usage.
Cost-effectiveness: What is the cost of the AI tool in relation to its benefits? Cost-effectiveness includes not only the initial purchase price but also ongoing maintenance, training and support costs.
Data storage: Where is your data stored? How is it used? Ensure that your company and customer data is protected and will not be used to train AI models or to build new tools created by the supplier.
By carefully evaluating AI tools based on these criteria, organizations can ensure that they select tools that are not only effective and reliable but also aligned with their ethical and operational standards.
Human-in-the-loop systems combine the efficiency of AI with the critical thinking and ethical judgment of humans. This hybrid approach is extremely important, and it offers several benefits:
Quality control: Humans can review and validate AI-generated outputs to ensure they meet quality standards.
Bias mitigation: Human oversight helps identify and correct biases in AI models, promoting fairness and inclusivity.
Ethical decision-making: Humans can make nuanced decisions that consider ethical implications, which AI alone might overlook.
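For readers who build these workflows, here is a minimal sketch of the human-in-the-loop pattern in Python. The generate_draft callable is a hypothetical stand-in for whatever generative AI tool your organization approves, and the review step represents a human decision; this illustrates the gate itself, not any particular product's API.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List, Optional


class ReviewDecision(Enum):
    APPROVED = "approved"
    NEEDS_REVISION = "needs_revision"
    REJECTED = "rejected"


@dataclass
class Draft:
    prompt: str
    content: str
    decision: Optional[ReviewDecision] = None
    reviewer_notes: List[str] = field(default_factory=list)


def human_in_the_loop(
    prompt: str,
    generate_draft: Callable[[str], str],       # hypothetical call to a generative AI tool
    review: Callable[[Draft], ReviewDecision],  # a human reviewer's decision, not automated
) -> Draft:
    """Generate a draft with AI, but release nothing until a human signs off."""
    draft = Draft(prompt=prompt, content=generate_draft(prompt))
    draft.decision = review(draft)  # quality control, bias check, fact check
    if draft.decision is not ReviewDecision.APPROVED:
        draft.reviewer_notes.append("Held for revision or rejection; not released.")
    return draft


if __name__ == "__main__":
    # Stand-ins for illustration only; a real deployment would route the
    # review step to an actual person, not a function.
    fake_ai = lambda p: f"Draft training summary for: {p}"
    cautious_reviewer = lambda d: ReviewDecision.NEEDS_REVISION
    result = human_in_the_loop("GxP onboarding module outline", fake_ai, cautious_reviewer)
    print(result.decision, result.reviewer_notes)
```

The design choice that matters here is that the generation step and the release step are never connected directly; every AI output passes through a human decision before it goes anywhere.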
The human role in AI systems is indispensable for several reasons. One major concern is AI hallucination, where AI models generate content that isn’t based on real-world data or logical inference. This can lead to misinformation and incorrect decisions, especially in fields like life sciences where accuracy is crucial. Human oversight helps identify and correct these errors before they cause harm.
Additionally, while AI can process vast amounts of data, it often lacks the contextual understanding that humans possess. Humans can interpret nuances, understand cultural contexts and apply common sense in ways that AI cannot. This ensures that AI outputs are relevant and aligned with organizational goals.
Ethical decision-making also often requires moral reasoning and empathy that AI lacks. Human involvement ensures that ethical standards are maintained, considering broader societal impacts.
Humans are also more adaptable to new information and changing circumstances than AI. This adaptability is essential in dynamic environments like life sciences, where new discoveries and regulatory changes are frequent. Human-in-the-loop systems can quickly adjust to these changes, ensuring AI applications remain relevant and effective.
Accountability and trust are another critical aspect. When decisions are made solely by AI, it can be challenging to identify who is responsible for errors or ethical breaches. Human oversight ensures a clear chain of responsibility, which is crucial for maintaining trust among stakeholders. Finally, complex problem-solving often requires creativity, intuition and interdisciplinary knowledge that humans bring to the table.
In training and development, especially within life sciences, crafting effective learning programs requires understanding human behavior, motivation and educational principles — areas where human expertise is indispensable.
Determining who is responsible for the content that generative AI produces is a debated topic. Some believe they are not responsible for the outputs of generative AI, while others believe that those who use generative AI are fully responsible for the output and how it is used. Navigating this sensitive topic responsibly is important.
When it comes down to brass tacks, the organization deploying the AI system holds the primary responsibility for its outputs. Any content generated by AI must be carefully vetted and approved by human overseers within the organization to ensure its accuracy, appropriateness and adherence to ethical standards. As a user of generative AI within an organization, it is therefore your responsibility to carefully review and fact-check the outputs it produces.
Most recently, users of generative AI tools like Stable Diffusion have faced potential liability for creating content that closely resembles copyrighted works. Researchers have demonstrated that, through selective prompting strategies, users can produce outputs that infringe on copyright protections, raising the possibility of users being held responsible for such violations.
AI developers and suppliers also play a significant role in this process. They must ensure that their tools are safe, reliable and aligned with ethical guidelines. This involves providing robust training data, maintaining transparency in AI operations and implementing mechanisms for bias mitigation. By doing so, they help organizations use AI responsibly and effectively.
In July 2024, a federal judge in California allowed a class action lawsuit against Workday to move forward. The lawsuit alleges that Workday’s AI-driven job applicant screening software discriminates against candidates based on race, age and disability.
The plaintiff, Derek Mobley, contends that he was unfairly rejected for more than 100 jobs due to biases inherent in Workday's algorithms. The court's decision to treat Workday as an "agent" of the employers using its software, thereby subjecting it to anti-discrimination laws, underscores the legal responsibilities of AI providers. This case sets a significant precedent, emphasizing that companies that provide or deploy AI technologies can be held liable for discriminatory impacts, even if they are not directly involved in the hiring process.
As generative AI continues to evolve, it's important to remember that technology should complement human expertise rather than replace it. There are still many things that humans do better than AI. The integration of generative AI in the workplace shouldn't be just about adopting new tools; it should be about creating an environment where humans use AI to enhance their capabilities. This balanced approach will enable life science trainers and L&D professionals to develop innovative training solutions that are both effective and ethically sound.
Finally, let’s create an action plan:
Craft an AI policy: Team up with key stakeholders to develop a comprehensive AI policy covering ethics, data privacy and transparency.
Curate approved tools: Build a list of vetted AI tools that fit your organization’s needs and align with your policies.
Ensure human oversight: Set up processes for human-in-the-loop supervision in AI applications to maintain quality, fairness and ethical standards.
Upskill your team: Offer training on effective and ethical AI tool use, emphasizing the crucial role of human judgment.
Stay vigilant: Regularly assess AI applications, track their performance and gather user feedback to drive continuous improvement.
As Dale Carnegie wisely said, “Inaction breeds doubt and fear. Action breeds confidence and courage. If you want to conquer fear, do not sit home and think about it. Go out and get busy.”
Myra Roldan is a technology thought leader, author and speaker with decades of experience in AI and disruptive technologies. Connect with Myra through https://linkedin.com/in/myraroldan.