By Maggie Mancini
Artificial intelligence and other technological advancements have fundamentally redefined work, and background screening processes are in the midst of a digital evolution. HR leaders and organizations must consider how to adapt to an ever-changing regulatory landscape and navigate the impact that AI tools have had on the job search and employment market. Making the most of the technology, while finding ways to verify human job seekers amid a deluge of fake or fraudulent candidates, is essential to streamlining the hiring process and delivering quality candidates quickly and efficiently.
When it comes to finding and leveraging ethical uses of AI for background screening, Kirsten Wiegman, chief marketing officer at InCheck, stresses the importance of ensuring that AI augments—but doesn’t replace—human judgment.
"From a business perspective, AI supports a multitude of day-to-day work across functions," she says. "For example, client services can quickly create additional touchpoints, summarize a service ticket issue without reading a trail of emails, or automate and simplify new account onboarding tasks."
Operationally, leaders can start by targeting task-oriented, repeatable work that AI can handle. When AI is responsible for task routing or document parsing, leaders can reallocate time for research-intensive human reviews, Wiegman explains.
"AI can offer significant efficiency and data-handling advantages, but it must be applied ethically, especially in a field as sensitive as background screening," says Alan Lasky, SVP of client success and development at Reliable Background Screening. "One major ethical concern is the potential misuse of AI to make blanket decisions based solely on algorithmic data, rather than conducting the individual assessments required under federal guidance such as the EEOC's 2012 factors and various state-level Ban-the-Box and Fair Chance hiring laws."
Misuse of AI has already had consequences in the courts, Lasky explains. One background screening company settled for $4.46 million over a lack of human oversight in AI-driven decision-making. In another case, companies paid over $2 million in settlements tied to AI scoring systems that allegedly discriminated based on race and violated ADA requirements.
There are critical guardrails to consider, Wiegman adds. These include:
human-guided workflows to ensure automated steps and communications align with business goals, compliance requirements, and brand strategy;
secure environments that do not use personal information for model training; and
organization-wide training so teams know when AI is appropriate and when it is not.
"While AI can be a powerful tool, it must be deployed with strong human oversight and a commitment to fairness," Lasky says. "Without these guardrails, companies risk prioritizing speed over accuracy, a mindset that can have Titanic-like consequences."
Fraudulent applications are on the rise, and AI is both part of the problem and part of the solution, Lasky explains. Recent studies indicate that 70% of companies now use AI in their hiring processes. Gartner predicts that by 2028, one in four candidate profiles will be fake. And in the U.K., 67% of large employers have already reported increases in application fraud, with many attributing the surge directly to AI-generated content, he adds.
"To combat this trend, HR and TA leaders must adopt AI tools of their own, along with rigorous, comprehensive verification processes," Lasky says. Best practices include:
comparing resume data to verified records;
incorporating live or video interviews to assess authenticity and consistency;
conducting comprehensive background checks; and
using primary source verification of education and employment.
There are several ways HR and TA leaders can use comprehensive background screening solutions to screen out fraudulent or fake candidates. For Wiegman, it's important that HR leaders add identity verification before running other record checks: placing document authentication and selfie-liveness checks at the front of the workflow helps prevent impersonation and duplicate profiles. It also ensures downstream background checks are tied to the correct person, which improves overall screening integrity.
"We are still in the 'wild west' era of AI in hiring," Lasky says. "With federal, state, and municipal authorities continuing to define what constitutes 'fair' and lawful use of AI, companies must be especially cautious, not only in how they use AI themselves, but also in how their vendors use it on their behalf."
Lasky offers several best practices for reducing legal and compliance risks related to AI:
Include an AI clause in vendor contracts. Work with legal counsel to ensure vendors are contractually obligated to use non-discriminatory data and comply with all relevant laws.
Maintain human oversight. AI should never be the sole decision-maker. Final hiring decisions should involve human review and be tailored to the individual candidate.
Standardize hiring team training. Ensure all staff involved in hiring are trained consistently on the organization’s policies and relevant legal requirements.
Know the jurisdictions. Consider where the candidate resides, where they will work, and where the company is located. Each may have different laws that apply.
Stay informed. AI regulations are evolving rapidly. Keep up with legal updates and adapt practices accordingly to remain compliant.
A well-run screening process also keeps candidates informed, Wiegman adds. Candidates see clear status cues that tell them what is happening now, what comes next, and when. Forms are mobile-friendly, language is plain, and help is easy to find, which keeps drop-off low. It's also important to track what matters, like completion rates, time to first result, and where people get stuck. The overall aim is fewer surprises, fewer stalls, and reliable decisions at a predictable cost, Wiegman explains.