AI didn’t arrive in hiring quietly; it came with a promise: smarter, faster, fairer decisions. The first two scaled quickly. The third is still up for debate and, increasingly, up for judgment in federal courts. Concerns around AI bias in hiring are now front and center for employers.
AI-powered hiring tools processed over 30 million applications in 2024 alone. At the same time, they triggered hundreds of discrimination complaints (HR Defense, 2025). A wave of lawsuits and a rapidly expanding patchwork of state and local regulations are now forcing every HR and talent acquisition leader to confront a fundamental question: Are the AI tools in your hiring process creating legal liability?
This is no longer a theoretical risk. The cases are active. The class sizes are enormous. And the precedents being set today will define AI hiring for the next decade. Here is what every TA leader needs to know, along with the steps to take right now.
A Legal Landscape in Motion
Litigation is no longer isolated. Mobley v. Workday — now a nationwide collective action potentially covering over one billion rejected applications — established that AI vendors can be held liable as agents of employers. Harper v. Sirius XM brought Title VII racial discrimination claims against an employer’s own AI screening tool. Complaints against HireVue and Intuit raised alarms about video-based AI penalizing deaf candidates and candidates of color. Amazon faces accusations of using AI to automatically deny disability accommodations. And the EEOC has already reached its first AI hiring discrimination settlement, for $365,000 — a signal that enforcement does not require a full trial to carry real cost.
What matters is what these cases collectively signal: liability can reach vendors as well as employers, enforcement carries real cost even without a trial, and the precedents forming now will govern AI in hiring for years to come.
Why AI Bias in Hiring Happens — Even When It’s Not Supposed To
The assumption that algorithmic tools are inherently neutral because they don’t “see” race or age doesn’t hold up under scrutiny. A University of Washington study found AI models favored resumes with white-sounding names in 85 percent of cases. A Stanford study showed screening tools rated older male candidates above equally qualified female and younger applicants. Research from VoxDev found AI prefers female applicants over Black male candidates with identical credentials. A 2024 study found large language models producing racially biased decisions based on dialect patterns alone. The mechanism is what courts call “proxy discrimination” — ZIP codes, employment gaps, and school names correlate with protected characteristics without naming them. The system looks neutral; the outcomes are not.
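The proxy mechanism is straightforward to test for. Below is a minimal sketch of one common audit technique, assuming pandas and scikit-learn and using entirely hypothetical column names: check whether the "neutral" features a screening model consumes can themselves predict a protected attribute. If they can, the model can discriminate without ever being shown that attribute.

```python
# Minimal proxy-discrimination check (illustrative; column names are
# hypothetical). If nominally neutral features predict a protected
# attribute well above chance, they are acting as proxies for it.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_risk_score(df: pd.DataFrame, neutral_features: list[str],
                     protected_col: str) -> float:
    """Cross-validated accuracy of predicting the protected attribute
    from the 'neutral' screening features alone."""
    X = pd.get_dummies(df[neutral_features])   # encode ZIPs, school names
    y = df[protected_col]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Example (hypothetical data): a score far above the base rate means
# ZIP code and school name encode race even though race is never input.
# score = proxy_risk_score(applicants, ["zip_code", "school"], "race")
```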
As we outlined in our white paper AI + Human Judgment: How High-Performing TA Teams Win in 2026, uncontextualized automation amplifies these patterns rather than correcting them — eroding candidate trust and obscuring accountability in the process.
The Regulatory Environment: A Patchwork Building Toward a Standard
Even as federal enforcement has pulled back — the EEOC’s AI and Algorithmic Fairness Initiative was closed following an April 2025 executive order — states are accelerating in the opposite direction. New York City, California, Illinois, Colorado, Texas, and the EU have each enacted distinct AI hiring requirements, ranging from mandatory bias audits and candidate notice obligations to transparency assessments and human oversight mandates. The full breakdown of each law is covered in our companion piece, but the through line is consistent: compliance is now a jurisdiction-by-jurisdiction obligation, and the Colorado AI Act deadline of June 30, 2026, is closer than most teams realize.
Legal experts anticipate that as federal enforcement recedes, state agencies will become more active and the volume of private litigation, like Mobley and Harper, will increase rather than decrease. As attorney Alissa Horvitz has noted, employees and applicants can still hire private attorneys to bring disparate impact claims in court regardless of the federal posture. The compliance imperative is not going away: as regulations expand, AI bias in hiring is becoming a core compliance issue, not just a technical concern.
The People Science Perspective: Responsible AI Starts With Design
None of the lawsuits making headlines right now allege that companies set out to discriminate. They allege something more troubling: organizations moved fast, trusted their vendors, skipped the governance work, and ended up legally exposed for outcomes they never intended. That gap between adoption and accountability is where most AI bias risk lives.
Our position has always been that AI cannot replace human judgment in hiring, only augment it. The moment meaningful human oversight is removed from a consequential employment decision, the organization owns not just the operational risk but the moral and legal responsibility for what the system produces. As we explored in Designing Workforce Agility in the Age of AI, AI use across HR tasks climbed from 26 percent in 2024 to 43 percent in 2025, according to SHRM’s 2025 Talent Trends research. Adoption at that pace without governance infrastructure is not transformation. It is exposure.
The following steps are where that work begins.
6 Steps to Responsible AI Usage in Hiring
Whether you are currently using AI in your hiring process or evaluating vendors, these steps are not optional; they are foundational.
1. Audit Your AI Tools for Disparate Impact — Now:
Don’t rely on vendor assurances alone. Conduct internal or third-party audits to verify your tools aren’t producing adverse outcomes by race, age, disability, or gender. NYC’s Local Law 144 requires it. California’s regulations strongly encourage it. The Workday litigation has made clear that missing audit documentation materially weakens an employer’s legal position. If a vendor can’t explain their model or won’t share outcome data, that’s a red flag.
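One place to start is the four-fifths rule from the EEOC’s Uniform Guidelines: if any group’s selection rate falls below 80 percent of the highest group’s rate, that is a conventional signal of potential disparate impact. Here is a minimal sketch, assuming a pandas DataFrame of screening outcomes with hypothetical column names; treat it as a triage heuristic, not a legal determination.

```python
# Minimal adverse-impact triage using the four-fifths rule
# (hypothetical columns: "group", and "advanced" as 0/1 outcomes).
import pandas as pd

def adverse_impact_ratios(outcomes: pd.DataFrame) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 warrant investigation under the 4/5 heuristic."""
    rates = outcomes.groupby("group")["advanced"].mean()
    return rates / rates.max()

# ratios = adverse_impact_ratios(screening_log)
# flagged = ratios[ratios < 0.8]   # groups to investigate first
```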
2. Preserve Meaningful Human Oversight at Every Decision Point:
AI should surface candidates and flag patterns, not make final employment calls. The Amazon disability accommodation case is the cautionary example: without meaningful human review, even well-designed tools can override legally required accommodations. Build explicit human checkpoints at every AI-assisted stage and maintain logs that document where human judgment was applied.
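One way to make those checkpoints verifiable is a structured record for every AI-assisted decision. The sketch below is illustrative; the fields are our own assumptions about what a reviewer log should capture, not a standard schema.

```python
# Illustrative human-oversight log entry (field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class OversightRecord:
    candidate_id: str
    stage: str                # e.g. "screening", "assessment"
    ai_recommendation: str    # what the tool suggested
    human_decision: str       # what the reviewer decided
    reviewer: str
    rationale: str            # required: the reviewer's reasoning
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a reviewer overriding an automated rejection, with the
# rationale preserved for audit.
# log.append(OversightRecord("c-1047", "screening", "reject", "advance",
#                            "j.rivera", "2021 gap was documented caregiving"))
```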
3. Require Contractual Transparency From AI Vendors:
Workday’s liability as an “agent” of employers set a clear precedent, but it doesn’t absolve the employer. Vendor contracts should require bias testing documentation, outcome transparency, nondiscrimination representations, and indemnification provisions. Ask vendors how the training data was sourced, whether the model has been tested for disparate impact, and what governance is in place post-deployment. Accountability can’t be outsourced, but risk can be allocated clearly.
4. Embed DEI Metrics Into Your AI Performance Framework:
Track diversity outcomes at every AI-assisted stage: sourcing, screening, assessment, and offer. Disproportionate rejection rates for any demographic group demand investigation, not rationalization. Build these metrics into TA performance reviews alongside speed and cost. Equitable outcomes are a business performance measure, not a separate initiative. This is the approach built into our Recruiting Continuum and Hiregate analytics platform.
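In practice, that tracking can be as simple as pass-through rates per group at each stage. A minimal sketch, assuming a hypothetical event log of candidate outcomes:

```python
# Stage-by-stage pass-through rates by demographic group (illustrative;
# columns "group", "stage", and 0/1 "passed" are hypothetical).
import pandas as pd

def stage_pass_rates(events: pd.DataFrame) -> pd.DataFrame:
    """Rows are groups, columns are stages (sourcing, screening,
    assessment, offer); cells are pass-through rates. A group lagging
    at one stage pinpoints where to investigate."""
    return events.pivot_table(index="group", columns="stage",
                              values="passed", aggfunc="mean").round(3)

# rates = stage_pass_rates(funnel_log)   # review alongside speed and cost
```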
5. Train Your TA Teams on AI Limitations and Bias Risks:
The HireVue and Intuit complaints point to a specific gap: teams using video-based AI tools often don’t know how those tools can disadvantage candidates with disabilities or non-dominant communication styles. AI literacy is a compliance skill, not a technical one. Training should cover how disparate impact occurs, how to recognize bias signals, when to override automated recommendations, and how to document decision rationale.
6. Build an Adaptive Compliance Program for a Shifting Legal Landscape:
AI hiring compliance is no longer a single policy. It is a jurisdiction-by-jurisdiction program. Assign ownership within HR or legal, conduct annual reviews, and map your operations to applicable requirements. Track Mobley v. Workday and Harper v. Sirius XM as the precedent-setting cases for what responsible AI legally requires. The Colorado AI Act takes effect June 30, 2026. That deadline is closer than it appears.
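Mapping operations to requirements can begin as a structured inventory per jurisdiction. The sketch below is illustrative only; the obligations listed are simplified from this article, and every entry should be verified with counsel before anyone relies on it.

```python
# Illustrative jurisdiction-to-obligation map (simplified; not legal
# advice; verify every requirement and date with counsel).
from datetime import date

COMPLIANCE_MAP = {
    "NYC Local Law 144": {
        "obligations": ["annual bias audit", "candidate notice"],
        "effective": date(2023, 7, 5),   # assumption; confirm with counsel
    },
    "Colorado AI Act": {
        "obligations": ["risk management program", "transparency"],
        "effective": date(2026, 6, 30),  # per the deadline cited above
    },
}

def upcoming_deadlines(today: date) -> list[str]:
    """Jurisdictions whose effective date is still ahead."""
    return [name for name, entry in COMPLIANCE_MAP.items()
            if entry["effective"] > today]
```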
The Intersection of DEI and AI: An Opportunity, Not Just a Risk
It would be a mistake to read this article and conclude that AI and DEI are in fundamental tension. Deployed with intention, they are not.
Properly governed, AI can reduce the inconsistency of unstructured human review, surface skills-based matches a resume-focused screener would miss, and identify high-potential candidates from non-traditional backgrounds. The potential is real. The conditions for realizing it are bias testing, human oversight, transparent data practices, and ongoing monitoring.
As we have argued in our work on AI and human judgment, workforce agility, and the AI talent handoff, the organizations winning in talent acquisition today are not those using the most AI. They are those using it most responsibly. In an era of accelerating litigation and expanding state regulation, that distinction has never mattered more.
People Science Is Your Partner in Responsible AI
Since 1997, People Science has worked at the intersection of talent acquisition strategy and technology. Through our Recruiting Continuum, RPP model, and Hiregate analytics platform, we help organizations use AI to improve hiring outcomes without compromising equity, compliance, or candidate experience.
If you are auditing your AI hiring tools for bias risk, building a DEI-aligned TA strategy, or navigating a complex compliance landscape, we would welcome the conversation.
Learn more: Book a meeting with People Science
