
Navigating AI Tools, Third-Party Recruiters, and Hiring Liability in 2025 and beyond

(originally published 9.10.25)


The integration of Artificial Intelligence (AI) into the hiring process has promised unprecedented efficiency, from scanning resumes to analyzing video interviews (source: InfoStride). The results, however, have been far from perfect, and this technological leap brings a complex web of legal questions, particularly concerning liability for discrimination.


When a company partners with a third-party recruiter using these AI tools, who is responsible if the algorithm is biased? The answer, under a rapidly evolving legal framework, is often "both."

 

The Legal Landscape: Federal and California Law

As of 2025, no single federal law explicitly governs AI in hiring, but existing anti-discrimination statutes provide a strong foundation for enforcement. In parallel, states like California are enacting more specific regulations (source: K&L Gates).

 

Federal Oversight:

The primary federal body addressing this issue is the Equal Employment Opportunity Commission (EEOC). The EEOC enforces Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). (source: EEOC)

  • EEOC Technical Assistance on AI and Algorithmic Fairness (Reaffirmed, March 2025): The EEOC has clarified that existing laws apply fully to the use of AI in employment. A key concept is "disparate impact," where a seemingly neutral tool disproportionately screens out individuals from a protected class (e.g., based on race, gender, or age). The EEOC guidance makes it clear that employers can be held liable for the actions of their agents, which includes third-party recruiters and staffing agencies. If a recruiter uses a biased AI tool on behalf of a company, the company itself remains liable for the discriminatory outcome. (source: EEOC)
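The disparate-impact concept above is commonly tested with the EEOC's "four-fifths rule" of thumb: if a protected group's selection rate falls below 80% of the highest group's rate, the tool warrants closer scrutiny. A minimal sketch of that arithmetic follows; the group labels and counts are hypothetical, and a real bias audit would involve far more than this single ratio:

```python
# Four-fifths (80%) rule-of-thumb check for disparate impact.
# Group labels and applicant/selection counts are hypothetical examples.

def selection_rates(counts):
    """counts: {group: (applicants, selected)} -> {group: selection rate}"""
    return {g: sel / total for g, (total, sel) in counts.items()}

def four_fifths_check(counts, threshold=0.8):
    """Return {group: (rate, passes)} where `passes` is False when the
    group's rate is below `threshold` times the highest group's rate."""
    rates = selection_rates(counts)
    top = max(rates.values())
    return {g: (rate, rate / top >= threshold) for g, rate in rates.items()}

counts = {
    "group_a": (200, 100),  # 100 of 200 selected -> 50% rate
    "group_b": (150, 45),   # 45 of 150 selected -> 30% rate
}
result = four_fifths_check(counts)
# group_b's ratio is 0.30 / 0.50 = 0.60, below 0.80, so it is flagged
```

A failing ratio is not itself proof of illegal discrimination, but it is the kind of statistical signal a bias audit is designed to surface.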

 

California State Law:

California's laws are among the most stringent in the nation, offering broader protections than their federal counterparts.

  • Fair Employment and Housing Act (FEHA): FEHA is California's primary anti-discrimination law. Like Title VII, it holds employers responsible for the discriminatory acts of their agents. The California Civil Rights Department (CRD) has signaled increased scrutiny of automated decision-making systems in hiring (source: CA.gov).

  • California Automated Decision Tool Accountability Act (CADTAA) (hypothetical, but reflecting current legislative trends, e.g., an effective date of July 1, 2025): Following the lead of regulations like New York City's Local Law 144, legal experts anticipate California will enact specific legislation (source: Nixon Peabody LLP). Such a law would likely require:

    1. Independent Bias Audits: Any AI-driven tool used for hiring or promotion must undergo a rigorous, impartial audit to check for discriminatory impact.

    2. Public Transparency: A summary of the most recent bias audit results would need to be made publicly available on the recruiter's or employer's website.

    3. Candidate Notification: Job applicants must be notified when an automated tool is being used to assess their candidacy.


Under such a framework, liability is explicitly shared. The third-party recruiter would be directly liable for failing to conduct the audit or provide notice, while the hiring company would be liable for contracting with a non-compliant agent and for the ultimate discriminatory result.


The Recruiter's Role and Shared Liability

A third-party recruiter acts as a legal "agent" for the hiring company. In the eyes of the law, their actions in the sourcing and screening process are legally intertwined with the company's. If a recruiter's proprietary AI algorithm improperly downgrades candidates who speak English with an accent (potential national origin discrimination) or those with gaps in their employment history (potentially discriminating against women or individuals with disabilities), the hiring company cannot simply claim ignorance. Both the recruiter who deployed the tool and the company that benefited from it face potential legal exposure.


Advantages of Using Third-Party Recruiters for AI Hiring Liability

Despite the shared risk, partnering with a sophisticated third-party recruiter can offer strategic advantages for a company, particularly in mitigating AI-related liability.

  1. Specialized Expertise and Compliance: Top-tier recruitment firms are now investing heavily in legal and technical expertise to navigate the AI compliance maze. They are better equipped than many in-house HR departments to vet AI vendors, understand the technicalities of bias audits, and stay current on evolving legislation like the CADTAA.

  2. Contractual Risk Transfer: The most significant advantage comes from the service agreement. A company can and should insist on specific contractual clauses with its recruitment partner. These include:

    1. Warranties and Representations: The recruiter must warrant that its AI tools comply with all federal and state laws, including requirements for bias audits and transparency.

    2. Indemnification Clauses: This is critical. An indemnification clause requires the recruiter to cover the company's legal fees, settlements, or judgments if a lawsuit arises from the recruiter's use of a non-compliant or biased AI tool. This shifts the financial risk of a legal challenge onto the party directly operating the technology.

    3. Audit and Data Rights: The contract can grant the company the right to review the recruiter's bias audit results and demand information about how the AI tool functions.

  3. Operational Insulation: While not a complete legal shield, using a recruiter creates a layer of operational distance. The recruiter assumes the direct burden of selecting, implementing, testing, and managing the AI tools. This allows the company's HR team to focus on final interviews and strategic workforce planning, relying on the recruiter's contractually guaranteed compliance.


Outsourcing recruitment in the age of AI is not outsourcing liability. However, by choosing a knowledgeable partner and fortifying the relationship with a strong, liability-conscious contract, a company can effectively mitigate its risk, leverage specialized expertise, and navigate the future of hiring with greater confidence.


Using a partner that combines a predominantly "human approach" with selective AI support may work best.

 
 
 
