Bad AI Bunny: The lawsuits against Workday and Eightfold – and what they mean for recruiting teams
- Marcus

- Feb 11
- 6 min read

The current lawsuits against Workday and Eightfold mark a sharp escalation in the debate around AI in HR. And rightly so. Although only two well-known providers of recruiting solutions are named as defendants, the implications affect every organisation that uses algorithmic systems in recruiting – especially those that have relied heavily on the technology and now bear responsibility for its outcomes.
The mere fact that the lawsuits have been allowed to proceed sends a clear and important signal: automated selection does not remove accountability. Not even when an algorithm merely prepares, pre-screens, or scores candidates. This is exactly why these cases matter for employers in the DACH region – not as a US-specific anomaly, but as an early indicator of a development that has long been taking shape in the EU AI Act, the General Data Protection Regulation (GDPR), and anti-discrimination law.
What the lawsuits have in common – and why this is not a vendor-only problem
In the Workday case, the focus is on AI-supported screening. A candidate alleges that the system systematically disadvantaged him. Not because age or origin were explicitly requested, but because so-called proxy criteria were used: educational paths, certain universities, career trajectories, or technological specialisations. Statistically, these attributes correlate strongly with protected characteristics and can therefore lead to indirect discrimination.
What really matters is the legal interpretation. A US federal court allowed the claim to proceed and classified Workday as an “agent” of the employer. This does not remove responsibility from the vendor – but it certainly does not absolve the employer either. Liability is shared, not shifted.
In the Eightfold case, the emphasis is different, but the underlying logic is similar. Here, the core issues are transparency and data protection. The class action alleges that data from third-party sources, such as LinkedIn or GitHub, was aggregated, profiles were created, and suitability scores were generated without adequately informing candidates or giving them meaningful options to correct or challenge the data. The central allegation is opaque profiling without valid consent.
For European employers, this is far from exotic. These exact questions sit at the heart of the GDPR and the broader discussion around automated decision-making in employment contexts.
Why is this particularly sensitive for employers in the DACH region?
In Europe, AI systems used in recruiting are classified as high-risk applications. The EU AI Act requires risk management, transparent information, human oversight, and evidence that systems do not produce discriminatory outcomes – a demanding task for both organisations and system providers. At the same time, national equal treatment laws and labour law safeguards apply.
The key point is simple: employers cannot hide behind their tools.
Anyone who purchases software and integrates it into selection processes is responsible for its impact – legally and reputationally. The US lawsuits merely make this more visible; they do not fundamentally change the underlying reality.
What matters now: a few focused, effective steps
In the short term, the goal is not to ban AI from recruiting. The goal is to regain control and reduce risk.
Create transparency: Candidates must be able to understand whether and where algorithmic systems are used. This applies to privacy notices as well as the tone and clarity of the candidate journey.
Safeguard human decision-making: Fully automated rejections without genuine review are an unnecessary risk. Scores are working hypotheses, not verdicts.
Hold vendors accountable: Information on training data, bias testing, and audit mechanisms is no longer a “nice to have” – it is part of due diligence.
These steps can be implemented relatively quickly without paralysing recruiting operations. They will require time and may increase the workload in the short term. That should be anticipated.
Do this now:
1. AI inventory
Create a comprehensive register of all recruiting systems and functions, including “hidden AI” that is often marketed as “recommendations”, “ranking”, or “smart filtering”. Document the following (a sketch of what such a register could look like comes after the list):
- where automated rankings, scores, or filters are applied
- which data sources are used (CVs, application forms, assessments, interview scores, third-party data)
- whether sensitive characteristics could be affected directly or indirectly (proxy risk)
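Such a register does not require special tooling – a shared table is enough to start. Purely as an illustration of the fields involved (the field names and categories here are assumptions, not a standard), it could be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class RecruitingSystemEntry:
    """One row in the AI inventory register (illustrative fields only)."""
    system: str               # vendor product or ATS module name
    function: str             # ranking, scoring, filtering, matching, recommendations
    decision_stage: str       # where in the funnel it is applied
    data_sources: list[str]   # CVs, application forms, assessments, third-party data
    proxy_risk: str           # none / low / high, with a short rationale
    human_review: bool        # is there a genuine human checkpoint after this step?

inventory = [
    RecruitingSystemEntry(
        system="ATS smart filtering",
        function="pre-screening filter",
        decision_stage="application intake",
        data_sources=["CV", "application form"],
        proxy_risk="high: weights university names and employment gaps",
        human_review=False,  # immediate remediation candidate
    ),
]
```

Even a list this simple surfaces the combinations that matter: third-party data plus scoring plus no human checkpoint is exactly the pattern the lawsuits target.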
2. Update candidate communication
If a system pre-screens or scores candidates, this must be reflected transparently in the candidate journey – clearly, understandably, and with real rights for applicants. The European regulatory context places strong emphasis on these information obligations.
3. “Human in the loop” as real control, not a fig leaf
Avoid automated rejections without review. Define points in the process where recruiters or hiring managers can plausibly review and override scores. If no one can do that, the process is effectively automated – regardless of what the marketing material claims.
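What “real control” means in system terms can be sketched in a few lines: there must be no code path that ends in a rejection without a person seeing the case. This is a simplified illustration, not any vendor’s actual API; the names and threshold are invented:

```python
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    candidate_id: str
    score: float        # model output: a working hypothesis, not a verdict
    reasons: list[str]  # the criteria behind the score, shown to the reviewer

def route_candidate(result: ScreeningResult) -> str:
    """Route candidates so that no path leads to an unreviewed rejection."""
    if result.score >= 0.7:  # invented threshold; tune and document per role
        return "advance"
    # The crucial property: low scores go to a named human reviewer with
    # the reasons attached; there is no automatic "reject" branch at all.
    return f"manual_review:{result.candidate_id}"
```

If the reviewer in practice rubber-stamps every score, the process is still effectively automated – the checkpoint has to be able to change outcomes.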
4. Put vendors under scrutiny
Request and document testing logic, bias checks, audit approaches, data provenance, explainability, and support for data subject requests. The Workday case shows that vendor liability is realistic – but it will not protect you if you deploy systems blindly.
Medium term: making recruiting governance-ready
Over a 12- to 24-month horizon, operational damage control alone is insufficient. Organisations need a robust structure for managing AI in recruitment:
- Clear guardrails defining which use cases are allowed and which are deliberately excluded.
- Interdisciplinary governance, with HR, legal, data protection, and employee representation assessing systems together rather than fixing issues sequentially.
- Systematic monitoring to detect early whether certain groups are consistently disadvantaged in the funnel.
This is not an innovation blocker. Quite the opposite. Without governance, every new AI feature becomes a potential liability case.
Start laying the groundwork now:
5. Establish AI governance in recruiting
Define permitted use cases (e.g., scheduling, text assistance) versus red lines (e.g., opaque scoring without review). The EU AI Act explicitly pushes organisations towards controlled, well-governed deployments.
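Guardrails carry more weight when they are written down explicitly rather than living in slide decks. As a hypothetical illustration of what such a policy could contain (the categories and entries are assumptions, not a template from the AI Act):

```python
# Hypothetical guardrail policy for AI in recruiting; all entries are examples.
AI_RECRUITING_POLICY = {
    "permitted": [
        "interview scheduling",
        "job-ad text assistance",
    ],
    "permitted_with_review": [
        "candidate ranking",  # only with a documented human override step
        "skill matching",
    ],
    "red_lines": [
        "opaque scoring without human review",
        "profiling from third-party data without valid consent",
        "fully automated rejections",
    ],
}

def check_use_case(use_case: str) -> str:
    """Return the governance status of a proposed use case."""
    for status, cases in AI_RECRUITING_POLICY.items():
        if use_case in cases:
            return status
    return "needs_assessment"  # default: assess before deployment

print(check_use_case("candidate ranking"))  # -> permitted_with_review
```

The default branch is the point: anything not yet assessed should be reviewed before deployment, not waved through.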
6. Bias management and monitoring
Regularly assess whether certain groups are consistently disadvantaged in the funnel. Where sensitive attributes cannot be collected, alternative approaches are needed (e.g., sampling, process indicators, qualitative quality reviews).
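A common starting point for this kind of monitoring is the “four-fifths rule” from US selection guidelines: a group’s selection rate at a given funnel stage should not fall below 80% of the highest group’s rate. A minimal sketch, assuming pass-through counts per group can be aggregated (the numbers below are invented):

```python
def adverse_impact_ratios(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """counts maps group -> (advanced, total) at one funnel stage.
    Returns each group's selection rate relative to the best group's rate."""
    rates = {g: advanced / total for g, (advanced, total) in counts.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Invented illustrative numbers: (advanced to interview, total applicants)
stage = {"group_a": (45, 100), "group_b": (28, 100)}
for group, ratio in adverse_impact_ratios(stage).items():
    if ratio < 0.8:  # the four-fifths threshold
        print(f"{group}: ratio {ratio:.2f} is below 0.8 -> investigate this stage")
```

A flagged ratio is not proof of discrimination, but it tells you exactly which stage of the funnel to investigate first.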
7. Capability building in the hiring team
Recruiters and hiring managers need to understand that a score is a statistical signal, not a judgment. Without this training, “automation bias” becomes entrenched – and very hard to defend in a dispute.
8. Sharpen requirements
The more “classic” career paths are treated as implicit minimum requirements, the higher the proxy risk. Review which criteria are genuinely job-relevant and which are simply tradition.
9. Transparent candidate journey
Provide contact options, identify responsible parties, and make rejection decisions understandable at a criteria level. In the Eightfold case, Reuters highlights the lack of correction or dispute mechanisms – exactly the kind of issue that escalates conflicts.
10. Be cautious with external data sourcing and profiling
When systems aggregate third-party data and derive scores from it, you immediately enter a zone that requires extensive justification under European law. The Eightfold lawsuit puts this combination front and centre: third-party data, opaque evaluation, and no correction options.
Implications for recruiting practice
Many risks do not stem from sophisticated models, but from weak foundations. Overloaded requirement profiles, historically grown “nice-to-haves”, and unreflective external data sourcing massively increase the likelihood of indirect discrimination.
At the same time, candidates’ willingness to escalate disputes drops noticeably when processes remain understandable. Those who know the general criteria behind decisions and have a named contact person are less likely to litigate – even after a rejection.
Transparency is therefore not only a legal strategy, but an economic one.
Conclusion
The proceedings against Workday and Eightfold are not an attack on technology. They are a reminder that recruiting responsibility cannot be automated. Organisations that use AI must be able to explain what their systems do, why they do it, and where humans intervene.
Those who act now – creating transparency, reviewing decision logic, and managing vendors critically – will not only reduce legal risk but also improve operational efficiency. Above all, they will build trust. And trust is currently the scarcest resource in recruiting.
Sources
- Reuters (21 Jan 2026): Eightfold lawsuit over “secret scoring”, incl. FCRA and California law
- Fisher Phillips (26 Jan 2026): Summary of the Eightfold allegations, including third-party data and 0–5 scoring
- NBC 7 San Diego (Feb 2026): Report on the Eightfold class action (context, affected applicants)
- Case tracking: Mobley v. Workday (Clearinghouse)
- Seyfarth Shaw (Jul 2024): “Agent” theory and vendor liability in the Workday case
- Law and the Workplace (Jun 2025): Workday case update, conditional certification
- Hunton Andrews Kurth (Nov 2024): EU AI Act implications for HR (transparency, high-risk use cases), https://www.hunton.com/insights/legal/the-impact-of-the-eu-ai-act-on-human-resources-activities
- Clifford Chance (Aug 2024): What does the EU AI Act mean for employers? (PDF)
- EU AI Act: high-risk classification overview



