
AI Interviews in Recruiting: Where Efficiency Ends – and Risk Begins

  • Writer: Marcus
  • 5 days ago
  • 4 min read

Artificial intelligence has moved beyond experimentation in recruiting. Tools like CV parsing, matching algorithms, chatbots, and automated scheduling are now standard. The logical next step is the AI-conducted interview. More vendors now use algorithm-analyzed video interviews to assess candidates' language, word choice, speaking pace, facial expressions, and eye contact.


The central question, however, is not whether this technology can be used, but whether it should be expected to deliver what is often promised. Scientific evidence paints a far more differentiated picture than most product brochures suggest.



The Appeal of Automation – and Its Limits


From an organizational perspective, AI interviews attract interest. They reduce coordination effort, relieve recruiters, and promise consistent evaluations across large applicant pools. Research shows that standardized selection procedures can, in principle, achieve higher reliability than unstructured interviews—particularly when interviewers lack sufficient training.

AI is seen as a way to reduce human bias, but algorithms aren't inherently objective. They reflect biases from their data and design. If the foundation is biased, bias isn't removed; it's amplified.



What Research Says About Bias in AI Interviews


Multiple interdisciplinary studies from computer science, psychology, and work and organizational research arrive at a consistent conclusion:

Algorithmic systems in recruiting are not bias-free; they are particularly vulnerable to indirect discrimination.

Many AI tools evaluate features such as voice pitch, speaking rate, vocabulary, and facial expressions. There is no strong evidence linking these features to job performance, yet they often correlate with gender, age, culture, or neurodiversity.


Research findings include, among others:

  • Language-based models disadvantage individuals with accents, dialects, or non-linear speech patterns.

  • Facial expression and eye-contact analysis are highly culture-dependent and lead to systematically poorer evaluations of non-Western candidates.

  • Age discrimination can arise indirectly because speaking tempo, word choice, and pause behavior are strongly associated with age.


The frequently cited case involving Workday—where applicants allege age-discriminatory effects of algorithmic pre-selection—is therefore not an outlier, but symptomatic of a structural risk inherent in algorithmic recruiting systems.



Validity: The Uncomfortable Gap


Another critical issue is the lack of empirical validation for many AI interview solutions. While traditional assessment methods (e.g., structured interviews, cognitive ability tests) have been studied for decades with respect to predictive validity, the evidence base for AI-driven interviews remains thin.


Independent studies have not yet confirmed that AI-based video interviews predict subsequent job performance, let alone that they do so more accurately than well-structured, human-led interviews. Although many vendors report positive internal correlations, these results are often neither transparent nor reproducible. A selection procedure whose predictive validity cannot be demonstrated is hard to justify professionally, no matter how efficiently it scales.



Candidate Experience: The Overlooked Factor


Beyond fairness and validity, the candidate perspective is receiving increasing attention. Empirical research on applicants' acceptance of AI interviews paints a mixed picture.

Some candidates appreciate the flexibility and standardization. At the same time, many perceive the process as opaque and alienating. Perceptions become particularly negative when it is unclear:


  • whether AI is involved at all,

  • which data are being evaluated,

  • how strongly the results influence the final decision.


Research on organizational justice indicates that perceived fairness depends less on the outcome than on the transparency of the process. A lack of clarity harms the candidate experience and long-term employer attractiveness, a cost that rarely appears in business cases.



Legal and Regulatory Implications


AI interviews must comply with evolving regulatory standards. Data protection and anti-discrimination laws apply equally whether decisions are made by humans or algorithms. The EU AI Act now imposes rules on so-called high-risk systems, including AI systems used for recruitment.


Key requirements include, among others:

  • Traceability of decision logic

  • Documentation of training data and model assumptions

  • Possibilities for human review and correction


Organizations that deploy AI interviews as black-box systems, therefore, expose themselves not only to reputational risk but also to tangible compliance risks.



What Responsible Use Would Require


Research does not fundamentally oppose technology. However, it clearly shows that organizations can only defend AI interviews under strict conditions, including:


  • Clear purpose limitation: AI may support decisions, not make them autonomously.

  • Empirical validation: Evidence that the evaluated features are actually job-relevant.

  • Regular bias audits: Monitoring outcomes by age, gender, origin, and other protected characteristics.

  • Transparent communication: Clear explanations to candidates about how the process works and what role it plays.

  • Human decision authority: Final decisions must remain subject to review and justification.
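A common starting point for such bias audits is the "four-fifths rule" from US selection guidelines: the selection rate of any protected group should be at least 80% of the rate of the most favored group. A minimal sketch in Python, using invented counts for illustration:

```python
# Hypothetical audit data: applicants and AI-passed candidates per age band.
outcomes = {
    "under_40": {"applied": 500, "passed": 150},
    "40_plus": {"applied": 300, "passed": 54},
}

# Selection rate per group.
rates = {group: d["passed"] / d["applied"] for group, d in outcomes.items()}
best = max(rates.values())

# Four-fifths rule: flag any group whose selection rate falls below
# 80% of the most favored group's rate.
for group, rate in rates.items():
    ratio = rate / best
    status = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {status}")
```

In this invented example the 40-plus group is selected at 18% versus 30% for the younger group, a ratio of 0.6, which would trigger a closer review. The rule is a screening heuristic, not proof of discrimination, but it makes monitoring by age, gender, and origin auditable.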


These requirements are demanding, which often leads to inconsistent implementation in practice.



Technically Possible Does Not Mean Professionally Defensible


AI interviews are powerful, but not neutral. While efficient, they risk introducing new forms of opacity and discrimination. Research agrees:

Neither fairness nor validity is sufficiently established to justify AI interviews as a full replacement for human selection procedures.

Recruiters face a difficult trade-off: prioritizing speed over quality can lead to decisions that are poor in professional, legal, and reputational terms. AI can help, but the responsibility for decisions remains human.


Or put differently:

If no one can explain why a person was hired or rejected, that is not progress—it is a step backward with a better interface.


©2020 Marcus Fischer