From Ad-Serving to Gatekeeping: The Evolution of Algorithmic Bias

In 2026, the “Glass Ceiling” has been recoded into a “Black Box.” While the early 2020s focused on passive bias in job ads, today’s risk lies in algorithmic gatekeeping—where Large Language Models (LLMs) and automated tools act as invisible filters before a human recruiter ever sees a resume. Recent 2025 research reveals that AI doesn’t just replicate human bias; it creates complex “intersectional” discrimination that can disadvantage specific groups (e.g., Black men) while appearing to favor others, often bypassing traditional DEI audits. For the retained search industry, the mandate is clear: technology must be a tool for sourcing, but never the final arbiter of talent. Algorithmic integrity is no longer an IT checkbox—it is a core requirement of fiduciary duty in leadership acquisition.


Why Modern AI is the New Glass Ceiling in the C-Suite

When we first observed that Google showed men ads for better jobs, the concern was visibility. Today, the concern is viability. In 2026, the primary threat to diversity in the executive suite is no longer just who sees the job, but the “black box” algorithms that decide who is qualified to be seen by a human recruiter.

According to research from the Brookings Institution (2025), 88% of companies now use AI for candidate screening. However, recent Stanford research (October 2025) found that Large Language Models (LLMs) used in resume screening consistently rated male and older candidates higher than female and younger candidates, even when their qualifications were identical.

The Complexity of “Intersectional Bias”

The most alarming discovery of 2025 research is that AI discrimination is rarely a simple “men vs. women” issue. It is intersectional. A May 2025 VoxDev study revealed that GPT-based screening tools actually favored female candidates in some contexts while systematically disadvantaging Black male applicants, scoring them significantly lower than White male applicants with identical credentials.

5 Ways Technology Discriminates Today (and How to Avoid It)

| Method of Discrimination | How It Happens | Strategy to Mitigate |
| --- | --- | --- |
| Proxy Variable Bias | AI removes “Gender” but flags “Employment Gaps” (penalizing caregivers) or “Lacrosse” (a proxy for socioeconomic status/race). | Audit for Proxies: Use “Glass Box” AI where the weight of every feature (e.g., tenure, specific schools) is transparent and adjustable. |
| Linguistic/Accent Bias | AI-driven video interviews penalize non-native accents or speech patterns, reading them as “poor communication” or “low confidence.” | Human-in-the-Loop: Never use AI as a “knockout” tool for video interviews. Use it only for transcription and secondary sentiment analysis. |
| Biased Job Ad Generation | Generative AI drafts job descriptions with “aggressive” or “masculine” coded language that deters 29.3% of diverse applicants (Jobtrain 2025). | Inclusion Scanning: Run all AI-generated drafts through augmented writing tools (like Textio or TapRecruit) to neutralize coded language. |
| Historical Data Echoes | If your firm historically hired from 5 specific “feeder” companies, the AI will learn that “Success = Company X,” ignoring top talent from elsewhere. | Reweight Training Data: Intentionally oversample successful diverse hires in the training data to counteract historical skews. |
| Gamified Assessment Bias | “Culture fit” games often reward neurotypical, Western-centric problem-solving styles, excluding neurodivergent or global talent. | Reweight Training Data: Intentionally oversample successful diverse hires in the training data to counteract historical biases. |
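To make the “Reweight Training Data” strategy concrete, here is a minimal sketch of naive oversampling. All names here (`oversample`, the `source` field, the 40% target) are hypothetical illustrations, not part of any cited study or vendor tool; real reweighting would use a fairness toolkit and be validated against the applicable audit requirements.

```python
import random

def oversample(records, is_underrepresented, target_share, seed=0):
    """Duplicate records from the under-represented group until that
    group makes up at least `target_share` of the training set.

    A crude form of reweighting: the model sees more examples of
    successful diverse hires, counteracting historical skews.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    minority = [r for r in records if is_underrepresented(r)]
    if not minority:
        return list(records)  # nothing to oversample
    out = list(records)
    while sum(is_underrepresented(r) for r in out) / len(out) < target_share:
        out.append(rng.choice(minority))  # duplicate a minority example
    return out

# Hypothetical history: 8 hires from "feeder" companies, 2 from elsewhere.
train = [{"source": "feeder"}] * 8 + [{"source": "other"}] * 2
balanced = oversample(train, lambda r: r["source"] == "other", target_share=0.4)
```

In practice you would weight sample losses rather than literally duplicating rows, and you would still audit the resulting model for the proxy variables described in the first row of the table; duplication alone does not remove a proxy signal.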

The Strategic Value of the “Human-in-the-Loop”

For a retained executive search firm, your value proposition in 2026 is your ability to override the algorithm. As the EU AI Act (2025) and NYC’s Local Law 144 increasingly mandate “bias audits” and transparency, your clients are looking for partners who can navigate this legal minefield.


Thanks for reading! We welcome your comments. Have you observed algorithmic bias in action? How does it compare to human bias?


Krista Bradford

Krista Bradford is CEO of the retained executive search firm The Good Search, which is powered by Intellerati, the executive search lab and AI incubator. A former award-winning television journalist and investigative reporter, Ms. Bradford now pursues truth, justice, and great talent in the executive suite.