Understanding Bias in AI Job Screening

Explore top LinkedIn content from expert professionals.

  • View profile for Kari Naimon

    AI Evangelist | Strategic AI Advisor | Global Keynote Speaker | Helping teams around the world prepare for an AI-powered future.

    6,167 followers

    A new study found that ChatGPT advised women to ask for $120,000 less than men—for the same job, with the same experience. Let that sink in. This isn’t about a rogue chatbot. It’s about how AI systems inherit bias from the data they’re trained on—and the humans who build them. The models don’t magically become neutral. They reflect what already exists. We cannot fully remove bias from AI. We can’t ask a system trained on decades of inequity to spit out fairness. But we can design for it. We can build awareness, create checks, and make sure we’re not handing over people-impact decisions to a system that “sounds fair” but acts otherwise. This is the heart of Elevate, Not Eliminate. AI should support better, more equitable decision-making. But the responsibility still sits with us. Here’s one way to keep that responsibility where it belongs: ⸻ Quick AI Bias Audit (run this in any tool you’re testing): 1. Write two prompts that are exactly the same. Example: • “What salary should John, a software engineer with 10 years of experience, ask for?” • “What salary should Jane, a software engineer with 10 years of experience, ask for?” 2. Change just one detail—name, gender, race, age, etc. 3. Compare the results. 4. Ask the AI to explain its reasoning. 5. Document and repeat across job types, levels, and identities. Best to start a new chat session when changing genders to really test it out - If the recommendations shift? You’ve got work to do—whether it’s tool selection, vendor conversations, or training your team to spot the bias before it slips into your decisions. AI can absolutely help us do better. But only if we treat it like a tool—not a truth-teller. Article link: https://lnkd.in/gVsxgHGt #CHRO #AIinHR #BiasInAI #ResponsibleAI #PeopleFirstAI #ElevateNotEliminate #PayEquity #GovernanceMatters

  • View profile for Albert Chan

    Meta Director & Head of Sales | X-Google | X-P&G | Board Advisor | Instructor | Keynote Speaker | Author

    8,688 followers

    🚨 AI's Bias in Recruitment 🚨 Bloomberg's investigation reveals OpenAI's GPT tool, popular among recruiters for screening job candidates, shows biases against names associated with Black Americans, particularly women, in finance and tech roles. This finding challenges the notion that AI in hiring eliminates human bias, indicating instead that it might reinforce existing prejudices. With AI being used by nearly two-thirds of HR professionals to filter candidates, the promise of unbiased hiring is under scrutiny. The analysis highlighted a bias favoring Asian women for financial analyst positions over Black men, raising concerns about automated discrimination. OpenAI cautions against using GPT for critical hiring decisions due to potential biases, which may not reflect its models' intended use. The incident underscores the need for immediate action to address AI's inherent biases and suggests anonymizing resumes before AI review as a potential mitigation strategy. Let's discuss: How can we ensure AI aids in fair hiring? #AI #hiring #bias

  • View profile for Inna Tsirlin, PhD

    UX Research Leader | Ex-Google & Apple | Quant + Qual UXR • Strategy • Scaling | Building teams and user measurement and insight programs

    13,759 followers

    Would you trust AI to give you salary negotiation advice?  You might want to think twice if you're a woman. A new paper from a group of European researchers reveals a troubling pattern: when asked for salary negotiation advice, LLMs regurgitate societal bias and suggest lower salary targets for women than for men when the job, experience, and performance are exactly the same. Here’s what the researchers did: ➡️ Created identical user personas (e.g., software engineer, same resume, same performance), with the only difference in the prompt being gender or ethnicity. ➡️ Asked the LLMs to role-play as negotiation coaches ➡️ Measured the advice given across many runs 👉🏻What did they find: Models consistently recommended lower salaries to women reflecting and reinforcing real-world wage gaps. Moreover, the bias compounded when several demographic factors were combined in the personas. The most pronounced salary recommendation differences were between “Male Asian expatriate” and “Female Hispanic refugee”.  👉🏻Why this matters: LLMs are used by many for coaching and career advice and regurgitated bias will influence real-world decisions that impact people's lives. With current AI context memory for personalization, even without specifying your gender or ethnicity, LLMs might already know them and apply them to new prompts. ❓How can we change this? Post-training can alleviate some of the more obvious manifestation of the bias ingested with pre-training, however, it will keep showing up in more indirect ways like salary advice and occupation choices stories (see my post on this: https://lnkd.in/gep-Nmpg). Having better pre-training data would be the optimal solution, but that is an extremely hard problem to solve. 👇Any other ideas or similar biases you noticed? Check out the full paper here: https://lnkd.in/gTjFbNYP __________ For more data stories and discussions on AI, UX research, data science and UX careers follow me here: Inna Tsirlin, PhD #ai #responsibleai #ux #uxresearch #datastories

  • View profile for Stephanie Espy
    Stephanie Espy Stephanie Espy is an Influencer

    MathSP Founder and CEO | STEM Gems Author, Executive Director, and Speaker | #1 LinkedIn Top Voice in Education | Keynote Speaker | #GiveGirlsRoleModels

    158,315 followers

    Modern misogyny: AI advises women to seek lower salaries than men 👩🏾💻 “In what might be proof that AI chatbots reinforce real-world discrimination, a new study has found that large language models such as ChatGPT consistently tell women to ask for lower salaries than men. This is happening even when both women and men have identical qualifications, and the chatbots also advise male applicants to ask for significantly higher pay. For the study, co-authored by Ivan Yamshchikov, a professor of AI and robotics at the Technical University of Würzburg-Schweinfurt (THWS) in Germany, five popular LLMs, including ChatGPT, were tested. The researchers prompted each model with user profiles that differed by gender only but included similar education, experience, and job role. The models were then asked to suggest a target salary for an upcoming negotiation. For instance, ChatGPT’s o3 model suggested that a female job applicant requested a salary of $280,000. The same prompt for a male applicant resulted in a suggestion to ask for a salary of $400,000. The difference is huge: $120,000 a year. The pay gaps vary between industries and are most obvious in law and medicine, followed by business administration and engineering. Only in social sciences do the models offer similar advice for men and women. Other AI chatbots such as Claude (Anthropic), Llama (Meta), Mixtral (Mistral AI), and Qwen (Alibaba Cloud) were tested for biases. Researchers also checked other areas like career choices, goal-setting, and behavioral tips. Alas, the models still consistently offered different responses based on the user’s gender, even with identical qualifications and prompts. The study points out, AI systems are subject to the same biases as the data used to train them. Previous studies have also demonstrated that the bots reinforce systemic biases.” Read more 👉 https://lnkd.in/esnwnkGX #WomenInSTEM #GirlsInSTEM #STEMGems #GiveGirlsRoleModels

Explore categories