
Where Is Bias Most Likely to Hurt Women and How Can We Fix It in the Age of AI?


By Mandie Beitner

 

Artificial intelligence is transforming how we hire, heal, learn, and connect.


But even as AI gets smarter, one stubborn truth remains: bias still exists, and it continues to disadvantage women.


During a recent LinkedIn Live hosted by Claire Roberts (Full Fathom Five) with Karen Blake, Zahra Shah, Birgit Neu and Jacqui Barker, we explored where AI bias hits women hardest, and what can be done to change it.


The question isn’t if bias in AI exists. It’s how we confront it, where it causes the most harm, and how we build better systems from the start.

  

  1. Hiring and Recruitment Systems

 

AI tools often learn from decades of biased hiring data. One example: a now-famous Amazon model that penalized résumés containing the word “women’s.”


Impact: Qualified women can be excluded before human review ever happens.

Fix: Train models on diverse datasets, conduct bias audits, and maintain human oversight. Frameworks like IBM’s AI Fairness 360 and Google’s PAIR offer practical tools to detect and correct bias.
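For readers who work hands-on with these systems, here is a minimal sketch (in Python) of one piece of what a bias audit can look like: comparing selection rates by gender for a screening model and checking the disparate impact ratio against the widely used "four-fifths rule". The data, column names, and threshold are illustrative assumptions, not a real recruitment system; toolkits such as IBM’s AI Fairness 360 package this metric and many others so teams don’t have to build them from scratch.

import pandas as pd

# Hypothetical audit sample: one row per applicant, with the model's
# screening decision (1 = advanced to human review, 0 = screened out).
applicants = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M", "F", "M"],
    "selected": [0,   1,   1,   1,   0,   1,   0,   1,   1,   1],
})

# Selection rate per group: the share of each group the model advances.
rates = applicants.groupby("gender")["selected"].mean()
print(rates)

# Disparate impact ratio: women's selection rate divided by men's.
# The four-fifths rule treats anything below 0.8 as a red flag for review.
ratio = rates["F"] / rates["M"]
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact against women - escalate for human review.")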

  2. Performance Reviews and Promotions

 

AI performance tools often misinterpret patterns of engagement: for example, employees who take maternity leave or work flexibly can be tagged as “less committed.”


Impact: Algorithms can reinforce old stereotypes about “ideal workers.”

Fix: Combine data insights with human judgment and build context-aware evaluation systems that account for different ways of working.

  3. Healthcare Algorithms

 

Medical AI systems often perform worse for women, especially in diagnosing heart disease, chronic pain, and autoimmune disorders, because much of their training data comes from male patients.


Impact: Misdiagnosis and unequal treatment outcomes, especially for women of colour.

Fix: Demand gender-balanced health datasets, inclusive clinical trials, and transparent model testing across diverse populations.
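As a small illustration of what transparent testing across populations can mean in practice, the Python sketch below scores a hypothetical diagnostic model separately for women and men and compares sensitivity (how often true cases are caught) between the two groups. The patient data and predictions are made up for illustration; the point is that a single overall accuracy number can hide exactly the gap described above.

from sklearn.metrics import recall_score
import pandas as pd

# Hypothetical evaluation set: true diagnoses and model predictions,
# tagged with patient sex so performance can be reported per group.
results = pd.DataFrame({
    "sex":           ["F", "F", "F", "F", "M", "M", "M", "M"],
    "has_condition": [1,   1,   1,   0,   1,   1,   0,   0],
    "predicted":     [1,   0,   0,   0,   1,   1,   0,   0],
})

# Sensitivity (recall) per group: of the patients who truly have the
# condition, how many does the model catch? Large gaps between groups
# are what gender-balanced test sets are meant to surface.
for sex, group in results.groupby("sex"):
    sensitivity = recall_score(group["has_condition"], group["predicted"])
    print(f"Sensitivity for {sex}: {sensitivity:.2f}")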

  4. Online Safety and Content Moderation

 

AI moderation tools frequently miss harassment directed at women, particularly in non-English languages or cultural contexts.


Impact: Platforms become unsafe, silencing women’s participation and expression.

Fix: Include culturally diverse moderation teams and retrain AI on global, multilingual data to detect nuanced abuse.

  5. The Ethics of Sexual Violence Data

 

Research by Dr. Sarah Wyer has shown that some AI models are trained on datasets containing large volumes of sexual violence content, often without consent or ethical oversight.


Impact: Normalising harm and retraumatising users who interact with these systems.

Fix: Create global frameworks for ethical data sourcing, require transparency, and build consent-based review protocols.

  6. Representation and Inclusion

 

Professor Sue Black’s research on gender diversity in tech shows that inclusive teams build better, safer, and more effective technology.


Her work on digital inclusion, especially around women entering tech later in life, proves that diversity isn’t just good ethics; it’s good engineering.


When women’s experiences are missing from AI design, bias isn’t just repeated - it’s scaled.


How We Start Changing It

 

  • Embed ethics early. Don’t wait for bias to appear; design against it.

  • Empower diverse teams. Representation at every level makes systems fairer.

  • Audit continuously. Fairness isn’t static. Keep testing and evolving.

  • Reform data practices. Push for inclusive, transparent, intersectional data.

  • Centre women’s voices. Lived experience is data. Bring it to the table.


A Personal Reflection


Bias in AI isn’t inevitable, it’s inherited. And that means we can unlearn it.

 

If we want systems that empower rather than exclude, we have to start with intent: ethical design, inclusive data, and diverse teams who see what others might miss.


Conversations like this, featuring leaders such as Karen Blake, Birgit Neu, Zahra Shah, Claire Roberts and Jacqui Barker, and research done by Professor Sue Black, remind me that every one of us plays a role in what AI becomes next.


We can’t afford to wait for fairness to emerge on its own. It never has.


Let’s build it in through our data, our hiring, our leadership, and the way we design technology itself.


Inclusion shouldn’t be an afterthought - it should be our default setting.



If you work in or around AI:


→ Question your data.

→ Challenge your assumptions.

→ Invite more voices in.



Because the future of AI won’t be written by algorithms.

It will be written by us.



What’s your perspective?

Where do you see bias showing up most in AI, and what’s one change you’d like to see in 2025?


  

 
 
 



