7 Ways AI Reinforces Inequality Against Women

From bias to deepfakes, AI harms women in structural ways that mirror existing inequalities. Technology built to enhance human potential continues to reinforce gendered power and exclusion.

In mid-October 2025, Sam Altman announced ChatGPT’s new “erotica” features, set to launch in December. In July, Elon Musk’s xAI had already released “Ani”, an anime-inspired AI companion designed for intimate interactions. Welcome to the age of AI-generated intimacy, and we are only getting started.

While tech leaders frame AI companions as harmless entertainment or even solutions to loneliness, feminist scholars and critics expose the deeper tensions, biases, and risks underlying these claims: the companions are often designed to be female or sexualized, mirror structural inequalities, commodify intimacy, and reinforce harmful gender stereotypes. This reveals a deeper systemic issue: AI isn’t being built with women’s interests, safety, or equality in mind. It’s being built around them, often in ways that objectify, exclude, or actively harm them.

Perhaps that’s why women consistently report higher rates of AI anxiety – feelings of apprehension or fear arising from the rapid development of AI technologies – than men.1 They’re less likely to use AI tools and hold more negative attitudes toward the technology. What might be perceived as technophobia is probably better described as pattern recognition.

While I initially set out to write about AI erotica specifically, drafting this piece revealed that the phenomenon can’t be understood in isolation. So before we examine AI’s invasion of intimacy, we need to map the terrain and understand how AI harms women across society, in ways that go far beyond algorithmic bias. Consider this article the foundation for that deeper conversation:

The rise of AI erotica is symptomatic of how AI systems are being designed, deployed, and monetized in ways that systematically disadvantage women. 

From bias in hiring algorithms to deepfake sexual abuse, from exclusion in AI development to erosion of bodily autonomy, here are seven ways in which AI systematically harms women:

1. Algorithmic Bias

One of the defining sentences of the AI revolution is: “Artificial intelligence is not neutral”. And we shouldn’t stop reminding ourselves of that. Our world is built on data, and, as Caroline Criado Perez powerfully showed in her book Invisible Women, that data reflects a male-centric view of reality. From urban planning to medical research, the default human has always been male. Now we are encoding this same fundamental bias into the algorithms that will shape our future.

Speech recognition systems perform worse on female voices than on male voices because they were trained predominantly on male speech. Medical AI systems may under-diagnose or misdiagnose women because clinical trials and medical data have historically underrepresented female physiology.2

AI hiring algorithms, trained on historical data, often perpetuate existing gender biases, leading to fewer job opportunities for women.3 These systems have been shown to penalise female candidates, as in Amazon’s scrapped AI recruiting tool, because, guess what, they were trained on résumés from a male-dominated workforce.4
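
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python – simulated data and scikit-learn, not any real company’s system – showing how a model trained on biased historical hiring decisions learns to penalise a gender-correlated proxy feature even when gender itself is excluded from the inputs:

```python
# A toy illustration (hypothetical data, not any real company's system) of how
# a hiring model trained on biased historical decisions inherits that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features: years of experience (a legitimate signal) and a binary proxy
# that correlates with gender, e.g. "attended a women's college" on a résumé.
experience = rng.normal(5, 2, n)
proxy = rng.binomial(1, 0.5, n)

# Historical labels: past human reviewers rewarded experience but also
# systematically down-rated candidates with the proxy flag. That prejudice
# is now baked into the training data, which the model never questions.
logits = 0.8 * (experience - 5) - 1.5 * proxy
hired = rng.binomial(1, 1 / (1 + np.exp(-logits)))

# Train on the biased history -- note that "gender" is not even a feature.
X = np.column_stack([experience, proxy])
model = LogisticRegression().fit(X, hired)

print("learned weights [experience, proxy]:", model.coef_[0])
# The proxy weight comes out strongly negative: the model has learned the
# historical discrimination and will now reproduce it at scale.
```

The point is structural: the model faithfully reproduces whatever discrimination is encoded in its training labels, and at far greater scale than any individual reviewer.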

A staggering 44% of AI systems exhibit gender bias, a direct consequence of being trained on skewed data.5 AI bias against women is a significant problem that continues to negatively affect women’s opportunities, reputation, and treatment in society. Despite some awareness and filtering efforts by AI companies, studies show that AI bias against women persists and may even be exacerbated by oversimplified “bias filters” that fail to address deeper structural issues.6 Unfortunately, progress toward closing the AI gender gap is insufficient: the World Economic Forum estimates that, at current rates, it will take more than 100 years to reach parity in many sectors influenced by AI.

Attempts to exclude DEI principles from AI development – for example, through the Executive Order “Preventing Woke AI” issued by the White House in July 2025 – threaten even this limited progress and could exacerbate discrimination against women and marginalized groups rather than reduce it.

2. The AI Gender Gap

One of the most fundamental reasons for the prevalence of algorithmic bias against women is that women remain underrepresented among AI developers and decision-makers. The gender gap in AI talent remains stark: women hold only about 22% of AI roles globally, and even fewer senior positions.7 Male-dominated development teams create systems that inherently overlook female perspectives and needs. When those building the technology are not representative of its users, the resulting products often perform poorly on issues unique to women, such as healthcare diagnostics or safety solutions (Chong, 2025).

3. Pornographic Deepfakes

According to the European Parliamentary Research Service (EPRS), pornographic material accounts for around 98% of deepfakes.8 An overwhelming 99% of all deepfake pornographic content targets women, with their faces and bodies being inserted into explicit images and videos without consent.9 This form of digital violence, known as non-consensual intimate deepfakes (NCID), inflicts profound humiliation, anxiety, and trauma. It is used for blackmail, harassment, and to systematically damage women’s personal and professional reputations.

Female politicians, for example, are disproportionately targeted: one study found that nearly one in six women in the U.S. Congress has been a victim of such non-consensual AI-generated pornography, used to damage their reputations, silence their voices, and undermine their political careers.10 This is a thriving online industry where women’s images are commodified and traded, eroding their bodily autonomy and dignity.

With more powerful and user-friendly AI tools to create videos and images becoming widely available, this increasingly affects young women and teenagers. A survey of 1,200 young people aged 13–20 found that 1 in 8 knew someone who had been targeted with AI-generated nude images.11

→ Related reading: AI as Therapy & Companion – how emotional AI challenges our understanding of connection.

4. The Rise of Erotic AI Companions

Mainstream chatbots, including ChatGPT and Grok, are increasingly offering age-verified “erotica” features, allowing users to generate conversational and visual adult content on demand. These “AI companions” often reinforce harmful gendered stereotypes, portraying virtual women as passive, sexualised, and perpetually compliant. Whilst these features are framed as “treating adults like adults”, the commercial drive to attract paying users risks sidelining critical debates about consent, privacy, and the potential for these tools to normalise the objectification of women and escalate gendered power imbalances in digital spaces.12

We will discuss the long-term implications of sexualised AI chatbots on relationship building in a separate article, noting risks such as emotional dependency, reduced real-world social interaction and unrealistic expectations of intimacy. These AI systems simulate affection without the mutual growth, challenge and accountability that are essential to healthy human relationships. If over-relied upon, they can worsen loneliness and impair social skills.

→ Related reading: AI Inequality: When Intelligence Isn’t Shared – on how unequal access to AI power shapes global opportunity.

5. Gendered Job Displacement

Women are disproportionately affected by AI-driven job loss, as they are overrepresented in administrative, retail, and care roles – the very sectors most susceptible to automation.13 The International Labour Organisation reports that women are nearly three times more likely than men to work in jobs with high exposure to AI automation. In high-income countries, 9.6% of female employment falls into the highest-risk category, compared with just 3.5% of male employment (Chong, 2025). These trends reveal one of the most visible ways AI harms women: by eroding economic stability and deepening pre-existing inequalities.

6. Gender Stereotyping in Generative AI

A 2024 UNESCO study revealed alarming evidence of large language models producing content with regressive gender stereotypes, alongside homophobia and racial biases. When prompted to create stories or images, these systems often default to outdated and harmful tropes, reinforcing the very inequalities that society is struggling to overcome. The findings show that in generative AI outputs, women are four times more likely than men to be depicted in domestic roles, while men are associated with leadership, ambition, and adventure. Open-source models such as Llama 2 and GPT-2 also displayed higher rates of homophobic and racial bias, portraying gay people and ethnic minorities in degrading or stereotypical ways.14
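
For readers curious how such associations are measured, here is an illustrative sketch of a simple word-association audit – a deliberate simplification, not UNESCO’s actual methodology – that counts how often gendered pronouns co-occur with “domestic” versus “career” terms in model outputs (stubbed here with two example sentences in place of real generations):

```python
# An illustrative word-association audit -- NOT UNESCO's actual methodology.
# It counts co-occurrences of gendered pronouns with "domestic" vs. "career"
# vocabulary, sentence by sentence, across a set of generated texts.
import re
from collections import Counter

DOMESTIC = {"home", "family", "children", "cooking", "kitchen"}
CAREER = {"career", "business", "executive", "salary", "leader"}
FEMALE = {"she", "her", "woman"}
MALE = {"he", "his", "him", "man"}

def audit(texts):
    counts = Counter()
    for text in texts:
        for sentence in re.split(r"[.!?]", text):
            words = set(re.findall(r"[a-z']+", sentence.lower()))
            for gender, markers in (("female", FEMALE), ("male", MALE)):
                if words & markers:
                    counts[(gender, "domestic")] += len(words & DOMESTIC)
                    counts[(gender, "career")] += len(words & CAREER)
    return counts

# Stand-in for real model generations collected from story prompts.
samples = [
    "She stayed home with the children, cooking for the family.",
    "He built his career as a business executive and a leader.",
]
print(audit(samples))
# A large-scale audit would generate thousands of completions per prompt
# template and compare the domestic/career ratios across genders.
```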

7. The Persistent Digital Divide

Women face systemic barriers to accessing and using AI tools. In developing countries, only 20% of women have internet access, creating a profound digital divide (Chong, 2025). Even where access is available, women often have fewer opportunities for digital literacy and AI-specific training. Recent data from Aldasoro et al. (2024) reveal a notable “gender gap” in the use of generative AI: 50% of men have used it, compared with just 37% of women. The authors suggest that generative AI could widen economic inequality if adopted unevenly.15 This disparity in usage and skills further limits women’s ability to participate in, and shape, the future of AI.

Reimagining a Feminist AI Future

The harms outlined in this article are symptoms of a deeper structural pattern in which technology reflects and amplifies existing societal inequalities, and they are part of the larger struggle over women’s representation. Every dataset, every design decision, every regulation is a political choice about whose lives are protected and whose are exposed to risk.

The technology that was meant to enhance human potential is being built upon biased foundations that perpetuate exclusion, objectification, and economic precarity for women.

Addressing this is not simply a matter of “fixing the data” or adding fairness filters. It requires a paradigm shift in how we design, govern, and imagine technology itself, one that centres women’s experiences, values, and safety from the very beginning. At present, however, the trajectory points in the opposite direction. The removal of DEI principles from AI policy frameworks signals a regression, not progress.

Changing course will require confronting the structures of power that built this system, not politely asking for space within it.

Feminist design and governance must challenge the logic of exclusion itself, not adapt to it. Recognising how AI harms women is the first step toward designing technologies that serve peace and equality, rather than reproducing injustice.

There is, however, a glimmer of hope. While women remain underrepresented in AI development, they are increasingly taking space in the movement for ethical AI – from Timnit Gebru, one of the most influential voices in ethical artificial intelligence, pioneering research into algorithmic bias and its societal harms, to Karen Hao’s bestselling investigation of OpenAI, Empire of AI, and Emily Bender and Alex Hanna exposing the AI con. Tanja Kubes’ Feminist AI Framework (FAIF) and companies like FemAI, founded by Alexandra Wudel, exemplify this shift. The critical perspective so often missing from technology is now being championed by the very people it marginalises. And networks such as Women in AI Ethics and The Feminist AI are not only demonstrating what inclusive, justice-oriented technology can look like in practice, they are collectively building power to reshape it.

References

  1. Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024). The gen AI gender gap. Economics Letters, 241, 111814. https://doi.org/10.1016/j.econlet.2024.111814
  2. Perez, C. C. (2019). Invisible Women: Exposing Data Bias in a World Designed for Men. ABRAMS.
  3. Swift, J. (2024, July 22). Algorithmic Bias in Job Hiring – Gender Policy Report. Gender Policy Report. https://genderpolicyreport.umn.edu/algorithmic-bias-in-job-hiring/
  4. Dastin, J. (2018, October 11). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
  5. Chong, S. T. (2025, May 25). The AI Gender Trap: Why Women Face Triple the Automation Risk in the Digital Age. UNU Campus Computing Centre. https://c3.unu.edu/blog/the-ai-gender-trap-why-women-face-triple-the-automation-risk-in-the-digital-age
  6. Stanford Report. (2025). Researchers uncover AI bias against older working women. Stanford University. https://news.stanford.edu/stories/2025/10/ai-llms-age-bias-older-working-women-research
  7. Whiting, K. (2025, June 25). AMNC25: What to know about AI and the gender gap. World Economic Forum. https://www.weforum.org/stories/2025/06/amnc25-what-to-know-about-ai-and-the-gender-gap/
  8. European Parliamentary Research Service (EPRS). (2025). Briefing: Children and deepfakes. https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/775855/EPRS_BRI(2025)775855_EN.pdf
  9. Security Hero. (2023). 2023 State Of Deepfakes: Realities, Threats, And Impact. Security Hero. https://www.securityhero.io/state-of-deepfakes/
  10. Li, E. R. L., Shultz, B., & Jankowicz, N. (2024, December 11). Deepfake Pornography Goes to Washington: Measuring the Prevalence of AI-Generated Non-Consensual Intimate Imagery Targeting Congress. The American Sunlight Project. https://www.americansunlight.org/updates/deepfake-pornography-targeting-members-of-congress
  11. Goharian, A., Stroebel, M., Fitz, S., Gudger, S., Jean-Baptiste, A., & Toomey, P. (2025, March 9). Deepfake Nudes & Young People: Navigating a New Frontier in Technology-facilitated Nonconsensual Sexual Abuse and Exploitation. Thorn. https://www.thorn.org/research/library/deepfake-nudes-and-young-people/
  12. Rogers, R. (2025, October 23). ChatGPT’s Horny Era Could Be Its Stickiest Yet. WIRED. https://www.wired.com/story/chatgpt-horny-era/
  13. Collett, C., Neff, G., & Gouvea Gomes, L. (2022). The Effects of AI on the Working Lives of Women. https://www.oecd.org/content/dam/oecd/en/publications/reports/2022/03/the-effects-of-ai-on-the-working-lives-of-women_1b627535/14e9b92c-en.pdf
  14. UNESCO. (2024). Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes. Unesco.org. https://www.unesco.org/en/articles/generative-ai-unesco-study-reveals-alarming-evidence-regressive-gender-stereotypes
  15. Aldasoro, I., Armantier, O., Doerr, S., Gambacorta, L., & Oliviero, T. (2024). The gen AI gender gap. Economics Letters, 241, 111814. https://doi.org/10.1016/j.econlet.2024.111814

Alissa Chmiel

Alissa Chmiel is the founder of Digital Peace and a PhD candidate researching cognitive resilience in the digital age. In her writing for Digital Peace, she explores the complex intersections of technology, society, democracy, and peace, through a gender-aware and power-sensitive lens. Her work combines critical reflection with a deep curiosity about what it means to remain human in an increasingly digital world.
