
AI AND GENDER: TRANSFORMATIVE OPPORTUNITY OR NEW THREAT TO EQUITY IN THE WORKPLACE?


The rapid expansion of artificial intelligence (AI) is reshaping labour markets worldwide, yet its implications for gender equality remain deeply contested. This essay offers a critical and intersectional analysis of AI as a socio-technical system embedded in existing structures of power, inequality, and labour market segmentation. Drawing on recent evidence from the International Labour Organization, UNESCO, the OECD, and feminist scholarship, the article demonstrates that AI does not affect workers symmetrically. Women are disproportionately exposed to automation risks in feminised occupations, experience deteriorating job quality through algorithmic management and workplace surveillance, face systematic bias in opaque decision-making systems, and remain underrepresented in STEM and AI development. These dynamics are further compounded by the gendered digital divide, particularly in low- and middle-income countries, where limited access to connectivity and digital skills constrains women’s ability to benefit from AI-driven transformations.


At the same time, the article recognises that AI can expand women’s economic agency through productivity gains, new forms of entrepreneurship, and the emergence of AI-adjacent roles. However, such opportunities are shown to be conditional upon inclusive digital infrastructure, access to skills and finance, supportive care policies, and robust labour protections. The analysis argues that the central challenge posed by AI is therefore political rather than technological. Without deliberate intervention, AI is likely to amplify existing inequalities, making them faster, less visible, and harder to contest. The article concludes by proposing a feminist governance framework for AI that emphasises intersectional algorithmic auditing, democratic participation in AI governance, just transition policies for high-risk occupations, and the redistribution of care as a prerequisite for digital inclusion. Through this lens, AI emerges not as an inevitable force, but as a site of collective choice with significant implications for gender equality and social justice in the future of work.


Keywords: AI and gender; feminist AI governance; future of work; algorithmic bias; job quality; digital divide; women in STEM; care economy; intersectionality; Global South; just transition


0.- STARTING POINT.

The rapid expansion of artificial intelligence (AI) is transforming the world of work, but its effects are neither neutral nor equally distributed. Drawing on recent International Labour Organization (ILO) research, studies on algorithmic bias, evidence on STEM gender gaps, and cultural analyses of language model behaviour, this article offers a critical and intersectional reading of AI. It argues that AI not only reproduces existing inequalities but amplifies them through opaque systems, unequal digital infrastructures, and market dynamics that concentrate technological power. Without feminist and multilateral governance, the promise of AI as a democratising force in the labour market will remain unrealised. The article concludes by proposing a policy framework aimed at transforming AI into an instrument for social justice.



1.- INTRODUCTION: AI AS A SITE OF SOCIAL CONTESTATION.

Public debate on AI is dominated by utopian or dystopian narratives that dilute the complexity of the phenomenon. As Verick (2025) warns, discussions frequently revolve around job creation or destruction, while overlooking crucial dimensions such as job quality, power dynamics in the workplace, and the reproduction of structural inequalities. 


AI does not emerge in a vacuum. It operates within a system in which women already face entrenched inequalities: occupational segregation, the digital divide, biased training data, the unequal burden of care work and exclusion from STEM fields. From a critical feminist perspective, the central question is not whether AI will create opportunities, but who has the power to define what counts as an opportunity, who bears the costs, and how power is redistributed through algorithmic management and technological infrastructures.


2.- UNEQUAL EFFECTS: HOW AI TRANSFORMS WORK ASYMMETRICALLY.


2.1. ILO evidence: global patterns of female overexposure.

Recent evidence from the ILO working paper Generative AI and Jobs: A Refined Global Index of Occupational Exposure (Gmyrek et al., 2025) confirms that the impact of generative AI on employment is profoundly gendered. The study constructs an “exposure gradient” with four levels, distinguishing between occupations where generative AI can automate only a limited set of tasks and those where a large share of core tasks is potentially performable by AI.

Globally, around one in four jobs falls into categories with significant exposure to generative AI, but this exposure is not evenly distributed between women and men. Across almost all world regions, women make up the largest share of employment in Gradient 4, the group of occupations most exposed to potential task automation, particularly clerical, administrative and certain service roles (International Labour Organization).

The gender asymmetry becomes even more striking in high-income countries. The ILO estimates that jobs at the highest risk of transformation or partial automation by generative AI account for about 9.6 per cent of female employment, compared with 3.5 per cent of male employment (Reuters). This pattern reflects the concentration of women in occupations such as secretarial work, data entry, office support and routine customer interaction, the product of a labour market that has long channelled women into feminised, undervalued and highly standardised tasks. The result is a form of cumulative disadvantage: historical gender segregation in employment becomes the basis on which AI now identifies “automatable” work.


This configuration is not incidental: it is the outcome of decades of occupational segregation and the systematic undervaluation of “women’s work”. Research by the European Institute for Gender Equality and others shows that women remain underrepresented in the managerial, technical and decision-making positions that are less exposed to direct automation, and concentrated instead in feminised clerical and support roles (European Institute for Gender Equality). These roles are often characterised by low bargaining power, limited progression prospects and tightly standardised tasks, exactly the type of work that lends itself to codification and, therefore, to substitution or intensification through AI. In other words, AI is not creating a new vulnerability for women so much as exploiting and magnifying an old one.

Moreover, the ILO’s findings intersect with a broader body of research on gender, skills and technological change. Studies on AI and gender employment point out that women in low- and medium-skilled roles face significant barriers to moving into AI-complementary jobs, including unequal access to training, time constraints linked to unpaid care work, and persistent gender stereotypes about who is “tech-savvy” (OECD). OECD analysis similarly suggests that women are less likely than men to use advanced AI tools even within the same occupation, which may further widen gaps in productivity, pay and promotion over time (OECD). As generative AI reshapes clerical and administrative occupations, by relocating tasks, reducing headcount or downgrading roles, women risk experiencing not only job loss but also deteriorating job quality, wage stagnation and heightened precarity in the positions that remain.

Taken together, the ILO evidence shows that women’s “overexposure” to AI is structurally produced. It arises from the intersection of where women work (in highly exposed occupations), how those jobs are valued (as low-status, routine and easily replaceable), and what opportunities exist to move out of them (often limited and unevenly distributed). In this sense, generative AI does not simply threaten certain categories of employment; it threatens to reinforce and accelerate pre-existing gender hierarchies in the labour market, unless policy interventions, social dialogue and targeted investment in skills explicitly address these asymmetries.
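To make the arithmetic behind these exposure statistics concrete, the sketch below computes the share of each sex's employment that falls in the highest exposure gradient. It is a minimal illustration in Python: the occupations, gradient labels and employment counts are invented, and the real ILO index is built from far more granular task-level data.

```python
# Illustrative computation of gendered exposure shares.
# Toy figures only; they do not reproduce ILO data or methodology.
# Each occupation: (exposure_gradient, women_employed, men_employed), in thousands.
occupations = {
    "clerical support": (4, 900, 300),
    "data entry":       (4, 400, 150),
    "customer service": (3, 600, 400),
    "construction":     (1, 100, 900),
    "management":       (2, 350, 650),
}

def share_in_top_gradient(sex_index: int) -> float:
    """Share of one sex's total employment found in Gradient 4 occupations."""
    total = sum(occ[sex_index] for occ in occupations.values())
    exposed = sum(occ[sex_index] for occ in occupations.values() if occ[0] == 4)
    return exposed / total

women_share = share_in_top_gradient(1)  # index 1 = women's employment
men_share = share_in_top_gradient(2)    # index 2 = men's employment
print(f"women: {women_share:.1%}, men: {men_share:.1%}")
# With these invented figures, women's exposed share is roughly three times
# men's, mirroring the direction (not the magnitude) of the ILO estimates.
```

The point of the exercise is that female “overexposure” is a composition effect: it emerges mechanically once women's employment is concentrated in the occupations assigned to the top gradient.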


2.2. Job quality as a critical dimension.

Verick (2025) argues that concentrating solely on the number of jobs created or displaced by artificial intelligence offers a narrow and ultimately insufficient perspective on its impact. Such a focus obscures the deeper structural transformations taking place across the world of work. According to Verick, the more consequential effects of AI emerge in areas such as workplace power dynamics, workers’ autonomy, the expansion of monitoring and surveillance mechanisms, the intensification of work processes and the erosion of key labour rights.

These pressures are not distributed evenly across the workforce. Verick highlights that women and younger workers are disproportionately affected, as they are more likely to be concentrated in precarious forms of employment or in subordinate occupational positions.

In such contexts, the introduction of AI systems can heighten existing vulnerabilities, widen pay disparities, reduce decision-making power and expose workers to more intrusive and automated forms of oversight.

Consequently, the debate on AI and employment must shift beyond a simple tally of jobs gained or lost. The central question is how technological change restructures working conditions, redistributes power within workplaces, and determines who bears the costs and who reaps the benefits of innovation. Only through this broader analytical lens can we accurately assess the implications of AI for gender equality and for social justice more widely.


3.- ALGORITHMS, BIAS AND THE ILLUSION OF NEUTRALITY


3.1. Algorithms as reproducers of inequality

Despite widespread claims that AI systems operate objectively, evidence consistently shows that algorithms reflect the social and historical contexts in which they are created. Joy Buolamwini’s well-known TED Talk illustrates this vividly: widely used facial recognition models perform significantly worse when identifying women and racialised individuals. This disparity is not a trivial technical limitation but a symptom of deeper structural exclusions within the datasets and design processes that underpin contemporary AI.

When such systems are deployed in the workplace, their capacity to reproduce inequality becomes far more consequential. Algorithms used for hiring may systematically downgrade CVs associated with women’s career trajectories, undervalue non-linear work histories or misinterpret linguistic cues more common among racialised groups. In performance evaluation, automated scoring systems may penalise workers whose communication styles deviate from Western, male-coded norms of assertiveness or emotional expression. In workplace monitoring, surveillance technologies may misclassify facial expressions or behaviour, triggering punitive interventions that disproportionately affect already vulnerable workers.


Crucially, algorithmic bias is rarely visible to those subject to it. Decision-making processes are embedded in opaque models shielded by commercial confidentiality. This opacity makes discrimination less perceptible, less contestable, and more entrenched than under traditional managerial practices. What workers might once have experienced as subjective unfairness becomes justified as “data-driven efficiency”, granting discriminatory outcomes a misleading aura of scientific legitimacy.

In effect, algorithms do not simply reflect inequality; they can institutionalise and accelerate it by embedding historically biased assumptions into everyday workplace decisions. Without deliberate corrective measures, AI becomes a powerful vector for amplifying pre-existing social hierarchies.
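The mechanism is easy to demonstrate. The following minimal sketch, using synthetic data and scikit-learn, trains a hiring model on historical decisions that penalised one group; every coefficient, variable name and number here is an invented assumption, not a description of any real system.

```python
# Minimal sketch: a model trained on biased historical hiring decisions
# learns the bias as if it were a legitimate pattern. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)       # the genuinely job-relevant signal
group = rng.integers(0, 2, n)     # 1 = historically disadvantaged group

# Past decisions rewarded skill but also applied a penalty to the group,
# so the training labels encode discrimination, not just merit.
logit = 1.5 * skill - 1.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill, differing only in group membership:
print(model.predict_proba(np.array([[1.0, 0], [1.0, 1]]))[:, 1])
# The second candidate receives a markedly lower score despite equal skill:
# the historical penalty has been learned and automated.
```

Nothing in the code “intends” to discriminate; the disparity is inherited entirely from the labels, which is precisely why intent-based legal and managerial safeguards struggle with algorithmic systems.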


3.2. The epistemological problem: which humans do models represent?

A central epistemological question in AI development concerns the human baseline against which these systems are calibrated. The Harvard study Which Humans? (2025) demonstrates that large language models (LLMs) most closely emulate individuals from WEIRD societies (Western, Educated, Industrialised, Rich and Democratic). This finding has profound implications for global labour markets. LLMs encode behavioural expectations, communication styles and normative assumptions that reflect WEIRD cultural patterns, yet these patterns are mistakenly treated as universal. This creates three interconnected risks:


A.- The universalisation of Western workplace norms.

AI systems used in recruitment, training or performance assessment may implicitly prioritise direct communication, self-promotion or linear career narratives, behaviours culturally aligned with Western professional contexts. When these norms are automated, organisations risk enforcing a homogenised, culturally narrow definition of “competence”.


B.- Disproportionate penalties for women and non-Western workers.

Women in many cultures may employ more collaborative or context-sensitive communication styles, while non-Western communities may emphasise indirectness, collectivism or relational forms of interaction. When AI evaluates workers according to WEIRD linguistic and behavioural patterns, divergence from these norms can be misinterpreted as lower confidence, weaker leadership potential or reduced productivity. The result is a technologically mediated form of cultural discrimination.


C.- The erosion of diversity through algorithmic standardisation.

Rather than accommodating varied cultural approaches to work, AI tends to flatten them. LLMs trained on data from dominant groups inadvertently enforce a narrow worldview that is neither neutral nor representative, privileging specific regions, genders and socioeconomic strata. Over time, this risks reinforcing cultural hierarchies and undermining pluralism within the workforce.

In short, the epistemic foundations of AI systems are shaped by a limited subset of humanity. By generalising from WEIRD populations to global labour markets, AI produces assessments that may be systematically misaligned with the lived experiences, values and communication practices of many workers. This gap between representation and reality is not merely technical; it is political, with real consequences for equality, recognition and inclusion in the workplace.


3.3. The systemic risks of deploying biased AI in the workplace.

The integration of algorithmic systems into workplace governance, ranging from recruitment and promotion processes to performance management and disciplinary decisions, creates structural risks: biases embedded in AI do not merely reflect existing inequalities but can scale them, embedding discrimination into routine organisational procedures with unprecedented reach and efficiency.

First, algorithmic decision-making transforms what were once discretionary and contestable managerial judgements into outputs that appear objective, data-driven and therefore legitimate. This veneer of neutrality discourages scrutiny and reduces workers’ capacity to challenge discriminatory outcomes. When an algorithm flags an individual as “low performing” or “unsuitable”, the opacity of the underlying models means that affected workers often lack the evidence needed to dispute such assessments.

Second, bias in workplace AI systems is rarely the product of intentional discrimination; rather, it arises from training data and behavioural assumptions that reproduce historical inequalities. If datasets capture labour markets in which women, younger workers, or racialised groups have historically earned lower wages or occupied subordinate roles, models may come to treat these patterns as normative. As a result, AI-driven systems may undervalue communication styles, competencies or leadership behaviours that deviate from these entrenched norms. This creates a feedback loop in which past discrimination becomes a predictor of future potential.

Third, the intensification of workplace surveillance through algorithmic tools compounds these risks. Monitoring technologies, ranging from keystroke tracking to emotion-recognition systems, often rely on models trained on narrow demographic datasets. Misclassification does not simply lead to errors; it can trigger disciplinary action, reduce autonomy or justify punitive productivity targets. These systems disproportionately affect workers already positioned at the margins: women in lower-paid service jobs, migrants, gig workers and those whose work is fragmented or undervalued.


Finally, the adoption of biased AI introduces governance challenges for labour institutions. Traditional mechanisms of oversight (collective bargaining, labour inspection, individual grievances) are ill-equipped to audit opaque systems or secure access to proprietary algorithms. As AI becomes embedded in everyday organisational decisions, inequality becomes increasingly institutionalised in ways that are technically complex and legally ambiguous.

In sum, biased AI systems in the workplace do not simply mirror social inequality; they possess the capacity to codify, automate and legitimise it.

Without robust regulatory frameworks, transparency obligations, and meaningful worker participation in AI governance, the deployment of algorithmic tools risks entrenching a technologically enhanced form of structural discrimination across labour markets.




4.- STEM GENDER GAPS: THE STRUCTURAL ROOTS OF TECHNOLOGICAL INEQUALITY

UNESCO’s 2024 report Changing the Equation: Securing STEM Futures for Women shows that gender gaps in STEM are persistent and systemic. Women make up only around one third of the global scientific community, and in G20 countries they hold just 22% of STEM jobs (unesco.org). Despite major progress in overall educational attainment, the share of women among STEM graduates has stagnated at roughly 35% worldwide for at least a decade (unesco.org). OECD data point to a similar picture: in 2021 women constituted 32.5% of STEM graduates across OECD countries, with a strong concentration in natural sciences but only 27.8% in engineering and 22.7% in information technology (OECD).

These patterns are not explained by innate differences in ability. UNESCO, OECD and a wide body of academic research converge on the conclusion that gaps are produced by structural factors: gendered socialisation, unequal access to digital resources, biased learning environments and discriminatory workplace cultures (unesdoc.unesco.org). In other words, the issue is not women’s capabilities or interest, but an ecosystem that has not been designed for their full participation.


4.1. Gendered socialisation and the early production of inequality.

Large-scale international assessments show that girls perform as well as or better than boys in science in many countries, yet are less likely to see themselves in future STEM careers. OECD analysis of PISA data finds that in 67 countries, girls often match or outperform boys in science, but still report lower confidence in their own abilities and are less likely to expect to work in science or engineering occupations as adults (OECD, 2019).

Developmental psychology research has documented that by the age of six, many girls have already absorbed the stereotype that high-level intellectual ability is more typical of boys, which affects their interest in “brilliance”-coded fields such as physics or computing (PMC). UNESCO’s earlier Cracking the Code work and subsequent syntheses show that gender norms conveyed by families, teachers, peers and media systematically link girls with care-oriented and relational roles, while boys are encouraged towards technical experimentation and risk-taking (unesdoc.unesco.org, 2025). As a result, the STEM “pipeline” narrows well before higher education. The underrepresentation of women in STEM is therefore not the outcome of lower performance, but of social expectations and self-selection shaped by early gendered socialisation.


4.2. The absence of female role models.

UNESCO estimates that women represent roughly one third of researchers worldwide, with shares below 10% in some countries (unesco.org). This underrepresentation extends into senior positions: women are significantly less likely to occupy professorial, principal investigator or R&D leadership roles, especially in engineering, computer science and physics (Physics Today, 2025).

Empirical studies link this lack of representation to educational and career choices. Experimental and longitudinal research finds that exposure to female scientists and engineers increases girls’ likelihood of aspiring to STEM careers, improves their sense of belonging in STEM environments and can mitigate stereotype threat (PMC, 2018). Where such role models are absent, young women find it harder to see themselves in these fields, feeding the well-documented “leaky pipeline”, whereby women leave STEM at higher rates at each transition point: from secondary to tertiary education, from university to early career, and from mid-career to leadership (Enlighten Publications, 2017).


4.3. Bias embedded in educational content and practices. 

Analyses of curricula and textbooks across regions reveal that women scientists, engineers and innovators are systematically underrepresented in teaching materials, and when they do appear, it is often in marginal or stereotyped roles (unesdoc.unesco.org, 2017). This misrepresentation reinforces the perception that scientific discovery is predominantly male, and obscures the contributions of women and of researchers from the Global South. Research on classroom dynamics and assessment practices suggests that STEM teaching frequently rewards competitive, individualistic learning styles and teacher-student interactions that align more closely with male-coded participation norms. Studies across Europe, Latin America and Asia show that teachers are more likely to attribute boys’ success to ability and girls’ success to effort, and unconsciously steer girls towards “applied” or care-related fields (ResearchGate, 2015).

UNESCO and OECD both conclude that these factors, content that erases women’s contributions and classroom practices shaped by gender stereotypes, create educational environments in which girls and women may feel peripheral, even when their performance is strong (unesdoc.unesco.org).


4.4. Discrimination and the persistence of masculinised workplace cultures. 

The structural barriers do not end at graduation. In the labour market, women remain disadvantaged in recruitment, pay and progression in STEM and tech sectors. UNESCO’s 2024 STEM report finds that in G20 countries women account for about 35% of STEM graduates but only 22% of the STEM workforce, indicating significant attrition at the point of labour market entry and in early career stages (Physics Today, 2025). Studies of organisational cultures in technology-intensive sectors document patterns of gender pay gaps, glass ceilings and exclusionary norms, including expectations of long hours, informal networking in male-dominated spaces and tolerance of harassment or microaggressions (Arno, 2025). These dynamics are more acute for women who face intersecting forms of discrimination, such as racialised women, migrant women and women from lower-income backgrounds.

International telecommunications and digital inclusion reports add a further layer: women are less likely than men to use digital technologies and advanced tools, particularly in low-income settings. The ITU, for example, reports a persistent global gender gap in internet use, with the divide considerably wider in least developed countries (ITU, 2023). This limits women’s opportunities to acquire the very digital and AI-related skills that are increasingly required in STEM workplaces, reinforcing the cycle of underrepresentation and slower career progression.


4.5. Consequences for AI development and technological governance.

These STEM gender gaps translate directly into skewed participation in AI. According to estimates from the World Economic Forum and UN agencies, women make up only about 20-22% of AI professionals globally, and a similarly small share of AI researchers (ITU, 2024). Women are also underrepresented in leadership positions in AI start-ups, corporate labs and national AI governance bodies (Physics Today, 2025).

This underrepresentation has three empirically grounded implications.


1.- Narrow epistemic and data frameworks.

AI models encode the assumptions and priorities of those who design and deploy them. With men, often from relatively privileged and homogeneous backgrounds, constituting the majority of AI developers, datasets and problem framings are more likely to reflect their experiences and blind spots. Studies of algorithmic bias in areas such as facial recognition, hiring tools and content moderation have shown that systems trained on non-diverse data can systematically misclassify or disadvantage women and other marginalised groups (unesco.org, 2022).


2.- Limited diversity in problem-solving and risk assessment.

Research on innovation and team composition consistently finds that diverse groups produce more robust and socially responsive technologies (ResearchGate, 2025). In AI, the lack of gender diversity means that critical questions, about care responsibilities, safety, harassment, reproductive rights, or informal and precarious work, may be under-prioritised in model design, evaluation metrics and policy discussions.

3.- Reproduction of dominant masculine organisational logics. 

Evidence from organisational studies highlights how cultures that valorise speed, disruption and hyper-competitiveness tend to marginalise concerns around equity, wellbeing and long-term social impact (Arno, 2025). When AI is developed primarily in such environments, systems are more likely to be optimised around efficiency, monitoring and control, rather than autonomy, care or participatory governance.


Taken together, the data indicate that STEM gender gaps are not peripheral to AI; they are structurally constitutive of how AI is imagined, built and governed. They shape who sits at the table, which problems are addressed, which trade-offs are considered acceptable, and whose interests are embedded in technological systems. Unless the structural barriers that keep women out of STEM, and especially out of AI-related roles, are addressed, technological inequality will continue to be reproduced in the very architecture of AI.


5.- INTERSECTIONS OF THE DIGITAL DIVIDE AND GENDER INEQUALITY 

ILO research on generative AI and jobs sits against a broader structural backdrop: the global digital divide. In 2024, the International Telecommunication Union (ITU) estimated that 93% of people in high-income countries use the internet, compared with only 27% in low-income countries (ITU, 2024). This stark asymmetry in connectivity mirrors wider inequalities in infrastructure, affordability and digital skills, and shapes who can benefit from AI-enabled transformations of work.

Building on this, Verick (2025) highlights that digital exclusion is not simply a question of “access to technology” but a key channel through which technological change reproduces and deepens existing labour market inequalities. When this structural divide is combined with gendered patterns of access and use, the result is a highly unequal landscape of exposure to automation and opportunity for digital upgrading.

5.1. A geographically uneven digital infrastructure.

The ITU’s Facts and Figures 2024 show that while global internet use continues to grow, connectivity remains closely correlated with national income levels. In high-income economies, almost universal internet use coexists with widespread broadband and 5G coverage; in low-income countries, just over one quarter of the population is online and only a small minority has access to high-speed networks (ITU, 2024).

These structural gaps have direct implications for labour markets and AI:

  • Workers in high-income contexts are far more likely to use computers and internet-based tools as part of their everyday work.

  • In low-income countries, large shares of the workforce remain in offline or minimally digitised segments of the economy, limiting both the immediate spread of AI-driven automation and the potential for AI-driven productivity gains.

ILO analysis of generative AI emphasises that the highest exposure to AI-amenable tasks occurs where digitalisation is already advanced, with clerical and knowledge-intensive occupations most affected (International Labour Organization, 2023). This means that countries that are already better connected, overwhelmingly in the Global North, are those where AI-based augmentation and automation will initially hit hardest, but also where the capacity to reap productivity benefits is greatest.


5.2. A gendered digital divide in developing countries. 

Within this unequal global field, gender gaps in digital access are persistent and well-documented. A large body of research on the “gender digital divide” in developing countries shows that women are less likely than men to own devices, to have internet access and to use digital tools even when these are available (Antonio & Tuffley, 2014).

Recent data from the GSMA’s Mobile Gender Gap work indicate that across low- and middle-income countries, women are around 14-15% less likely than men to use mobile internet, resulting in hundreds of millions fewer women online than men (GSMA, 2024). UN Women similarly reports that women and girls face multiple barriers to digital access, including cost, safety concerns, social norms restricting technology use, and lower levels of digital literacy, which significantly constrain their participation in the digital economy (UN Women, 2023).

Crucially, these are not simple “access” gaps: they translate into differences in how technologies are used. Even when households have connectivity, women are more likely to have intermittent or supervised access, to use shared devices, or to prioritise others’ use over their own. This limits the time and autonomy required to develop advanced digital skills and to experiment with new tools such as AI systems.


5.3. Differential exposure to AI: the “double bind”.

ILO research on generative AI reveals a paradox at the intersection of digital and gender inequalities. In its global analysis of occupational exposure, Gmyrek, Berg and Bescond (2023) and subsequent work by Gmyrek et al. (2025) show that women, particularly in developing countries, often have lower rates of computer use at work than men, both for tasks likely to be automated and for those that could be augmented by AI (International Labour Organization, 2023).

This creates a double bind:


1.- Lower digital exposure reduces immediate automation risk. 

Where women’s jobs are less digitised, for example in informal services, agriculture or low-tech manufacturing, generative AI has fewer direct channels through which to automate tasks in the short term. In statistical terms, their measured “exposure” to generative AI is lower.

2.- But lower access drastically reduces long-term opportunity.

Because these same workers have limited access to computers, connectivity and digital training, they are also less able to benefit from AI-enabled augmentation, online learning, remote work, or digital entrepreneurship. Over time, this can lock women into low-productivity, low-wage segments of the economy, even as other workers, often men in more connected sectors, see their productivity and earnings enhanced by AI.

From a critical standpoint, what appears as “protection” from automation is, in fact, exclusion from the frontier of technological change. The risk is that women in poorly connected contexts will not only miss out on new forms of work, but will be left further behind as productivity differentials between digital and non-digital segments of the economy widen.


5.4. The digital divide as a mechanism for reproducing inequality.

The digital divide is therefore not a neutral backdrop: it is a key mechanism through which technological transitions reproduce and deepen existing inequalities. As Antonio and Tuffley (2014) argue, the digital divide is multi-dimensional, encompassing material access, skills, usage and motivation, and is tightly interwoven with broader structures of class, gender and geography (Antonio & Tuffley, 2014).


When we combine:


  • Global connectivity gaps (low-income countries at 27% internet use versus 93% in high-income economies) (ITU, 2024),

  • Gender gaps in device and mobile internet use in low- and middle-income countries (GSMA, 2024),

  • Occupational gender segregation and unequal computer use at work documented in ILO AI-exposure studies (International Labour Organization, 2023),

a consistent pattern emerges:


  • Women in the Global South are less visible in the data on which AI systems are trained. 

  • They are less likely to work in digitised occupations where AI is being rapidly deployed.

  • They have fewer opportunities to acquire AI-complementary skills or to move into higher-productivity, AI-enabled roles.


In contrast, workers in better-connected, higher-income settings, disproportionately men, are overrepresented in the segments of the global labour market where AI experimentation, investment and productivity gains are concentrated.

From this perspective, the digital divide is not merely a technological lag: it is a structural filter that determines whose work is augmented, whose work is automated and whose work is simply overlooked. Without deliberate policies to expand affordable connectivity, address gendered barriers to digital access and invest in inclusive digital skills, AI risks consolidating a stratified world of work in which gender and geography jointly determine exposure to risk and access to opportunity.


6.- OPPORTUNITIES: REAL BUT CONDITIONAL

AI can expand women’s economic agency at work, but only where the enabling conditions counter the structural asymmetries mapped in sections 2-5 (occupational segregation, job quality risks, biased systems, STEM exclusion, and the gendered digital divide). In other words, AI can become an equaliser only if the transition is governed; otherwise it tends to reward those who already have connectivity, time, capital and institutional power.


6.1. AI as “capability amplification”, not just automation.

A core opportunity is that generative AI can augment (not merely replace) human work by lowering the time and skill thresholds for high-value tasks: drafting, summarising, translating, data cleaning, customer communication, basic coding and content production.

Experimental evidence synthesised by the OECD shows consistent short-term productivity gains in task clusters that resemble everyday knowledge work (writing, editing, translation, routine analysis), which matters because many women are concentrated in roles where augmentation could raise bargaining power, if job design and evaluation systems recognise the added value rather than simply intensifying workloads (OECD, 2025).

Equity implication: augmentation becomes transformative when organisations use AI to recompose jobs upward (more judgement, coordination, creativity) rather than using it to compress headcount, freeze wages, or increase surveillance—linking directly to the job-quality warnings in section 2.2. The ILO explicitly frames this as a governance choice: the same tools that boost efficiency can also undermine autonomy if deployed through tight monitoring and unilateral management control. (International Labour Organization, 2023)


6.2. Women-led entrepreneurship, market access and SME scaling.

For women entrepreneurs, especially in micro and small enterprises, AI can reduce barriers that have historically been costly in time, money and specialised expertise:

  • Lower-cost business functions: marketing copy, product descriptions, basic branding, bookkeeping templates, customer support scripts and multilingual communications.

  • Faster formalisation and compliance: drafting invoices, contracts (with legal review), tenders, policy documents, and grant applications.

  • Expanded market reach: better online storefronts, localisation for export and improved responsiveness to customers across time zones.

OECD survey evidence on SMEs indicates that generative AI is already being used by a substantial share of SMEs and is reported to improve performance and help compensate for skills gaps, suggesting genuine scope for diffusion beyond large firms if the right support ecosystems exist (OECD, 2024).

Crucially, women entrepreneurs in low- and middle-income countries face a stack of constraints (device access, safety, payments, capital, time). Large cross-country research focused on women entrepreneurs shows that those with reliable internet access are more likely to adopt AI tools and can translate digital capability into business growth, but the benefits are uneven where connectivity, safety and finance are fragile (Cherie Blair Foundation for Women, 2025).


6.3. Pathways into new roles and AI-adjacent labour demand.

Even with STEM gaps (section 4), AI expands demand for roles that do not require advanced engineering credentials but do require domain knowledge and critical judgement, for example:

  • AI operations and quality roles: content moderation, model testing, prompt design for specific workflows, documentation and evaluation support.

  • Human-centred service redesign: integrating AI into health, education, public services, HR and legal aid with a strong user focus.

  • Governance, compliance and risk: auditing, impact assessment, data protection, workplace consultation and bias testing, areas increasingly emphasised by multilateral organisations.

These roles are realistic bridges if women have recognised credentials, paid time for learning and protection from discriminatory hiring filters (section 3). UN Women’s recent work on gender-responsive AI stresses that inclusion requires not only access, but institutional practices that make AI safe, accountable and beneficial, especially in private-sector deployment (UN Women, 2025).


6.4. The “conditions” that determine whether opportunity becomes justice.

To prevent section 6 from becoming a rhetorical add-on to structural inequality, opportunities must be tied to enabling conditions that directly answer the earlier risks:


  1. Skills that are accessible in real lives (time, cost, care).

Reskilling cannot assume “spare time” in contexts where unpaid care burdens are gendered. The ILO’s policy framing on generative AI stresses training and transition management, but this only becomes gender-just when training is paid and modular, and paired with workers’ voice and protections against job degradation (International Labour Organization, 2025).


  2. Access to capital and digital payments for women-led firms.

AI tools do little if women entrepreneurs cannot finance devices, connectivity or growth. The World Bank’s Gender Strategy (2024-2030) and related research emphasise intentional investment and persistent financing gaps affecting women and women-led businesses; this is a prerequisite layer for any “AI entrepreneurship” narrative (World Bank, 2025).


  3. Infrastructure and inclusion by design.

Opportunities scale only where connectivity is affordable and reliable and where AI tools work in local languages and contexts; otherwise the “WEIRD default” problem (section 3.2) quietly reappears as a barrier to effective use and fair evaluation.

  4. Governance that protects workers while enabling innovation.

Opportunity requires enforceable rules: transparency, explainability where decisions affect employment, rights to contest automated outcomes, and collective bargaining capacity to negotiate deployment (linking to section 3.3’s institutional challenges). The ILO’s approach foregrounds transition governance and social dialogue precisely because outcomes are not technologically predetermined (International Labour Organization).


  5. Care policies as economic infrastructure.

AI is often framed as the “future of work”, but women’s work is also structured by care systems. UN documentation on UN Women’s programming underscores that transforming care systems and financing gender equality are treated as strategic levers; without them, women’s ability to take up AI-enabled opportunities is structurally constrained (docs.un.org, 2025).



7.- THREAT OR OPPORTUNITY? AI AS A MIRROR AND AMPLIFIER OF INEQUALITY.

Framing AI as a simple “threat or opportunity” obscures what the earlier sections already imply: AI is best understood as a multiplier of existing labour-market structures. Where work is already gender-segregated, undervalued, monitored or informally organised, AI tends to scale those conditions, faster, farther and with an appearance of neutrality (Gmyrek et al., 2025; ILO, 2024, 2025).


7.1. Reinforcing wage gaps through task revaluation and unequal adoption. 

AI can widen gender pay inequality even without mass unemployment. The mechanism is subtle: task revaluation. When core tasks in feminised clerical and support roles are automated or “AI-assisted”, organisations may reclassify jobs, compress pay scales, or reduce progression ladders, while productivity gains accrue to roles with more bargaining power or complementary skills (Gmyrek et al., 2025).

At the same time, emerging causal evidence suggests that gender gaps in AI adoption and use can translate into unequal productivity gains inside the same occupation, creating a new channel for pay and promotion divergence if employers reward output but do not equalise access to tools and training (Carvajal et al., 2024).


The empirical picture on wage inequality at the macro level is mixed. OECD analysis of 2014-2018 data finds no clear evidence (so far) that AI changed inequality between high- and low-wage occupations, though it reports some indications of lower within-occupation inequality in more AI-exposed jobs (OECD, 2024a, 2024b). This does not contradict the gender argument; gender inequality can worsen within and across firms through job design, evaluation and adoption gaps even when aggregate wage dispersion does not shift dramatically in early periods.


7.2. Intensifying workplace surveillance and shrinking autonomy.  

A central amplifier effect is the spread of algorithmic management: systems that allocate tasks, track behaviour, evaluate performance, and discipline workers at scale. The ILO explicitly defines algorithmic management as the data-driven organisation and monitoring of work, and warns that it can transform power relations by shifting discretion from workers (and sometimes supervisors) to opaque systems (International Labour Organization, 2024).

New evidence from the OECD employer survey similarly notes rapid diffusion of algorithmic management and highlights worker risks (privacy, fairness, work intensity), even when employers frame deployment as efficiency and productivity gains (OECD, 2025). European Parliament research on AI and algorithmic management in workplaces also anticipates rising exposure and stresses that monitoring and automated decision systems extend well beyond platform work into the “regular” sector (European Parliament, 2025).

Gender relevance: surveillance pressures often hit hardest in the same feminised, lower-bargaining-power roles identified in section 2, where targets can be intensified and contestation is weakest.


7.3. Replicating historical biases while making them harder to contest.

AI bias is not only about “bad data”; it is also about institutionalising discrimination through opacity. When automated tools shape hiring, evaluation or discipline, biased outputs can appear objective and therefore become harder to challenge.

A well-documented illustration is facial recognition, where NIST’s demographic testing shows that many algorithms exhibit demographic differentials, with performance varying by sex and race depending on the system and data conditions (Grother et al., 2019).

Even if an employer’s intention is “security” or “efficiency”, differential error rates can produce unequal burdens (more false matches, more flagging, more scrutiny), especially when combined with monitoring regimes (ILO, 2025; European Parliament, 2025).


7.4. Consolidating technological power and setting the terms of “opportunity”.

AI opportunity is also shaped by market structure. If compute, frontier models, proprietary data and cloud infrastructure are concentrated, then the ability to define standards, extract rents, and dictate workplace deployment patterns is concentrated too. Stanford HAI’s AI Index documents the scale and acceleration of private investment and the industrial dynamics of frontier AI, providing a data-driven basis for the claim that AI is increasingly shaped by a small number of large actors with outsized resources (Stanford HAI, 2025).

This matters for gender equality because it shifts governance from democratic deliberation to corporate strategy, reinforcing the “power to define opportunities” problem raised in the introduction.


7.5. Exacerbating precarity in feminised sectors via “efficiency logics”.

Finally, AI amplifies precarity where work is already precarious: feminised services, outsourced back-office functions and fragmented employment relationships. In these contexts, AI is often deployed less as empowerment and more as cost control, standardising output, accelerating pace, and weakening worker voice (ILO, 2025; OECD, 2025).

Taken together, these mechanisms support the article’s core thesis: AI is neither neutral nor inevitable. It is a socio-technical system embedded in labour markets, governance regimes, and corporate power. The same technology can either (a) widen inequality through task downgrading, surveillance, biased automation and concentration, or (b) be redirected towards augmentation, job upgrading, transparency and social dialogue (ILO, 2024, 2025).


8.- RECOMMENDATIONS: TOWARDS FEMINIST AI GOVERNANCE.

If AI is currently functioning as a mirror and amplifier of inequality (section 7), then governance becomes the decisive variable. Outcomes are not technologically predetermined; they depend on who designs, regulates, deploys, and contests AI systems. A feminist approach to AI governance does not imply a narrow focus on women alone, but a structural commitment to power redistribution, intersectionality and social justice in the organisation of technological change.


Building on the evidence presented in sections 2-7, this section outlines a coherent policy framework for feminist AI governance in the world of work, aligned with ILO standards, UN gender equality commitments and emerging international debates in responsible AI.


8.1. Mandatory algorithmic auditing with intersectional analysis 

Algorithmic systems used in employment (recruitment, task allocation, performance evaluation, promotion and dismissal) should be subject to mandatory, independent audits before and during deployment. These audits must go beyond generic “bias testing” and adopt an intersectional framework, examining differential impacts by gender, race, age, disability, migration status, and contract type.

Research and regulatory experience show that bias often emerges at the intersection of characteristics, and that aggregate accuracy metrics can mask systematic harm to specific groups, as the sketch after this list illustrates. Without legally enforceable auditing obligations, discrimination remains opaque and effectively unchallengeable. Feminist AI governance therefore requires:


  • ex-ante impact assessments,

  • access to meaningful explanations for affected workers,

  • and mechanisms that counter the opacity and institutionalisation of bias analysed in section 3.
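What an intersectional audit adds over single-axis testing can be shown in a few lines. The sketch below uses an invented audit log and a heuristic 0.8 disparate-impact threshold; the group labels, records and threshold are illustrative assumptions, not a regulatory standard or a real tool’s output.

```python
# Minimal sketch of intersectional audit metrics on a toy screening log.
# All records and thresholds are invented for illustration.
from itertools import product

# (gender, origin, selected) -- hypothetical outcomes of an automated screen.
records = [
    ("woman", "majority", 1), ("woman", "majority", 1), ("woman", "majority", 0),
    ("woman", "minority", 0), ("woman", "minority", 0), ("woman", "minority", 1),
    ("man",   "majority", 1), ("man",   "majority", 1), ("man",   "majority", 0),
    ("man",   "minority", 1), ("man",   "minority", 0), ("man",   "minority", 1),
]

def selection_rate(rows):
    return sum(sel for *_, sel in rows) / len(rows)

# Single-axis audit: the gender gap looks moderate in isolation.
for g in ("woman", "man"):
    print(g, round(selection_rate([r for r in records if r[0] == g]), 2))

# Intersectional audit: the disadvantage concentrates on one subgroup.
rates = {(g, o): selection_rate([r for r in records if r[:2] == (g, o)])
         for g, o in product(("woman", "man"), ("majority", "minority"))}
best = max(rates.values())
for group, rate in rates.items():
    # Disparate-impact ratio vs. the most-selected subgroup;
    # ratios below ~0.8 are a common heuristic red flag.
    flag = "FLAG" if rate / best < 0.8 else "ok"
    print(group, round(rate, 2), "ratio:", round(rate / best, 2), flag)
```

Here minority women are selected at half the rate of the best-off subgroups, a disparity that a gender-only or origin-only audit would understate; this is the practical content of the intersectionality requirement.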


8.2. Public investment in digital skills and care policies enabling participation

Skills policy alone is insufficient. As demonstrated in sections 4 and 5, women’s exclusion from AI-related opportunities is not only a question of training, but of time, resources and social infrastructure.

Governments should pursue integrated investment strategies that combine:

  • publicly funded, modular digital and AI-related training,

  • paid training time and income support during transitions,

  • and expanded care infrastructure (childcare, eldercare, disability care).

From a feminist economic perspective, care systems are not “social spending add-ons” but productive infrastructure. Without redistributing unpaid care work, reskilling policies risk benefiting primarily those—still disproportionately men—who already have discretionary time and institutional support.

8.3. Support for women-led technological ecosystems, especially in the Global South 

To counter the concentration of AI power documented in section 7, feminist governance must move beyond inclusion within existing corporate structures and actively support alternative innovation ecosystems.

This includes:

  • targeted public finance and blended finance instruments for women-led tech enterprises,

  • support for local, context-sensitive AI applications (health, education, agriculture, public services),

  • investment in open, multilingual and non-WEIRD datasets,

  • and South-South cooperation on digital capacity-building.

 In the Global South, where women face intersecting constraints of connectivity, finance and social norms, such ecosystem-level support is essential to prevent AI from reinforcing global and gendered hierarchies of technological dependency. 



8.4. Strong regulation of AI in hiring, monitoring and worker management

Given the risks identified in sections 2 and 3, AI systems used in employment decision-making and workplace surveillance require particularly stringent regulation.

Feminist AI governance aligns with the precautionary principle and labour rights traditions by: 

  • prohibiting or strictly limiting high-risk practices (e.g. emotion recognition, biometric surveillance at work),

  • requiring human oversight for all consequential employment decisions,

  • and ensuring compliance with existing non-discrimination, privacy and labour law.


This is not a call to “slow innovation”, but to shape innovation so that efficiency gains do not come at the cost of dignity, autonomy, and equality.


8.5. Inclusion of feminist and labour organisations in AI governance bodies.

AI governance is too often dominated by technical experts, corporations, and executive branches. Feminist governance insists on democratising decision-making.

This requires the institutionalised participation of:

  • Trade unions and workers’ representatives. 

  • Feminist organisations and gender equality bodies. 

  • Civil society actors representing marginalised groups. 


Such participation should extend beyond consultation to real influence over standards-setting, regulatory design and evaluation. This responds directly to the governance gaps identified in section 3.3 and recentres AI as a matter of collective choice, not technocratic inevitability.


8.6. Just transition policies for feminised occupations at high risk of automation

As shown in section 2, women are disproportionately concentrated in occupations highly exposed to generative AI. Feminist AI governance therefore requires just transition policies that include:

  • early identification of at-risk occupations,

  • social dialogue on task reorganisation,

  • job redesign oriented toward augmentation rather than substitution,

  • wage protection, redeployment pathways, and income security.

Without such measures, AI-driven restructuring risks reproducing historical patterns in which women absorb the costs of economic transformation while benefits accrue elsewhere.

8.7. Redistribution of care as a prerequisite for digital inclusion.

Finally, feminist AI governance must confront a foundational issue often left implicit: digital inclusion is inseparable from the political economy of care.

As long as women disproportionately shoulder unpaid and underpaid care work, their ability to engage with AI, whether through training, entrepreneurship, or career progression, will remain structurally constrained. Redistribution of care responsibilities across households, markets, and the state is therefore not peripheral but constitutive of technological justice. 

Taken together, these recommendations operationalise the article’s core claim: AI will not deliver gender equality by default. Without feminist governance, it will tend to scale existing inequalities; with it, AI can be redirected toward job quality, inclusion and social justice.

The question, therefore, is not whether AI is a threat or an opportunity, but who governs it, for whom, and under what social conditions.


9.- CONCLUSIONS.

Artificial intelligence holds genuine potential to advance gender equality in the world of work, but the analysis developed throughout this article demonstrates that such outcomes are neither automatic nor technologically guaranteed. AI is not an external force acting upon labour markets; it is a socio-technical system embedded in pre-existing structures of power, inequality, and institutional design. As such, its effects largely reflect—and often amplify—the conditions into which it is introduced.

Drawing on evidence from the ILO, UNESCO, OECD and critical scholarship, this article has shown that women are disproportionately exposed to the risks associated with AI-driven transformation: higher exposure to automation in feminised occupations, deteriorating job quality through algorithmic management, systematic bias embedded in opaque decision-making systems, exclusion from STEM and AI development, and constrained access to digital infrastructure, particularly in the Global South. These dynamics are not isolated failures, but interconnected mechanisms through which technological change reproduces structural inequality.

At the same time, AI can expand women’s economic agency through productivity gains, new forms of entrepreneurship, and the emergence of AI-adjacent roles. However, these opportunities materialise only under specific social and institutional conditions. Without deliberate intervention, they tend to accrue to those who already possess time, skills, capital, and voice, reinforcing rather than reducing gender gaps.

The central conclusion is therefore political rather than technical. Advancing gender equality in the age of AI requires redistributing power, not merely expanding access. This entails adopting intersectional policies that recognise differentiated impacts, embedding ethical and rights-based governance into AI systems, and ensuring women’s meaningful participation at every stage of the AI lifecycle, from design and data collection to deployment, regulation, and evaluation. It also requires addressing foundational constraints, notably the unequal distribution of unpaid care work, without which digital inclusion remains structurally limited.

Absent such measures, AI is likely to accelerate existing inequalities, making them faster, less visible, and harder to contest. With feminist and democratic governance, however, AI can be redirected toward job quality, social justice, and inclusive growth. The future of work will therefore not be determined by technology itself, but by the collective political choices made today about how that technology is governed, for whom it is designed, and whose interests it ultimately serves.

By curiosity 

Munllonch 


REFERENCES

Antonio, A., & Tuffley, D. (2014). The gender digital divide in developing countries. Future Internet, 6(4), 673–687. https://doi.org/10.3390/fi6040673

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability, and Transparency (FAT*), 77–91. https://doi.org/10.48550/arXiv.1801.00006

Carvajal, D., et al. (2024). Will artificial intelligence get in the way of achieving gender equality? (Working paper). Paris School of Economics. https://www.parisschoolofeconomics.eu

Cherie Blair Foundation for Women. (2025). Empowered or undermined? Women entrepreneurs and the digital economy. https://www.cherieblairfoundation.org

European Parliament. (2024). Addressing AI risks in the workplace (Briefing). European Parliamentary Research Service.

European Parliament. (2025). Digitalisation, artificial intelligence and algorithmic management in European workplaces (Study). European Parliamentary Research Service.

Gmyrek, P., Berg, J., & Bescond, D. (2023). Generative AI and jobs: A global analysis of potential effects on job quantity and quality (ILO Working Paper No. 96). International Labour Organization.

Gmyrek, P., Berg, J., Kamiński, K., Konopczyński, F., Ładna, A., Nafradi, B., Rosłaniec, K., & Troszyński, M. (2025). Generative AI and jobs: A refined global index of occupational exposure (ILO Working Paper No. 140). International Labour Organization.

Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology. https://doi.org/10.6028/NIST.IR.8280

GSMA. (2024). The mobile gender gap report 2024. GSMA Association.

International Labour Organization. (2024). Algorithmic management practices in regular workplaces are already a reality (News release).

International Labour Organization. (2024). Generative AI and jobs: Policies to manage the transition (Policy brief).

International Labour Organization. (2025). Work transformed: Promise and peril of AI (ILO brief).

International Labour Organization. (2025). WSSD research brief: AI, skills and social dialogue.

International Labour Organization. (2025, May 20). One in four jobs at risk of being transformed by GenAI, new ILO–NASK global index shows (News release).

International Telecommunication Union. (2024). Measuring digital development: Facts and figures 2024. ITU.

International Telecommunication Union. (2025). Measuring digital development: Facts and figures 2025. ITU.

OECD. (2024). Artificial intelligence and wage inequality (OECD AI Papers).

OECD. (2024). What impact has AI had on wage inequality? (Policy paper).

OECD. (2025, June). The effects of generative AI on productivity, innovation and entrepreneurship. OECD.

OECD. (2025, July 8). Unlocking productivity with generative AI: Evidence from experimental studies. OECD.

OECD. (2025). Algorithmic management in the workplace: New evidence from an OECD employer survey. OECD.

OECD. (2025, November 4). Generative AI and the SME workforce. OECD.

OECD. (2025, November 5). How is generative AI impacting SMEs’ skill and labour shortages? OECD.

Stanford Institute for Human-Centered Artificial Intelligence. (2025). Artificial intelligence index report 2025. Stanford University. https://hai.stanford.edu

UN Women. (2023). The gender digital divide must be bridged to ensure we leave no one behind. UN Women.

UN Women. (2024). Placing gender equality at the heart of the Global Digital Compact. UN Women.

UN Women. (2025, January). Advancing gender equality through partnerships for gender-responsive artificial intelligence. UN Women.

UN Women. (2025). Equal is greater: Advancing gender equality through private sector partnerships. UN Women.

UN Women. (2025). Unfinished business: Private sector and gender equality—Transforming corporate commitments into equality for all women and girls. UN Women.

UNESCO. (2024). Changing the equation: Securing STEM futures for women. UNESCO Publishing. https://unesdoc.unesco.org

United Nations. (2025, April 11). UNW/2025/2 (Executive Board document). https://docs.un.org

Verick, S. (2025). Artificial intelligence and the future of work: Job quality, power and inequality (ILO commentary/brief). International Labour Organization.

World Bank. (2025, July 24). Beyond the money: Why intentional investment in women matters. World Bank.

World Bank. (2025, October 17). Access to capital and women’s entrepreneurship (Systematic review). World Bank Open Knowledge Repository.

World Bank. (2025, November 11). More women have financial accounts, yet equal access and use remains… (Global Findex blog). World Bank Blogs.

World Economic Forum. (2025, May 13). Digital inclusion: A $5 trillion opportunity for women entrepreneurs. World Economic Forum.


