Director Carlien Scheele participates in a high-level debate hosted by Christian Veske, Gender Equality and Equal Treatment Commissioner of Estonia, on “Automated Decisions and Human Rights in Digital Europe” at the “Code of Equality: Building fair and trustworthy AI” event in Tallinn, Estonia, on 11 February 2026.


Dear colleagues,

I would like to begin by addressing a persistent myth: the idea that automated decision-making systems are neutral and objective.

How could they be? What goes into them comes out of them. And we do not live in a neutral or equal society. We live in a world still shaped by structural inequalities – inequalities that are now finding a new medium through which they can be reproduced and even amplified.

Automated systems are built on data drawn from our existing social and economic realities. Those realities include unequal access to work, education, income and care responsibilities. So when we feed this historical data into automated decision-making systems, particularly in areas such as recruitment, social services, education or access to training, we risk systematically disadvantaging women – even without any explicit intent to discriminate.

This is also about who designs these systems in the first place. Despite the digital shift creating new jobs in AI, women remain significantly underrepresented. Globally, only 26% of AI professionals are women. That imbalance matters. It shapes whose assumptions, whose life experiences, and whose perspectives are reflected in the technologies that increasingly shape our lives.

We could perhaps describe automated systems as neutral if we had already corrected the structural inequalities embedded in society. But we have not.

So what explains this mismatch of speeds between technological deployment and progress towards equality? The European Union is rightly prioritising competitiveness, productivity and efficiency. No one disputes the importance of strong, resilient economies. But from a gender equality perspective, automated decision-making systems are running before we have learned to walk. In the fast pursuit of efficiency, we risk automating structural inequality.

The risks are particularly heightened when systems rely on proxy variables such as employment gaps, part-time work or career breaks; when decision-making lacks transparency or meaningful human oversight; and when systems are deployed without equality impact assessments.

And here lies the central risk from a gender equality perspective: not only that automated systems are biased, but that they make structural inequality appear objective.

This brings us to a crucial question: which groups of women are most likely to be disadvantaged by systems built around so-called “standard” life courses?

From the outset, there is a broad lack of diversity among those designing and testing AI models. When development teams are largely composed of individuals from similar social and economic backgrounds, the assumptions embedded in these systems reflect a narrow experience of working life.

Many automated systems are calibrated around the idea of a linear, uninterrupted career path. But whose lives fall outside that path?

  • Women with care responsibilities and career breaks.
  • Women in lower-skilled or precarious employment, who face higher automation risks and fewer reskilling opportunities.
  • Women re-entering the labour market later in life.

And gender rarely operates alone. These systemic barriers intersect with socio-economic background, migrant status, age and other factors. When systems are designed around a “standard” life course that does not reflect lived reality, women whose lives are considered “non-standard” are penalised – particularly in access to jobs, training, benefits or education pathways.

It is important to add a caveat: in principle, anyone can be affected. These systems increasingly influence decision-making with limited oversight and unclear accountability. That alone should concern us.

Another important question is whether current approaches to algorithmic “fairness” are capable of capturing cumulative and intersectional gender discrimination.

The short answer is: not yet.

From a gender perspective, current fairness approaches are often too narrow. They struggle to capture cumulative disadvantage – the way inequalities build up over time through pay gaps, occupational segregation, unequal care responsibilities, digital exclusion and exposure to online violence.

Our research shows this clearly in hiring and performance management systems. Even when algorithms are designed with fairness in mind, they can penalise non-linear careers, part-time work or caregiving breaks – realities that affect many women.

Fairness also cannot be separated from working conditions or online environments. Algorithmic management, telework monitoring and social media systems all affect women differently, in ways that standard fairness metrics often fail to detect. In doing so, they risk exacerbating historical inequalities.

This is why fairness cannot be solved through technical fixes alone. Gender expertise and equality bodies must be involved alongside data scientists from the very beginning – from design to deployment and evaluation.

We must also recognise that fixing one bias does not fix them all. In fact, it can create a false sense of progress. These systems evolve rapidly. If we address only isolated symptoms, we may correct one issue while ten new ones emerge elsewhere. Without tackling the underlying structural inequalities through ongoing governance, fairness becomes reactive – always one step behind technological development.

We must also ask: are automated systems redistributing administrative power in ways that deepen existing inequalities? And if so, who has the authority and responsibility to intervene?

Yes, they are. Gender biases and stereotypes about who holds power are now at risk of being automated and scaled.

But responsibility does not lie with technology. It lies with people.

Public authorities deploying automated systems must take responsibility for their outcomes. Developers and providers are responsible for how systems are designed and trained. Regulators and equality bodies are responsible for enforcing safeguards and preventing discrimination.

The AI Act helps clarify these roles. It is not perfect – and its implementation is still evolving – but it is a global landmark. There is no equivalent framework elsewhere that so clearly embeds fundamental rights, including non-discrimination, into AI governance.

This is precisely why current efforts toward “simplification,” including discussions around the Digital Omnibus, must be monitored closely from a gender equality, social justice and fundamental rights perspective.

Finally, what should states do differently tomorrow – not only in the next regulatory cycle?

First, states must take responsibility for systems already in use. Course correction cannot wait for future technologies.

Second, they must invest in digital and AI skills, particularly for educators and public servants, with gender equality integrated from the outset.

Training must also reach those with limited digital skills – both women and men – using flexible formats that reflect real-life constraints.

And above all, states must ensure that protections keep pace with the speed of technological development.

Because if technology moves faster than equality, it is inequality that will scale.