Marking International Women’s Day and the launch of the EU Gender Equality Strategy 2026–2030, the Cyprus Presidency convened a discussion on 5 March 2026 in the European Parliament with Commissioner for Equality Hadja Lahbib, EIGE Director Carlien Scheele and others on ensuring digital transformation strengthens – rather than undermines – gender equality. Below is Carlien’s perspective from the intervention.


Q1: What are your initial reactions to the presentation of the new European Gender Equality Strategy, and which areas do you think should be prioritised?

My initial reaction to the new Gender Equality Strategy is one of real encouragement!

Staying firm and true to the objectives in the Roadmap for Women’s Rights, we have a strong and very comprehensive framework. I would also like to underline the clear alignment between the Roadmap, the new Gender Equality Strategy and the Gender Equality Index, as they all focus on the same core areas that are essential for measuring and advancing equality across the EU.

Each area included in the Strategy is important, but I will highlight the priorities that align most closely with EIGE's work: tackling gender-based violence, closing the gender pay and care gaps, promoting gender balance in leadership and decision-making, and ensuring inclusive digital and social policies.

In short, building on the achievements of its predecessor, we have an ambitious and comprehensive strategy to keep us determined and well directed until 2030. 

On the topic of health, I would also like to squeeze in a much-deserved congratulations to the European Commission for making a historic decision giving over 20 million women in Europe access to safe abortion. Now it is about seeing this translate, on the ground, into real rights, equality and dignity.

Q2: Can you share an incident or case that has come to your attention which illustrates how AI technologies can be misused to promote or enable sexual abuse, and what key lessons policymakers, platforms, or users should take from it?

We are living through a moment where technology is transforming how we live, work and relate to one another – and that transformation carries both opportunity and risk.

I am very proud to say that my Agency has been supporting the Cypriot Presidency with a forthcoming report on cyber violence against girls. This report reflects our shared commitment to understanding how digital environments are shaping the experiences of younger generations – and to ensuring that policy responses keep pace with these evolving realities.

Girls participating in the cyber violence study described experiencing or witnessing cyber violence across a wide range of digital platforms. They stressed that abuse was not confined to a single site or app; instead, it adapted to the technical features, cultures and norms of each platform. In other words, the type of violence experienced was often shaped by what the platform enabled – whether anonymity, image-sharing, private messaging, or real-time interaction.

Girls also expressed concern over algorithm-driven risks, particularly the rise of AI-generated deepfakes and manipulated content that reinforced sexist and violent norms.

A recent example that has shaped many of our discussions is the way generative AI tools, including Grok, have been misused to produce sexualised or abusive content at scale.

What makes these cases so striking is not only the technology itself – it is the speed, the reach, and the normalisation of harm that can follow.

For victims, the consequences are real: anxiety, reputational damage, fear, and a sense that control over one’s own image or identity has been taken away.

Crucially, girls also described a profound sense that there is no way to escape this type of violence. Many explained that AI‑enabled abuse is deliberately used against them as a form of ‘punishment’ - for example, when a girl refuses to engage in a relationship or asserts boundaries. This creates a climate of coercion, where the threat of abuse is used to control behaviour and silence resistance.

When asked about prevention efforts during focus groups, girls expressed frustration with school-based campaigns, adult responses and institutional mechanisms that they perceived as disconnected from their digital realities. Many described adults – including parents and teachers – as lacking the awareness, sensitivity and training to respond effectively to cyber violence.

Across countries, most girls agreed that prevention strategies are limited, outdated, or poorly implemented. School-based initiatives were often described as superficial, repetitive, and disconnected from young people’s digital realities. Brief lectures, repeated campaigns, and school assemblies delivered by the same individuals were seen as largely ineffective. 

Policymakers need to anticipate misuse in regulation, not only react to it. And they need to speak with young people in order to understand their digital realities and base their solutions on young people’s lived experience.

For platforms and developers: if a risk is foreseeable, it is preventable – and therefore a responsibility.

And for society more broadly: we must treat online abuse with the same seriousness as offline violence, because the impact on lives is no less profound.

Q3: Technology and AI are advancing rapidly and have become integral to everyday life. At the same time, these developments can give rise to new forms of online harm. While attention has traditionally focused on the non-consensual sharing of intimate content, emerging challenges now include the misuse of AI to generate sexual images without consent. What measures are you currently implementing to combat this phenomenon? And what responsibility do the companies developing these AI tools bear when their tools allow such use?

For many girls, cyber violence is not an occasional threat but a persistent feature of their daily lives, shaping how they communicate and engage online. Girls described constant exposure to harmful behaviours that make digital spaces feel unpredictable and unsafe. Digital and physical realms are increasingly intertwined, blurring boundaries and intensifying harm.

One of the clearest emerging risks is the use of AI to create non-consensual sexual content. This is not a niche issue. It is a profound violation of dignity, privacy and autonomy.

And this is something the Strategy explicitly references. It underlines that these harms push women and girls out of digital spaces, undermining their participation in public debate, democracy and the digital economy.

In response, our work is focusing on three fronts.

First, evidence – understanding how these forms of abuse occur, who is most affected, and how systems respond.

Second, policy guidance – ensuring that emerging forms of violence are recognised in law and strategy, not treated as grey areas.

And third, prevention – including digital literacy and consent education, because technology does not create harmful norms, but it can amplify them.

But we must also be very clear: responsibility does not lie only with users.

For more than a decade, digital platforms have been masquerading as caring about human rights, women and children. Now it has become crystal clear that they do not. Tech giants need to be held accountable in Europe.

This requires a comprehensive approach: stronger cooperation with digital platforms, clearer definitions and criminalisation, and sustained investment in research on emerging forms of violence. It also means recognising the links between online misogyny and broader threats to gender equality.

Companies developing AI tools have a decisive role. Safety cannot be an afterthought or a public-relations exercise. It must be built into design, deployment and governance.

That means risk assessments that explicitly consider gendered harms, safeguards that prevent misuse, rapid response systems when harm occurs, and real transparency with regulators and researchers.

Innovation and accountability must go hand in hand – otherwise trust in the digital environment will continue to erode.

Q4: Do you have any final remarks? 

Perhaps the biggest lesson is this: AI-enabled abuse is not just a technology story. It is a power story. It reflects whose rights are protected, whose voices are heard, and whose safety is prioritised.

If we keep that perspective, our responses will deliver on what is right and just.