Step 5: Monitoring progress and evaluating a Gender Equality Plan
Monitoring and evaluation are important parts of the change process. As you know by now, a gender equality plan (GEP) will typically address several issues at once, leading to a complex set of measures. Nonetheless, effective monitoring and evaluation tools enable you to measure progress towards achieving the objectives, and provide an opportunity to learn and find out what needs to be improved. If objectives are oriented towards relevant progress, success or outreach indicators, it becomes visible whether the organisation is actually changing.
This might also increase the commitment of stakeholders to those objectives, and the accountability of those who implement the GEP. Having an appropriate monitoring and evaluation plan in place can support the effective implementation of measures, ensure accountability, and enhance your knowledge and understanding of ongoing changes. This way, you also know whether adjustments to your GEP are needed.
Besides these logical arguments for considering monitoring and evaluation from the very beginning, it is also a GEP requirement in Horizon Europe. You need to be aware that, in order to be eligible for Horizon Europe, ‘it is mandatory that organisations collect and publish disaggregated data on the sex and/or gender of personnel (and students, where relevant) and carry out annual reporting based on indicators’ (see Horizon Europe Guidance on Gender Equality Plans, pp. 23–27).
More explicitly, it is specified that ‘research funding organisations will need to examine their application evaluation procedures and consider the organisation’s broader programming and decision-making processes in terms of the outcomes and impact of funding decisions and associated policy frameworks that impact on gender equality in R & I [research and innovation]’ (see Horizon Europe Guidance on Gender Equality Plans, p. 14).
While this step comes only after planning and implementing your GEP, as laid out in the step-by-step guide (because that is when you start monitoring the effects of your measures), keep in mind that the monitoring and evaluation strategy needs to be set out beforehand.
Ideally, you considered which areas you want to focus on in step 2 when analysing and assessing the status quo in your organisation. In step 3, you then identified specific, measurable, attainable, realistic and time-related (SMART) targets and measures addressing these areas. In order to develop a monitoring and evaluation strategy, use the status quo assessment as a starting point. The results of this assessment will establish the baseline, which will allow you to monitor and evaluate your progress.
To develop an effective monitoring and evaluation strategy, you need to differentiate between monitoring targets and evaluation targets. To understand the difference, consider the following definitions, used by the gender equality monitoring tool (pp. 3–8) of the EU project ‘Taking a reflexive approach to gender equality for institutional transformation’ (TARGET).
The tool defines monitoring as a continuous process, in which data is systematically collected in order to provide management and key stakeholders with regular updates on the progress and achievement of objectives and the use of allocated funds.
Evaluation, on the other hand, relates to a systematic and objective assessment of an ongoing or completed project, programme or policy based on the monitoring data, providing lessons learnt for the planning of future measures.
In other words, ‘Monitoring ensures that the right thing is done, while evaluation ensures that the right outcomes are achieved’ (TARGET gender equality monitoring tool, p. 3), which means that the two go hand in hand. But as you can see from the definitions above, their specific targets differ: the monitoring targets focus on the specific outputs and related processes (implementation level), while the evaluation targets relate directly to the targets set out in your GEP (i.e. to the impact or outcome you wish to achieve) (strategic level). This is also why the respective time frames differ: while an evaluation is a more in-depth analysis, usually conducted at the end of your GEP or funding cycle (of course, interim evaluations are possible), your monitoring targets will be assessed at much shorter intervals, to inform you about the progress. Remember that Horizon Europe requires annual monitoring reports.
The factors that need to be considered in order to identify monitoring targets also generally apply to the evaluation level. However, know that achieving the desired outputs (as shown by the monitoring) does not necessarily lead to an achievement of the desired outcome or impact (as shown by the evaluation). This might be the case, for instance, if the output target of increasing the number of reviewers who have participated in gender training was met, but the training was not effective enough to ensure the desired outcome of increased gender competence among reviewers.
The monitoring and evaluation indicators that you will identify are concrete variables that you can measure in order to assess if a monitoring or evaluation target was reached. For more information on how the different dimensions relate to each other, consult the gender equality monitoring tool directly.
Now that you know about the basics of monitoring and evaluation (and the difference between the two), you can think about a concrete strategy. Note that it is always helpful to involve people who have experience with monitoring and evaluation in this process. You can also check how it was performed in other organisations, by looking at the examples in the gender equality in academia and research (GEAR) action toolbox.
When coming up with a strategy, use the logic model you identified in step 3 (embedded in a theory of change): the impact pathway of the measures implemented will help you identify what you want to monitor and evaluate. As for the details of your strategy, you need to differentiate between monitoring and evaluation. Consider the following steps in order to come up with a monitoring strategy.
- Identify concrete output indicators. For this purpose, take a look at your GEP and your status quo assessment and identify output indicators for each of your measures. A list of potential indicators is provided below. Consider that the collection of the relevant data needs to be feasible with the resources available to you.
- Select appropriate data collection instruments. In general, these will be the same instruments used in step 2 for your status quo assessment. Some data might be available on a regular basis from the human resources department; other data you might have to collect yourself. For instance, you can conduct an annual survey of staff to monitor change. The gender equality audit and monitoring (GEAM) tool provides ready-to-use surveys (implemented via LimeSurvey) for this purpose.
- Come up with a time frame. Your monitoring should take place annually, including annual monitoring reports published on the organisation website to meet the Horizon Europe requirements.
- Plan regular monitoring sessions. Involve the core and/or extended team responsible for your GEP. These meetings can be crucial for reflecting on the progress by looking at the monitoring data and exchanging experiences. This way, you will be able to react to potential issues and steer your measures in the desired direction.
As for creating your evaluation strategy, the general process will be similar, but some things need to be considered in more depth.
- Think about the context. When planning the evaluation of your GEP, you need to consider the context of your organisation. Relevant context factors were discussed in step 1. As the evaluation is more extensive than your annual monitoring, you need to consider, in particular, the time and (human and financial) resources available to you. These will also depend on the type and size of your research funding body.
- Identify additional impact indicators. While you will also consider output indicators in your evaluation, it will focus specifically on the impact of your implemented measures. Make sure to include both quantitative and qualitative indicators, as some measures cannot be properly assessed by looking at quantitative figures alone (e.g. number of cases of sexual harassment reported to / dealt with internally and in funded projects, increased gender awareness of researchers and staff, inclusive knowledge production). An additional criterion for gendered impact could be the gender proportion of first authors of research reports or publications.
- Use additional (qualitative) data collection instruments. In your final evaluation, you may want to add additional data collection instruments, such as individual interviews, focus groups, participatory workshops, document analyses or participant observation. These qualitative techniques allow you to gain a deeper understanding of the impact of your measures.
- Take your monitoring results into account. You will carry out your evaluation at the end of your GEP cycle. In your final data analysis, also include the results of your monitoring process.
Keep in mind that monitoring and evaluation should not be seen as a burden: they are a chance to learn, and the time invested will help ensure the success of your measures. When drawing upon external expertise to carry out your monitoring and evaluation process, we recommend bringing together these external evaluators and the people in charge of implementing change within the organisation, to ensure that instruments are adapted to your goals and constraints.
For additional resources on how to plan your monitoring and evaluation process, take a look at the tab ‘Tools and resources’.
A list of quantitative indicators (as suggested by the Horizon Europe Guidance on Gender Equality Plans) was provided in step 2, separated by internal and external stakeholders. The same indicators should be considered for continuously monitoring progress in your organisation.
Of course, the list is not exhaustive and indicators should be selected according to your specific targets and objectives, and in the context of your planned measures. An illustration of how to identify indicators regarding (1) gender in decision-making and (2) integrating gender in R & I content has been specified for research bodies by the European Commission project TARGET.
Possible context and implementation indicators for gender in decision-making for internal stakeholders are as follows:
- shares of women and men members of decision-making bodies;
- shares of women and men members of decision-making bodies who have participated in specific gender training and capacity building;
- number of gender training and capacity-building courses for members of decision-making bodies;
- self-assessment of increase in gender competence (e.g. through feedback surveys after training courses);
- share of women among newly appointed members of decision-making bodies.
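Most of the indicators above are simple shares computed from sex-disaggregated records. A minimal sketch of such a calculation, assuming a hypothetical record format with a `gender` field (adapt it to your own HR or membership data export):

```python
from collections import Counter

def shares_by_gender(members):
    """Compute the share of each gender among a list of member records.

    `members` is a list of dicts with a 'gender' key -- a hypothetical
    record format, not a prescribed one.
    """
    counts = Counter(m["gender"] for m in members)
    total = sum(counts.values())
    return {gender: count / total for gender, count in counts.items()}

# Example: a decision-making body of 10 members, 4 women and 6 men.
board = [{"gender": "woman"}] * 4 + [{"gender": "man"}] * 6
print(shares_by_gender(board))  # {'woman': 0.4, 'man': 0.6}
```

The same function can be applied to evaluation panels, newly appointed members or training participants by passing the corresponding list of records.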
Possible context and implementation indicators for gender in decision-making for external stakeholders are as follows:
- share of women in evaluation panels in relation to men;
- shares of women and men evaluators who have participated in specific gender training and capacity-building courses;
- number of gender training and capacity-building courses for evaluation panel members;
- share of members of evaluation panels with gender competence (e.g. men and women who have participated in gender training);
- share of women among newly appointed evaluation panel members.
Possible context and implementation indicators for the integration of the gender dimension into R & I content are as follows:
- description of calls in relation to the integration of gender (calls with focus on gender, calls that include integrated gender analysis as an aspect in research, calls that do not explicitly address the sex/gender dimension);
- description of (lack of) gender expertise in evaluation panels;
- gender composition of research teams (share of women in research teams);
- number of funded projects with a gender focus in relation to all funded projects (share of gender projects);
- numbers of women and men participants (applicants, reviewers) in awareness-raising activities or training on gender in R & I.
Based on the targets set out in your GEP, specific indicators need to be developed to establish a baseline and monitor progress. Such indicators help to build accountability for the successes or failures of implemented measures. The ‘Evaluation framework for promoting gender equality in research and innovation’ (EFFORTI) toolbox can support you in identifying quantitative (and qualitative) indicators to measure the output, outcome and impact of your measures.
You may also want to consider breaking down the data even more and looking into additional dimensions besides gender: looking at intersectionality can include individual or group features, such as a migrant or minority background, disabilities, low socioeconomic status or at risk of poverty, sexual orientation, and so on. Pay special attention to data protection issues if you plan on breaking down the data into small groups. Consider also any national regulations on collecting personal data.
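One common safeguard when reporting intersectional breakdowns is to suppress counts for groups below a minimum cell size, so that individuals cannot be identified. A minimal sketch of this idea; the threshold of five and the record format are illustrative assumptions, not a rule from this guide or from data protection law:

```python
from collections import Counter

def suppressed_counts(records, keys, min_cell=5):
    """Count records per combination of `keys`, replacing any count below
    `min_cell` with None so that small groups are not disclosed."""
    counts = Counter(tuple(r[k] for k in keys) for r in records)
    return {group: (n if n >= min_cell else None) for group, n in counts.items()}

# Hypothetical staff records broken down by gender and migrant background.
staff = (
    [{"gender": "woman", "background": "migrant"}] * 3
    + [{"gender": "woman", "background": "non-migrant"}] * 12
    + [{"gender": "man", "background": "non-migrant"}] * 9
)
print(suppressed_counts(staff, ["gender", "background"]))
# {('woman', 'migrant'): None, ('woman', 'non-migrant'): 12, ('man', 'non-migrant'): 9}
```

Whatever threshold you choose, document it and check it against the applicable national rules on personal data.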
Look at the tab ‘Tools and resources’ for examples of quantitative indicators identified by other organisations and resources provided by structural change projects.
Qualitative indicators are especially relevant to see whether your desired outcomes were reached. However, qualitative indicators can also give additional information on your ongoing progress and help you understand the dynamics of change (or lack thereof). Qualitative indicators may look at dimensions such as the following.
- Mainstreaming of gender knowledge. This can be measured, for instance, by the relevance given to knowledge creation on gender equality within the organisation, the institutionalisation of gender equality (in the form of dedicated programmes or departments), the dissemination of gender equality knowledge across disciplines, and so on.
- Awareness among different categories of staff and external stakeholders (reviewers, board/panel members, applicants). This can be measured by the attention given to gender equality by different categories of stakeholders through communication initiatives, codes of conduct and activities centred on gender-related aspects.
- The uptake of gender equality objectives set in your GEP. This can be monitored by observing the participation in and acceptance of your implemented measures and the (human and financial) resources allocated to support these measures.
- The actual transformation towards greater gender sensitivity. This should focus on the effects on both formal and informal practices due to the implemented measures. It may, for instance, be shown by increased attention being given to women’s ideas and perspectives in decision-making mechanisms that are dominated by men, or by implementing evaluation criteria in a gender-sensitive manner.
- The diffusion of a gender equality culture. This can be measured in terms of changing working conditions, but also verbal and non-verbal interactions and decision mechanisms (seating arrangements in panels). It could be reflected in changes regarding the management of work–life balance, awareness of sexual harassment and other aspects of gender-based violence, non-sexist communication, and so on.
Note that qualitative indicators have huge learning potential. They support self-reflexivity and may provide useful indications for the continuous enhancement of the implemented measures. They may also provide evidence that change happens and that gender equality and awareness are not out of reach. Techniques to collect qualitative data include individual interviews, focus groups, participatory workshops, document analyses and participant observation.
To help with the implementation of your monitoring and evaluation instruments, EU-funded structural change projects have developed a number of useful templates and ready-to-use resources. Switch to the tab ‘Tools and resources’ to view the full list. The ‘Gender equality in engineering through communication and commitment’ (GEECCO) data monitoring tool, for instance, provides an Excel template that includes indicators for a variety of targets, detailed definitions, formulas to calculate shares and other useful options. You may also want to take a look at the GEAR action toolbox for best-practice examples of monitoring and evaluation in other organisations.
Once you have collected and analysed the data, you can check if there have been any (significant) changes since your initial status quo assessment (baseline). You should also assess whether the monitoring and evaluation targets have been met. Discuss the results with your team and draw conclusions on what they mean for your GEP. During your monitoring, you might want to check whether you need to adjust some of your targets or the way that measures are implemented. During the final evaluation, you need to ask yourself what you can learn for the next GEP cycle.
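At its simplest, the comparison against the baseline amounts to computing, per indicator, the change since the status quo assessment and whether the target value set in your GEP has been reached. A minimal sketch with hypothetical indicator names and values:

```python
def progress_report(baseline, current, targets):
    """For each indicator, report the change since the baseline and
    whether the target value has been reached."""
    report = {}
    for name, target in targets.items():
        change = current[name] - baseline[name]
        report[name] = {
            "change": round(change, 3),
            "target_met": current[name] >= target,
        }
    return report

# Hypothetical indicator values (shares of women).
baseline = {"board_members": 0.25, "panel_members": 0.30}
current = {"board_members": 0.40, "panel_members": 0.33}
targets = {"board_members": 0.40, "panel_members": 0.40}
print(progress_report(baseline, current, targets))
```

A table of this kind, produced for each annual monitoring round, gives your team a concrete basis for discussing which measures need adjusting.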
Finally, you need to communicate your results.
- Provide annual monitoring reports as well as a final evaluation report. These reports should be published on the organisation website. The number of evaluation periods and reports depends on your individual strategy.
- Regularly update leadership/management about the results. This will be done through your annual monitoring and final reports. However, you might also want to involve them in meetings or update them more regularly, depending on your organisational structure.
- Inform other stakeholders within your organisation. It is not just the leadership that is interested in the progress of change in your organisation. Make sure to communicate the results to all relevant stakeholders. You might also want to keep them engaged by organising a meeting to present and discuss the results of your analysis (e.g. after the final evaluation of your GEP). Note that the monitoring and evaluation process is also an effective way to keep stakeholders (including leadership/management) engaged and to ensure their ongoing support for your measures. It also paves the way for the future by helping you design even more resolute measures for your next GEP.
- Consider external stakeholders. Your communication can also target policy stakeholders at regional or national level, professional associations or other institutional partners of your organisation.
When communicating the impact of your measures, know that there might be other positive side effects (or added value) of your implemented measures: the entire process may lead to a strengthened sense of community; more transparent recruitment, appraisal and evaluation procedures; stronger pluri-disciplinarity in research; and improved working conditions in general. All of this may be uncovered by your monitoring and evaluation process. In particular, your final evaluation will show the positive dynamics brought about by gender mainstreaming strategies and their inherent opportunities. Making these positive side effects visible can help strengthen your position and build the foundation for the next GEP cycle.
In order to view videos and webinars or further tools and resources on the topics discussed in step 5, switch between the respective tabs. Otherwise, click below to continue to the next step and read about how to ensure the sustainability of your measures. You can also go back to the previous step.
- The EU-funded project SPEAR prepared video presentations to help practitioners understand the steps involved in the process of implementing a GEP. The videos are based on the steps provided in this GEAR step-by-step guide. Note that there are also tasks for you to perform at the end of some of the videos, to check your understanding of the topics.
- The ‘Promoting gender balance and inclusion in research, innovation and training’ (PLOTINA) monitoring tool provides a useful tutorial on how to use the tool.
- The ACT GEAM tool also provides useful training videos.
The following are general tools and resources to be used by all research organisations. Funding bodies may find them particularly helpful for addressing internal stakeholders and processes. Specific tools and resources for funding bodies can be found in the action toolbox in Section 4.3.10.
Guidelines for creating a monitoring and evaluation strategy
- To get an idea about indicators for research bodies that are relevant to decision-making and the integration of gender in R & I, have a look at the EU-funded project TARGET.
- To assess structural change, the EU-funded project ‘Transforming organisational culture for gender equality in research and innovation’ (GENOVATE) has developed comprehensive guidelines for evaluating gender equality action plans. These guidelines have several specific features:
- they fully take inputs from the evaluation literature into account, carefully identifying the steps of the evaluation process and defining a theory of change adapted to the project’s purpose;
- they also draw upon the insights from the critical analysis of gender mainstreaming implementation in a number of domains, thus highlighting the specific hindrances and resistance faced by social and organisational change aiming to achieve gender equality;
- furthermore, they focus on three fundamental dimensions of change: ideas, structures and people.
- Another very comprehensive guide, A model for building a gender equality index for academic institutions, was written in the course of the EU-funded project GenderTime. This 2016 guide addresses the problem of measuring gender equality in academia. It starts by defining the problems and arguing for the importance of appropriate monitoring and evaluation, then provides detailed definitions and, finally, introduces different approaches. It also describes in detail how to build a system of indicators.
- The EFFORTI toolbox provides a framework for ‘a wide range of stakeholders – ministries, funding agencies, programme owners, equality officers, etc. – to conduct a sound and comprehensive evaluation of gender equality, but also research and innovation outputs, outcomes and impacts of gender equality measures’.
- The ‘Gender equality in information science and technology’ (EQUAL-IST) project published a report based on the experiences of monitoring and evaluating GEPs in seven research-performing organisations. The report presents the assessment methodology and indicators used in the monitoring process and provides a monitoring template plan.
Examples of useful monitoring and evaluation indicators
- The gender equality monitoring tool of the EU-funded project TARGET provides multiple examples of how to define indicators for different target areas. It builds on a logic model, showing the pathway from the input and set activities to the different outputs, outcomes and general impact. For each dimension in this model, example targets and indicators are provided.
- The EU-funded project PLOTINA created a monitoring tool based on 10 core indicators and 40 specific indicators, which can be selected based on the focus of your GEP. Check out the full list of indicators.
- The EU-funded project ‘Female empowerment in science and technology academia’ (FESTA) provided a thorough guide on quantitative indicators and methodology in its FESTA toolkit.
- The ‘Gender equality network in the European research area’ (GENERA) planning–action–monitoring (PAM) tool can help you find measures, indicators and targets for GEPs in the field of physics. You can choose to click through the online tool or download the entire PAM tool as a PDF document.
Ready-to-use monitoring and evaluation tools
- The PLOTINA monitoring tool is an online tool that can help you measure and visualise your progress over different periods. It consists of a virtual survey and a visualisation tool, presenting your data and an overall indicator. Watch the tutorial video in the tab ‘Videos and webinars’ to get started.
- See also the GEAM survey by the EU-funded project ACT. The survey is readily available in multiple languages and is already programmed via LimeSurvey.
- The EU-funded project GEECCO developed various evaluation and monitoring materials, including an Excel template and a PowerPoint tutorial for collecting and analysing sex-disaggregated data in research-performing organisations. The three core areas covered are (1) decision-making processes and bodies, (2) recruitment and career development of women researchers and staff and (3) the sex/gender dimension in research and teaching content.