Measuring Up: Learning from practice: Planning, monitoring and evaluating HIV-related advocacy
The complexity of advocacy programmes makes it challenging to design M&E systems that are both robust and practical, leaving these programmes at a disadvantage from the outset. The relational and intangible world of advocacy presents a range of barriers when it comes to demonstrating the contribution civil society actors make to change. When they are persuaded to change policy, policy-makers are not in the habit of publicly crediting the role of advocates in influencing their decisions. The space for influencing the targets of our advocacy and campaigns is occupied by numerous actors and calls for action that use a range of different strategies, some better coordinated than others. When a national government decides to fund community-led programmes that provide health services to sex workers for the first time, who has shaped that decision? What was the role of civil society? Which campaign, coalition or advocacy programme contributed to it?
The recently launched publication "Measuring Up: Learning from practice: Planning, monitoring and evaluating HIV-related advocacy" features the experiences of, and the lessons learnt from, the PITCH partnership on how best to plan, monitor and evaluate HIV-related advocacy. PITCH was the first large-scale HIV programme to invest solely in community advocacy. Running for five years (2016-20) in nine countries and globally, it worked with four marginalised and vulnerable groups to strengthen and connect community-led advocacy.
In a partnership the size and reach of PITCH, it is critical to set aside time during the programme inception phase to delve into key questions while designing a new M&E system and identifying appropriate advocacy M&E tools: Why do we want to use this tool? What data will it provide us with? What do we anticipate learning from this data? How regularly do we anticipate using this tool? Who will be responsible for using this tool and the data it produces?
Ultimately, we select advocacy M&E tools in order to learn about the effectiveness of our advocacy strategies and to better understand, and demonstrate, our contribution to change. This purpose needs to shape the answers to these questions.
Committing to using any advocacy M&E tool in a systematic and methodical way requires a significant investment of resources: funding, staff time and energy. The findings of the report support prioritising tools with which those responsible for using them already have experience and knowledge, reducing the need for extensive capacity strengthening at the start of a new programme. Where training or other capacity strengthening is needed, it should be provided as early as possible, and expectations regarding people's different roles and responsibilities should be clearly agreed at the same time to avoid confusion later in the programme.
Another key finding from this evaluation was that once an agreement is made to use a particular advocacy M&E tool as part of a programme, it is important that the tool is actually used as intended, both in the frequency of its application and in the agreed form of data collection, analysis and interpretation. Without repeated application there is no data to compare over time, and therefore no way to demonstrate what has changed or a programme's contribution to that change. An advocacy M&E tool can only add value to a programme M&E system if it is used methodically and in line with what has been agreed.
At times, the same tool can serve more than one function or purpose, as was seen in the PITCH partnership. For example, the Theory of Change approach (alongside a set of corresponding performance or results indicators) and Advocacy Asks were used both to help partners plan their advocacy strategies and activities and to monitor and report on the results of their advocacy work.
As with any complex programme, motivating staff to regularly apply advocacy planning and M&E tools is key.
It is critical to embed M&E tools in what we do day-to-day and actually use the data to make it all worthwhile.
Timetabling regular discussions and reflections on tool application and the resulting data is key to making that happen. Using the data regularly also helps us to analyse what we are doing and to quality-assure both the tool application and the data. To help achieve this, it is important to clarify early on which advocacy M&E tools and components of an M&E system are optional and which are mandatory.
To read the full report, click on the link below.