In a recent article published in Foreign Policy titled “Why Everyone Hates Think Tanks”, the authors ask, “if think tank experts have such great insight into policy, why are the outcomes so terrible so much of the time?”
There’s a simple way to begin rectifying this: put more effort into evaluation.
With better measurement and evaluation, when things don’t go according to plan, you can work out what’s gone wrong and how to fix it.
For many policy organisations, however, this feels like a wild idea – and we’ve written about this in the past.
For this article, I have spoken to policy practitioners across a wide range of domains and geographies. Most report the same experience: it’s hard to get buy-in from leadership to invest in Measurement, Evaluation and Learning (MEL), but when it is done, it proves hugely useful to individual programmes and to the organisation as a whole.
With this in mind, we want to share some thinking and insights to help policy organisations progress on their MEL journey. This article sets out what evaluation can help you understand, and the benefits it brings.
Where to begin with MEL?
There are two ways of thinking about this: a) at the organisational level – is your organisation achieving its goals? – and b) at the individual policy or programme level – did that intervention create the change you intended?
Below we break this down into questions you could look to answer through evaluation.
A) First, evaluate whether your organisation is actually effective at influencing policy debate and adoption. Some questions this could answer include:
- How influential are you on your focus areas?
- Do you change policy discourse and legislation? How do you know this?
- How effective are you at getting policy discussed in the national discourse and ultimately implemented by policymakers?
- How do you know that you caused this change?
- How could you do it more efficiently in future?
B) Second, evaluate whether the policies you propose are any good. Surely the end goal must be to encourage the implementation of policies that work, rather than policy for policy’s sake? Some questions this could answer include:
- Did your policy work as intended?
- If not, what happened?
- What influenced whether or not the policy went according to plan? Was this what you expected?
- What could be done differently next time?
So why don’t we evaluate?
Two common arguments against measurement and evaluation are that it is costly and difficult. While these points have some merit, they aren’t sufficient grounds to disregard it altogether.
On the cost side, it is true that running an evaluation project will always cost more than not running one at all. But this overlooks the benefits.
Proper evaluation will show you whether you are achieving your objectives or misdirecting your efforts. By understanding how an intervention is working, you can course-correct and increase your impact, rather than continuing to do things that don’t work simply because that’s the way things have always been done.
Evaluation is mostly difficult in that it requires a different way of thinking about what policy organisations do. Once the processes and tools used in evaluation – such as the theory of change and the evaluation framework, which we’ll cover in an upcoming article – are part of your toolkit, you’ll wonder how you ever tried to change anything without them.
The benefits outweigh the costs
If you can show that you have impact, you can attract more funding and/or support from decision-makers for your work. In an increasingly results-driven world, if you cannot demonstrate these results, you will struggle to compete with those who can.
At heart, achieving more and better impact is what all policy organisations aim to do, whether in a niche area of interest or in addressing much larger societal or global problems.
In the second part of this series, we will walk through what an evaluation project looks like in practice, working through the four key steps.