DECONSTRUCTING EVALUATIONS
Let's start with evaluations. Now, evaluations can cover many different aspects of a project, right from its initial design through to lessons learned. And in the video you can see some of the typical sections that might appear in an evaluation.
Obviously, your evaluation itself depends on the specific terms of reference that your donor has given you. But we can see things such as project preparation and design, its relevance, efficiency, effectiveness, and outcome and impact, those higher-level results. And, looking ahead, quality, sustainability, replicability, findings and recommendations, and lessons learned, things that can be used in the future to make other projects run more effectively.
So, a good approach to evaluations is, firstly, to understand the terms of reference. What are the evaluation requirements? Are you looking at strengths and weaknesses? Are you looking at quality, sustainability, effectiveness, efficiency?
And map that out. Make it visual. And for each section, we want to develop our questions, our key evaluation questions.
If we have the questions, then we need to decide where we are going to find the answers to those questions. Who has that information? And what method shall we use to collect it? Is it secondary data, a questionnaire, an interview, a focus group discussion, observation, or a case study?
And finally, we then can design our data gathering tools, our questionnaires, our focus group discussion questions, and so on.
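To make that mapping concrete, here is a minimal sketch in Python. The structure and field names are my own illustration, not part of any evaluation standard, but they show how each key evaluation question might be paired with its likely sources and collection method:

```python
from dataclasses import dataclass

@dataclass
class KEQ:
    """One key evaluation question and its data-gathering plan."""
    section: str        # evaluation section, e.g. "Efficiency"
    question: str       # the key evaluation question itself
    sources: list       # who holds the answer
    method: str         # secondary data, questionnaire, interview,
                        # focus group discussion, observation, or case study

# A small fragment of an evaluation map (example content only).
evaluation_map = [
    KEQ("Efficiency", "Were services delivered on time and to cost?",
        ["project manager", "finance records"], "secondary data"),
    KEQ("Relevance", "Did the project continue to meet the target group's needs?",
        ["community members", "donor"], "focus group discussion"),
]

# Print the map: each question, its method, and who to ask.
for keq in evaluation_map:
    print(f"{keq.section}: {keq.question} -> {keq.method} ({', '.join(keq.sources)})")
```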
So, as I've said, your particular evaluation may have different sections or your donors may have different requirements, but commonly we'll see these things:
- Project Preparation and Design
- Relevance
- Efficiency
- Effectiveness
- Outcome and Impact
- Major Findings and Recommendations
- Quality, Sustainability and Replicability
- Lessons Learned
When we look at Project Preparation and Design, we're looking at everything that was done before the project started in terms of the activities that were carried out. Who was consulted? What research was done? Were those activities relevant? And did their results actually appear in the final project itself? Did we miss anything? And the initial project plan, was it relevant? Was it appropriate to the target group's perceived needs and to the development problem identified?
Project Relevance is dynamic. It changes over time. So a project that may have been relevant at its initial stage does need to adapt itself over time to continue to meet the needs of the target group. And the section on relevance will be asking those questions. It will be going to key informants, both donors and within the community, and finding out if that project continued to be relevant throughout its lifespan.
Efficiency looks at our use of resources. It's about timeliness, delivering services on time and to cost. Did we do things right? Were the activities conducted smoothly and efficiently? Was there a more economical way to get the same results? Did things happen as scheduled? And was the schedule appropriate? Did things happen at the right time? And to what extent did the management structure support efficient delivery of services? And did we use the right methods, the right methodology for our target group and to address that particular development problem?
Where efficiency looks at whether we did things right, Effectiveness looks at whether we did the right things. Our initial assumptions, were they correct? And if not, how did those affect the project in terms of achieving its intended results? And how did we respond? How did we respond to changes over time? And should the assumptions of the project be reviewed and changed? This is particularly important during a review when the project still has a significant time to continue. And to what extent was that project strategy effective in achieving the outcome? The outcome being what the project has promised to deliver by its end.
Let's just pause a moment and consider those two concepts side by side. Some people get them a little confused, but the difference is very clear: efficiency is all about how quickly and how economically we get something done. Efficiency tells us that getting the job done quickly and to cost is more important than getting it done perfectly at a higher cost later. Efficiency very much looks at value for money, how many inputs were used to achieve the results. Effectiveness, on the other hand, is much more concerned with higher-level results and the quality of those results. When we look at effectiveness, we're saying that getting something done perfectly, or as well as possible, is much more important than its actual cost. And there is very often a trade-off between efficiency and effectiveness. We want to provide value for money, but we also want to deliver effective development results.
Another part of your evaluation may discuss the Outcome and Impact, the higher-level change: the outcome being what your project has promised to deliver, and the impact being the shared development goal, shared with governments and other development organizations. And we need to look at what the actual outcomes were, what actual change happened for the end users, for your target group or your primary stakeholders.
And these can be both intended outcomes and unintended, things that we planned and things that perhaps surprised us. And we also need to consider the positive and negative impacts. Again, they can be planned, unplanned, positive or negative.
Let's look at those in a little more detail before we move on. First, how are they different, outcome and impact? The outcome is your agreed delivery. It's the change that your project promises to deliver by its end. And these are obviously results that we want, and they are results that we expect.
At the impact level, measurement becomes more difficult, and attribution is more difficult. We can't say that a particular impact was achieved solely because of our project. Impact is a shared goal, a shared development goal, between your project and other projects working with the same community or in the same geographical area or the same sector.
To illustrate this further, we have intended, unintended, positive and negative change. Now, positive unintended impact often represents surprises and initiatives, things that went well, but we didn't actually plan for them or expect them. And it's important to learn from these, so that those things can be replicated and built on in the future. To give one example, an NGO that provided latrines to a village community as part of a health project actually found that the community were using those outdoor latrines to keep their livestock, their goats and their chickens safe. That represents a very interesting learning from that community about what they saw as important.
Sometimes we may see a negative impact, which reflects poorly planned or poorly monitored projects. One NGO that wanted to provide water pumps in a village so that the girls and women did not have to go down to the river every day to collect water was very successful in terms of its stated outcome. But by supplying those pumps, what happened was that they took away a forum where the women would gather together each day at the riverside and they would support each other. They would share information, they would seek advice and seek support. So they inadvertently had a negative impact by affecting an existing social structure.
The Findings and Recommendations of any report are really where the action takes place. This section talks about the successes, the failures, what challenges were met, and how they were addressed or can be addressed. And the recommendations tell us what actions or changes are needed to keep the project moving forward towards its outcome.
And Lessons Learned tries to draw some kind of learning from challenges overcome and changes made. For example, a microfinance project working with women may have found that their husbands were not supportive and realized that it had to involve men in the project in order to stay on track. The lesson learned from this was that when projects are focused on women, men also have to be included as stakeholders.
And Quality, Sustainability and Replicability look at to what extent we can reproduce or replicate what we have learned in this particular project. What is the exit strategy? What will happen after project funding has stopped? Will that positive change be sustained? And what are the quality assurance measures? How are we going to ensure that the quality of the change is also sustained after the project's end?
When we deconstruct this evaluation, what we have now is a lot of questions. And for each of those questions, we then need to identify who has the answer and how to get that information, and start to gather it into our evaluation.
So the steps, then, are to understand the terms of reference or the evaluation requirements. If there's a template, map that out. Develop your key evaluation questions, or KEQs. Identify who has the data, select the methods for gathering it, and then design your data collection tools.
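As a small, hypothetical continuation of the earlier sketch, the last two steps, selecting methods and designing tools, amount to grouping the questions by method so that each data-gathering tool covers every question it must answer:

```python
from collections import defaultdict

# (method, question) pairs taken from an illustrative evaluation map.
keqs = [
    ("interview", "Was there a more economical way to get the same results?"),
    ("focus group discussion", "Did the project continue to meet the target group's needs?"),
    ("interview", "Should the project's assumptions be reviewed and changed?"),
]

# Group the questions by collection method: one tool (questionnaire,
# interview guide, FGD guide) is then designed per method.
tools = defaultdict(list)
for method, question in keqs:
    tools[method].append(question)

for method, questions in tools.items():
    print(f"--- {method} guide ---")
    for q in questions:
        print(f"  * {q}")
```

Grouping by method like this is simply a convenience: each tool is designed once and covers all the questions it is responsible for.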
Here's one example of an evaluation which I myself conducted. It was for the Regional Health Initiative for Youth in Asia, a five-year project involving the work of seven NGOs. And these were the specific terms of reference: it was to be an assessment and critical review. The assessment was to look at the relevance, the effectiveness, the efficiency and the sustainability, and the critical review was to look at the achievements, any issues, any gaps and what the next steps would be afterwards.
We actually used the map you can see in the video to:
- Build all of our evaluation questions
- Identify who had that information
- Gather our information into the map itself
- Structure the report
- Make the presentation back to donors
So the map can guide us right through the process once we've deconstructed and understood what the report requires, its objectives, and the action that it will lead to.