Author: Hur Hassnain
The monitoring and evaluation field in humanitarian aid has advanced from traditional approaches to new methods, such as using drones to collect data. However, we have not yet answered the key questions: who really benefits from the learning that results, and who is accountable?
I have seen many evaluation wrap-up meetings held in capital cities, targeted mainly at public figures and leaders. This was the first time I saw evaluators share findings with the people who had benefitted from the project and who will support the scaling up of that learning for deeper and longer-lasting impact on the ground.
This blog post is about a recent experience of sharing the findings of a final evaluation with survey respondents in rural Pakistan. The project evaluated is Y Care International’s “Improving financial resilience and promoting gender equity of disadvantaged young women in marginalised communities of Umerkot, Pakistan”.
Since the project aimed to make people change-agents rather than just recipients of aid, the evaluation design purposely included a validation workshop to share the findings with communities, so they could learn from them and understand the importance of monitoring and evaluation in providing solid evidence of project results.
At the end of the meeting, community members were aware of what worked and what didn’t in the project, as well as the best possible ways for future improvements. One participant said, ‘This validation workshop was special, since it was the first time after a survey that the evaluation team shared the results with us’.
Let us start with organisation and budgeting. If you do not have the money, you cannot organise a field-level evaluation workshop. I am not referring to big budgets, just enough set aside in the design of the project to cover an evaluator’s travel and the logistics. A validation workshop is much more useful for beneficiaries than a 60-page evaluation report: the usefulness of such reports is limited, as they are usually not translated into local languages and seldom reach the project’s target groups.
The starting point is preparing the workshop agenda. Gather the necessary tools, prepare sessions around the key evaluation questions and adapt them to the participants’ differences and needs (e.g. culture and language). To ensure active participation we used energisers, ice-breakers and team-building techniques. We assigned note-takers and facilitators in advance.
We gathered participants from eight villages in two locations in Umerkot and invited them to one workshop with the project’s frontline staff. When selecting which village would host the meeting, we considered the availability of local transport from neighbouring villages and of a meeting space, as well as the interest and willingness of community leaders.
People in remote, poor rural villages are busy, so time is a constraint you must address. Select your most relevant findings and present them concisely. We converted the data into reader-friendly graphs and designed simple questions to make the findings easy to understand. People had fun and engaged, and the community voices helped the evaluators make further sense of the data.
We encouraged women, young people and the poorest in the community to participate in the workshop so that “no one is left behind”. As a result, more women participated than men, and they were active in the dialogue on the best way forward.
To overcome the language barrier and to engage everyone in the discussion, the workshop was organised in the local language, Sindhi. The lead organiser of the workshop also spoke the local language.
The evaluation used Outcome Harvesting, a method, together with Sprockler, a mobile data collection tool. The facilitators shared the harvested outcomes with the beneficiaries, and the use of Sprockler provided immediate, ready-to-use information. The graphs were made of small colour-coded dots, each representing an individual and their most significant change story.
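Sprockler’s own visuals are part of the tool itself, but the underlying idea, one dot per respondent, coloured by the change they reported, is easy to sketch. The snippet below is only a minimal illustration with made-up categories and numbers, not the tool’s actual output or the evaluation’s data.

```python
# A minimal, illustrative sketch of a colour-coded dot chart:
# each dot stands for one respondent's story, grouped by reported outcome.
# Categories and counts are hypothetical, for illustration only.
import matplotlib.pyplot as plt
import numpy as np

outcomes = ["More income", "Better mobility", "Seen as role model", "No change"]
counts = [14, 9, 6, 3]  # illustrative numbers, not evaluation data
colors = ["#2a9d8f", "#e9c46a", "#f4a261", "#e76f51"]

fig, ax = plt.subplots(figsize=(7, 3))
for row, (label, n, colour) in enumerate(zip(outcomes, counts, colors)):
    # one dot per person, laid out left to right on its own row
    ax.scatter(np.arange(n), np.full(n, row), s=120, color=colour)
    ax.text(-1.5, row, f"{label} ({n})", ha="right", va="center")

ax.set_xlim(-12, max(counts) + 1)
ax.set_xticks([])
ax.set_yticks([])
for spine in ax.spines.values():
    spine.set_visible(False)
ax.set_title("Each dot is one person's story (illustrative data)")
plt.tight_layout()
plt.show()
```

A chart like this can be read by participants with little or no literacy in written reports, which is what made the dot-based graphs useful in the workshop setting.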
To make sense of the data gathered, we also asked the workshop participants to comment on the evaluation results so we could better understand the causal links behind the impact. For instance, women from non-Muslim communities (which are the poorest) reported that the project changed how they are perceived in the community: they have more resources, better control over them and increased mobility, which in turn allowed better access to markets and productive resources. These women have also come to be viewed as role models in their villages.
Moreover, the workshop provided the opportunity to share lessons learnt from the project with the wider community, including non-beneficiaries. In the case of economic resilience, for example, those who were not doing well benefited from seeing why others had succeeded in improving their livelihoods and where there was still room for improvement.
In conclusion, a validation workshop in the field with beneficiaries allows us to close the learning loop by extending ownership of the evaluation findings, thereby empowering communities and building trust with the implementing agencies. It draws attention to the importance of establishing solid Monitoring & Evaluation systems and to the learning those systems generate, which is key to informing the design of future development interventions for better and deeper impact on the ground.