Five steps to measuring unintended effects of development interventions

Author: Hur Hassnain

Previous #Evalcrisis blog posts have discussed the pros and cons of using ICT tools to collect data remotely during the current crisis, as well as in hard-to-reach contexts, and have highlighted ways to mitigate the associated risks. Building on those posts, this blog highlights the importance of identifying and measuring unintended effects in monitoring and evaluation activities, especially during the current global crisis caused by COVID-19, which is likely to exacerbate and prolong existing crises and conflicts.

What is meant by unintended effects?

Identification of unintended effects of an intervention is widely endorsed by the Development Assistance Committee (DAC) of the OECD, which defines impacts as “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended” (OECD-DAC, 2002: 24). Two brief examples describing the unintended consequences of an intervention are available on the Better Evaluation webpage.

What can happen if unintended effects and consequences are overlooked?

Focusing only on expected results can easily lead to overlooking unexpected or unintended effects, both positive and negative. The consequences of negative unintended effects can be particularly severe in contexts affected by Fragility, Conflict and Violence (FCV), especially in the current time of global crisis.

So, how can we ensure that evaluation frameworks include the measurement of unintended effects and consequences?

Below are a few key steps that can be taken to ensure that evaluations account for the unintended effects and consequences of an intervention:

1) Include the assessment of unintended effects in the evaluation design through the theory of change

The OECD DAC definition of impact, by its nature, calls for investing time and resources in this direction.

The evaluation Terms of Reference (ToR) should explicitly indicate, in specific and easy-to-understand terms, the side effects to be identified, if these are already known to the programme. If they are not known, the evaluation can ‘reveal’ and unpack the unintended effects.

Development and humanitarian interventions should identify and define potential risks at the design stage, in the Theory of Change (ToC) or in the logical framework. However, when the design is weak (e.g., in the absence of a theory of change or a robust logical framework) and/or when the risk analysis is missing, generic, incomplete or out of date, an evaluator needs to adopt a broader lens when scanning for unintended effects. In these cases, reconstructing the Theory of Change ex post, including an analysis of the risks and of the unintended positive and negative effects of the intervention, is good practice, especially if the context has been changing and/or the intervention may have affected the political or power equilibrium, conflict or violence.

2) Adaptive management actions undertaken during programme implementation should inform the evaluation design

Compared to evaluation, an aid agency’s monitoring activities have proven beneficial for tapping into the unintended consequences of an intervention and supporting adaptive management. If unintended effects are identified during implementation, programmes can be adapted quickly to minimise harm and maximise benefits.

In light of the above, it is useful to collect examples of how monitoring data supported adaptive programming by informing decision-makers, especially where this resulted in timely programme improvements. These examples will point you towards an understanding of the programme’s unintended effects.

To assess the extent to which the programme was adaptive, e.g., in the face of conflict and a rapidly changing context, you can include the following questions in your evaluation framework:

  • Was a monitoring system established and locally owned?
  • Was the intervention affected by exogenous shocks?
  • Were the intervention goals, indicators and targets revised with relevant stakeholders during implementation and as the context changed?
  • Have the funder and other key stakeholders been understanding and flexible?
  • Were the monitoring tools and resources bottom-up or co-produced?
  • Was an enabling environment for learning developed?
  • Did the project monitor growing gender- and protection-related risks, for example increases in forced recruitment into armed forces, or increases/decreases in sexual or domestic violence?
  • Were local communities part of, or participants in, the monitoring activities?
  • Did the intervention establish a strong feedback mechanism, whose data were analysed and responded to, in order to assess the interaction between the intervention and the context?

3) Identify your data sources upfront…

Invest time in identifying the institutions operating where the evaluation is being conducted: they know the context well and can be a source of data and information about the unintended effects that the evaluation is studying. These institutions include, but are not limited to, local and national think tanks, academic institutions, civil society organizations, UN bodies and national statistical offices.

4) …and consult stakeholders during implementation

Engage and consult stakeholders specifically on unintended effects, in light of how change did or did not happen as a result of the intervention.

5) Use mixed methods, participatory tools, and/or emergent approaches

Questions on unintended effects should typically be integrated into most or all of the data collection instruments used in an evaluation, albeit in different ways. For example, surveys can be a useful way to identify potential unintended effects, which can later be validated and interpreted during interviews and focus group discussions. It often takes some trial and error to find the best way to phrase questions on unintended effects in a particular context, so be sure to pilot test and adapt the instruments. Note also that data on unintended effects often arise in answer to more general questions, e.g. ‘Was there anything about this programme that surprised you?’ or ‘Were there any aspects of the programme that did not go as well as hoped?’

Going beyond the design of tools and instruments, if there is higher-level flexibility in the evaluation plan to consider emergent approaches, goal-free evaluation approaches such as Outcome Harvesting and Most Significant Change can be particularly useful for identifying unintended effects. This is because they do not start from an assumed set of expected results; instead, they simply explore what changes have occurred in the context and how the programme in question contributed to those changes.

Further, Developmental Evaluation can be helpful if it assists implementers to identify and adapt to unintended effects in ‘real time’, rather than waiting for an end-of-project evaluation. The identification of unintended effects is most beneficial when it leads to learning that improves outcomes and reduces risks for future participants of the same programme, or others like it.

Disclaimer: Views expressed here are those of the author and do not necessarily reflect the opinion or position of their employer.

This blog was first published by Capacity4Dev, the European Commission’s knowledge sharing platform: https://europa.eu/capacity4dev/devco-ess/news/evalcrisis-blog-05-five-steps-measuring-unintended-effects-development-interventions