Understanding Causes

Most evaluations need to investigate what is causing the outcomes and impacts of an intervention. (Some process evaluations assume that certain activities are contributing to intended outcomes without investigating this link.)

Sometimes it is useful to think about this in terms of ‘causal attribution’ – did the intervention cause the outcomes and impacts that have been observed?  In many cases, however, the outcomes and impacts have been caused by a combination of programs, or by a program in combination with other factors.

In such cases it can be more useful to think about 'causal contribution' – did the intervention contribute to the outcomes and impacts that have been observed?

Tasks

  1. Check that the results support causal attribution

One strategy for causal inference is to check that the data are consistent with what we would expect if the intervention were being effective. This involves checking not only whether the results occurred, but also their timing and specificity.

  2. Compare the results to the counterfactual

Another strategy for assessing the impact of an intervention is to compare it to an estimate of what would have happened without the intervention. Options include the use of control groups, comparison groups and expert predictions. A minimal numeric sketch of this comparison follows.
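To make the logic concrete, here is a minimal sketch of that comparison, with invented figures and hypothetical variable names (not from the source): the estimated impact is simply the observed result minus the counterfactual estimate.

```python
# Minimal sketch of a counterfactual comparison; all figures are hypothetical.

observed_outcome = 72.0         # e.g. participants' average score after the intervention
counterfactual_estimate = 65.0  # estimate of the score without the intervention,
                                # from a control group, comparison group or expert prediction

estimated_impact = observed_outcome - counterfactual_estimate
print(f"Estimated impact: {estimated_impact:.1f} points")  # prints 7.0 points
```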

  3. Investigate possible alternative explanations

A third strategy is to identify other factors that might have caused the impacts and see if it is possible to rule them out.

Resources

Recorded webinar: Jane Davidson's 20-minute overview of options for causal inference, from the American Evaluation Association's Coffee Break series. Free to all, including non-members.


Checking That the Results Support Causal Attribution

One of the tasks involved in understanding causes is to check whether the observed results are consistent with a cause-effect relationship between the intervention and the observed impacts.

Some of the options for this task involve an analysis of existing data and some involve additional data collection. It is often appropriate to use several options in a single evaluation. Most impact evaluations should include some options that address this task.

Options

Gathering additional data

  • Key informant attribution: providing evidence that links participation plausibly with observed changes.
  • Modus operandi: drawing on the previous experience of participants and stakeholders to determine what constellation or pattern of effects is typical for an initiative.
  • Process tracing: focusing on the use of clues within a case (causal-process observations, CPOs) to adjudicate between alternative possible explanations.

Analysis

Approaches

These approaches combine some of the above options with ruling out possible alternative explanations.

  • Contribution Analysis: assessing whether the program is based on a plausible theory of change, whether it was implemented as intended, whether the anticipated chain of results occurred and the extent to which other factors influenced the program’s achievements.
  • Collaborative Outcomes Reporting: mapping existing data against the theory of change, and then using a combination of expert review and community consultation to check for the credibility of the evidence.
  • Multiple Lines and Levels of Evidence (MLLE): reviewing a wide range of evidence from different sources to identify consistency with the theory of change and to explain any exceptions.
  • Rapid Outcomes Assessment: assessing and mapping the contribution of a project’s actions to a particular change in policy or the policy environment.
Comparing Results to the Counterfactual

One of the three tasks involved in understanding causes is to compare the observed results to those you would expect if the intervention had not been implemented – this is known as the ‘counterfactual’.

Many discussions of impact evaluation argue that it is essential to include a counterfactual. Some people, however, argue that in turbulent, complex situations it can be impossible to develop an accurate estimate of what would have happened in the absence of an intervention, since this absence would have affected the situation in ways that cannot be predicted. In situations of rapid and unpredictable change, where a credible counterfactual cannot be constructed, it might still be possible to build a strong empirical case that an intervention produced certain impacts, but not to be sure about what would have happened if the intervention had not been implemented.

For example, it might be possible to show that the development of community infrastructure for raising fish for consumption and sale was directly due to a local project, without being able to confidently state that this would not have happened in the absence of the project (perhaps through an alternative project being implemented by another organization).

Options

There are three clusters of options for this task:

Experimental options (or research designs)

Develop a counterfactual using a control group. Randomly assign participants to either receive the intervention or to be in a control group.

  • Control Group: a group created through random assignment whose members do not receive the program, or receive the usual program when a new version is being evaluated. An essential element of the Randomized Controlled Trial approach to impact evaluation. A minimal sketch of random assignment follows.
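As a hedged illustration of how such a control group can be formed (the participant names and the 50/50 split below are assumptions for the example, not prescribed by the source), random assignment can be as simple as shuffling the eligible list and splitting it:

```python
import random

# Hypothetical list of eligible participants.
participants = [f"person_{i}" for i in range(1, 21)]

random.seed(42)  # fixed seed so the allocation is reproducible and auditable
random.shuffle(participants)

midpoint = len(participants) // 2
treatment_group = participants[:midpoint]  # receive the intervention
control_group = participants[midpoint:]    # receive nothing, or the usual program

print("Treatment group:", treatment_group)
print("Control group:  ", control_group)
```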

Quasi-experimental options (or research designs)

Develop a counterfactual using a comparison group which has not been created by randomization.

  • Difference-in-Difference (or Double Difference): comparing the before-and-after difference for the group receiving the intervention (where they have not been randomly assigned) to the before-and-after difference for those who did not, as shown in the sketch after this list.
  • Instrumental variables: estimating the causal effect of an intervention by using a variable that influences participation in the intervention but does not directly affect the outcomes.
  • Judgemental matching: creating a comparison group by finding a match for each person or site in the treatment group, based on researcher judgements about which variables are important.
  • Matched comparisons: matching participants (individuals, organizations or communities) with a non-participant on variables that are thought to be relevant.
  • Propensity scores: statistically creating comparable groups based on an analysis of the factors that influenced people’s propensity to participate in the program.
  • Regression Discontinuity: comparing the outcomes of individuals just below the cut-off point with those just above the cut-off point.
  • Sequential allocation: creating a treatment group and a comparison group by sequential allocation (e.g. every 3rd person on the list).
  • Statistically created counterfactual: developing a statistical model, such as a regression analysis, to estimate what would have happened in the absence of an intervention.
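As a minimal sketch of the Difference-in-Difference option listed above (all figures are invented for illustration), the estimate subtracts the comparison group's change, which stands in for common trends, from the treated group's change:

```python
# Hypothetical before/after means for a difference-in-difference estimate.

treated_before, treated_after = 50.0, 62.0        # group receiving the intervention
comparison_before, comparison_after = 51.0, 55.0  # non-randomized comparison group

change_in_treated = treated_after - treated_before            # 12.0
change_in_comparison = comparison_after - comparison_before   # 4.0

# The comparison group's change estimates what would have happened anyway;
# subtracting it removes trends affecting both groups.
did_estimate = change_in_treated - change_in_comparison
print(f"Difference-in-difference estimate: {did_estimate:.1f}")  # prints 8.0
```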

Non-experimental options

Develop a hypothetical prediction of what would have happened in the absence of the intervention.

Approaches

  • Randomized controlled trial (RCT): creates a control group and compares this to one or more treatment groups to produce an unbiased estimate of the net effect of the intervention. A minimal sketch of this comparison follows.
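As a hedged sketch of the comparison an RCT produces (the outcome values below are invented), the net-effect estimate is the difference in mean outcomes between the randomly formed groups:

```python
# Hypothetical outcome data from a randomized trial; values are invented.
treatment_outcomes = [68, 74, 71, 66, 70, 73, 69, 72]
control_outcomes = [63, 61, 66, 64, 60, 65, 62, 63]

mean_treatment = sum(treatment_outcomes) / len(treatment_outcomes)
mean_control = sum(control_outcomes) / len(control_outcomes)

# Because assignment was random, the groups are comparable on average,
# so this simple difference is an unbiased estimate of the net effect.
net_effect = mean_treatment - mean_control
print(f"Estimated net effect: {net_effect:.2f}")  # prints 7.38
```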


Investigating Possible Alternative Explanations

All impact evaluations should include some attention to identifying and (if possible) ruling out alternative explanations for the impacts that have been observed.

Options

  • Force Field Analysis: providing a detailed overview of the variety of forces that may be acting on an organizational change issue.
  • General Elimination Methodology: identifying alternative explanations and then systematically investigating them to see if they can be ruled out.
  • Key informant: asking experts in these types of programs or in the community to identify other possible explanations and/or to assess whether these explanations can be ruled out.
  • Process tracing: ruling out alternative explanatory variables at each step of the theory of change.
  • RAPID outcomes assessment: assessing and mapping the contribution of a project’s actions to a particular change in policy or the policy environment.
  • Ruling out technical explanations: identifying and investigating possible ways that the results might reflect technical limitations rather than actual causal relationships.
  • Searching for disconfirming evidence/Following up exceptions: treating data that do not fit the expected pattern not as outliers but as potential clues to other causal factors, and then seeking to explain them.
  • Statistically controlling for extraneous variables: collecting data on the extraneous variables as well as the independent and dependent variables, and then statistically removing the influence of the extraneous variables from the estimated program results; see the sketch after this list.
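As an illustrative sketch of the last option (the data and variable names are hypothetical), one common way to control for an extraneous variable is to include it as a covariate in a linear regression; the coefficient on the participation indicator then estimates the program effect with the extraneous variable held constant:

```python
import numpy as np

# Hypothetical data: outcome scores, a 0/1 participation indicator, and an
# extraneous variable (e.g. baseline income) that also affects outcomes.
rng = np.random.default_rng(0)
n = 200
income = rng.normal(50, 10, n)     # extraneous variable
treated = rng.integers(0, 2, n)    # 0/1 participation indicator
outcome = 5.0 * treated + 0.8 * income + rng.normal(0, 5, n)

# Ordinary least squares: outcome ~ intercept + treated + income.
X = np.column_stack([np.ones(n), treated, income])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# coefs[1] is the estimated program effect with income statistically controlled.
print(f"Estimated program effect: {coefs[1]:.2f}")  # close to the true value of 5
```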

Approaches

These approaches combine ruling out possible alternative explanations with options to check that the results support causal attribution.

  • Contribution Analysis: assessing whether the program is based on a plausible theory of change, whether it was implemented as intended, whether the anticipated chain of results occurred and the extent to which other factors influenced the program’s achievements.
  • Collaborative Outcomes Reporting: mapping existing data against the theory of change, and then using a combination of expert review and community consultation to check for the credibility of the evidence.
  • Multiple Lines and Levels of Evidence (MLLE): reviewing a wide range of evidence from different sources to identify consistency with the theory of change and to explain any exceptions.
  • Rapid Outcomes Assessment: assessing and mapping the contribution of a project’s actions to a particular change in policy or the policy environment.