Understanding Causes

Most evaluations need to investigate what is causing the outcomes and impacts of an intervention. (Some process evaluations assume that certain activities are contributing to intended outcomes without investigating these).

Sometimes it is useful to think about this in terms of ‘causal attribution’ – did the intervention cause the outcomes and impacts that have been observed?  In many cases, however, the outcomes and impacts have been caused by a combination of programs, or by a program in combination with other factors.

In such cases it can be more useful to think about “causal contribution” – did the intervention contribute to the outcomes and impacts that have been observed?

Tasks

  1. Check the results support causal attribution

One strategy for causal inference is to check that the data are consistent with what we would expect to see if the intervention were effective. This involves examining not only whether results occurred, but also their timing and specificity.

  2. Compare the results to the counterfactual

Another strategy for assessing the impact of an intervention is to compare what happened to an estimate of what would have happened without the intervention. Options include the use of control groups, comparison groups and expert predictions.

  3. Investigate possible alternative explanations

A third strategy is to identify other factors that might have caused the impacts and see if it is possible to rule them out.

Resources

Recorded webinar: Jane Davidson’s overview of options for causal inference, a 20-minute webinar in the American Evaluation Association’s Coffee Break series. Free to all, including non-members.

 

Checking the results support causal attribution

One of the tasks involved in understanding causes is to check whether the observed results are consistent with a cause-effect relationship between the intervention and the observed impacts.

Some of the options for this task involve an analysis of existing data and some involve additional data collection. It is often appropriate to use several options in a single evaluation. Most impact evaluations should include some options that address this task.

Options

Gathering additional data

  • Key informant attribution: providing evidence that plausibly links participation with observed changes.
  • Modus operandi: drawing on the previous experience of participants and stakeholders to determine what constellation or pattern of effects is typical for an initiative.
  • Process tracing: focusing on the use of clues within a case (causal-process observations, CPOs) to adjudicate between alternative possible explanations.

Analysis

Approaches

These approaches combine some of the above options together with ruling out possible alternative explanations.

  • Contribution Analysis: assessing whether the program is based on a plausible theory of change, whether it was implemented as intended, whether the anticipated chain of results occurred and the extent to which other factors influenced the program’s achievements.
  • Collaborative Outcomes Reporting: mapping existing data against the theory of change, and then using a combination of expert review and community consultation to check for the credibility of the evidence.
  • Multiple Lines and Levels of Evidence (MLLE): reviewing a wide range of evidence from different sources to identify consistency with the theory of change and to explain any exceptions.
  • Rapid Outcomes Assessment: assessing and mapping the contribution of a project’s actions on a particular change in policy or the policy environment.
Comparing Results to the Counterfactual

One of the three tasks involved in understanding causes is to compare the observed results to those you would expect if the intervention had not been implemented – this is known as the ‘counterfactual’.

Many discussions of impact evaluation argue that it is essential to include a counterfactual. Some people, however, argue that in turbulent, complex situations it can be impossible to develop an accurate estimate of what would have happened in the absence of an intervention, since this absence would have affected the situation in ways that cannot be predicted. In situations of rapid and unpredictable change, when it might not be possible to construct a credible counterfactual, it might still be possible to build a strong empirical case that an intervention produced certain impacts, without being sure about what would have happened if the intervention had not been implemented.

For example, it might be possible to show that the development of community infrastructure for raising fish for consumption and sale was directly due to a local project, without being able to confidently state that this would not have happened in the absence of the project (perhaps through an alternative project being implemented by another organization).

Options

There are three clusters of options for this task:

Experimental options (or research designs)

Develop a counterfactual using a control group: randomly assign participants either to receive the intervention or to be in a control group (a minimal worked sketch follows the item below).

  • Control Group: a group created through random assignment who do not receive a program, or receive the usual program when a new version is being evaluated. An essential element of the Randomized Controlled Trial approach to impact evaluation.
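
A minimal sketch of this logic in Python, using invented participant IDs and a simulated outcome; the group sizes, effect size and variable names are all hypothetical:

```python
import random
import statistics

random.seed(42)

# Hypothetical list of participant IDs (illustrative only)
participants = list(range(200))

# Randomly assign each participant to the treatment group or the control group
random.shuffle(participants)
treatment_group = participants[:100]
control_group = participants[100:]

def measure_outcome(person_id, treated):
    """Simulated outcome measurement; a real evaluation would collect this data."""
    background = random.gauss(50, 10)          # background variation
    return background + (5 if treated else 0)  # assumed program effect of 5 units

treated_outcomes = [measure_outcome(p, True) for p in treatment_group]
control_outcomes = [measure_outcome(p, False) for p in control_group]

# With random assignment, the difference in group means is an unbiased
# estimate of the average effect of the intervention.
effect = statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)
print(f"Estimated average effect: {effect:.1f}")
```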

Quasi-experimental options (or research designs)

Develop a counterfactual using a comparison group which has not been created by randomization.

  • Difference-in-Difference (or Double Difference): comparing the before-and-after difference for the group receiving the intervention (where they have not been randomly assigned) to the before-and-after difference for those who did not (a worked example follows this list).
  • Instrumental variables: estimating the causal effect of an intervention using a variable that influences exposure to the intervention but affects the outcome only through that exposure.
  • Judgemental matching: creating a comparison group by finding a match for each person or site in the treatment group, based on researcher judgements about which variables are important.
  • Matched comparisons: matching participants (individuals, organizations or communities) with a non-participant on variables that are thought to be relevant.
  • Propensity scores: statistically creating comparable groups based on an analysis of the factors that influenced people’s propensity to participate in the program.
  • Regression Discontinuity: comparing the outcomes of individuals just below the cut-off point with those just above the cut-off point.
  • Sequential allocation: creating a treatment group and a comparison group by allocating people in sequence (e.g. every 3rd person on the list).
  • Statistically created counterfactual: developing a statistical model, such as a regression analysis, to estimate what would have happened in the absence of an intervention.
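
To make the difference-in-difference option above concrete, here is a small worked sketch with invented before-and-after group means; all figures and variable names are hypothetical:

```python
# Hypothetical mean outcomes (e.g. monthly household income in USD)
intervention_before, intervention_after = 120.0, 150.0
comparison_before, comparison_after = 118.0, 130.0

# Change observed within each group
change_intervention = intervention_after - intervention_before   # 30.0
change_comparison = comparison_after - comparison_before         # 12.0

# The difference-in-differences estimate attributes to the intervention
# whatever change the intervention group experienced beyond the change
# seen in the comparison group (which stands in for the counterfactual).
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.1f}")  # 18.0
```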

Non-experimental options

Develop a hypothetical prediction of what would have happened in the absence of the intervention.

Approaches

  • Randomized controlled trial (RCT): creates a control group and compares this to one or more treatment groups to produce an unbiased estimate of the net effect of the intervention.

 

Investigating Possible Alternative Explanations

All impact evaluations should include some attention to identifying and (if possible) ruling out alternative explanations for the impacts that have been observed.

Options

  • Force Field Analysis: providing a detailed overview of the variety of forces that may be acting on an organizational change issue.
  • General Elimination Methodology: identifying alternative explanations and then systematically investigating them to see whether they can be ruled out.
  • Key informant: asking experts in these types of programmes or in the community to identify other possible explanations and/or to assess whether these explanations can be ruled out.
  • Process tracing: ruling out alternative explanatory variables at each step of the theory of change.
  • RAPID outcomes assessment: a methodology to assess and map the contribution of a project’s actions on a particular change in policy or the policy environment.
  • Ruling out technical explanations: identifying and investigating possible ways that the results might reflect technical limitations rather than actual causal relationships.
  • Searching for disconfirming evidence / following up exceptions: treating data that don’t fit the expected pattern not as outliers but as potential clues to other causal factors, and then seeking to explain them.
  • Statistically controlling for extraneous variables: collecting data on the extraneous variables, as well as the independent and dependent variables, so that their influence can be removed from the analysis of program results (see the sketch following this list).
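
A minimal sketch of the last option above: an ordinary least squares regression that includes a hypothetical extraneous variable (baseline income) alongside the participation variable, so the reported program effect is net of that covariate. All data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical data: 'participated' is program participation (0/1);
# 'baseline_income' is an extraneous variable that also affects the outcome.
participated = rng.integers(0, 2, size=n)
baseline_income = rng.normal(100, 15, size=n)
outcome = 2.0 * participated + 0.5 * baseline_income + rng.normal(0, 5, size=n)

# Fit outcome ~ intercept + participation + baseline income.
# The coefficient on 'participated' is the program effect after
# statistically controlling for baseline income.
X = np.column_stack([np.ones(n), participated, baseline_income])
coefficients, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"Estimated program effect, controlling for baseline income: {coefficients[1]:.2f}")
```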

Approaches

These approaches combine ruling out possible alternative explanations with options to check the results support causal attribution.

  • Contribution Analysis: assessing whether the program is based on a plausible theory of change, whether it was implemented as intended, whether the anticipated chain of results occurred and the extent to which other factors influenced the program’s achievements.
  • Collaborative Outcomes Reporting: mapping existing data against the theory of change, and then using a combination of expert review and community consultation to check for the credibility of the evidence.
  • Multiple Lines and Levels of Evidence (MLLE): reviewing a wide range of evidence from different sources to identify consistency with the theory of change and to explain any exceptions.
  • Rapid Outcomes Assessment: assessing and mapping the contribution of a project’s actions on a particular change in policy or the policy environment.
Analysing Data

Analysing data to summarise it and look for patterns is an important part of every evaluation. The options for doing this have been grouped into two categories – quantitative data (numbers) and qualitative data (text, images).

Options

Numeric analysis

Analysing numeric data such as cost, frequency and physical characteristics (a brief illustrative sketch follows this list).

  • Correlation: a statistical measure ranging from -1.0 to +1.0 that indicates how strongly two variables are related. A positive correlation (between 0 and +1.0) indicates that the two variables increase or decrease together, while a negative correlation (between 0 and -1.0) indicates that as one variable increases, the other decreases.
  • Crosstabulations: using contingency tables of two or more dimensions to indicate the relationship between nominal (categorical) variables. In a simple crosstabulation, one variable occupies the horizontal axis and another the vertical. The frequencies of each combination are recorded in the intersecting cells and can be displayed as percentages of the whole, illustrating relationships in the data.
  • Data mining: computer-driven automated techniques that run through large amounts of text or data to find new patterns and information.
  • Exploratory Techniques: taking a ‘first look’ at a dataset by summarising its main characteristics, often by using visual methods.
  • Frequency tables: a visual way of summarizing nominal and ordinal data by displaying the count of observations (times a value of a variable occurred) in a table.
  • Measures of central tendency: a summary measure that attempts to describe a whole set of data with a single value that represents the middle or centre of its distribution. The mean (the average value), median (the middle value) and mode (the most frequent value) are all measures of central tendency. Each measure is useful for different conditions.
  • Measures of dispersion: a summary measure that provides information about how much variation there is in the data, including the range, inter-quartile range and the standard deviation.
  • Multivariate descriptive: providing simple summaries of (large amounts of) information (or data) with two or more related variables; techniques include multiple regression, factor analysis, cluster analysis and structural equation modelling.
  • Non-parametric inferential statistics: methods for inferring conclusions about a population from a sample when the data do not follow a normal distribution (i.e., the distribution does not parallel a bell curve), often based on ranks; examples include the chi-square test, the binomial test and Spearman’s rank correlation coefficient.
  • Parametric inferential statistics: methods for inferring conclusions about a population from a sample when the data meet certain assumptions: the data are normally distributed (i.e., the distribution parallels a bell curve); values can be meaningfully added, subtracted, multiplied and divided; variances are equal when comparing two or more groups; and the sample is large and randomly selected.
  • Summary statistics: providing a quick summary of data, which is particularly useful for comparing one project to another, or before and after a program.
  • Time series analysis: observing well-defined data items obtained through repeated measurements over time.
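
A brief, illustrative sketch of several of these options (measures of central tendency and dispersion, summary statistics, correlation and a crosstabulation) using the pandas library; the survey fields and values are hypothetical:

```python
import pandas as pd

# Hypothetical survey data (illustrative field names and values)
df = pd.DataFrame({
    "district": ["A", "A", "B", "B", "B", "C"],
    "attended_training": ["yes", "no", "yes", "yes", "no", "yes"],
    "household_size": [4, 6, 3, 5, 7, 4],
    "monthly_income": [210, 180, 260, 240, 170, 255],
})

# Measures of central tendency and dispersion
print(df["monthly_income"].mean(), df["monthly_income"].median(), df["monthly_income"].std())

# Summary statistics for all numeric columns at once
print(df.describe())

# Correlation between two numeric variables (ranges from -1.0 to +1.0)
print(df["household_size"].corr(df["monthly_income"]))

# Crosstabulation of two categorical variables, shown as percentages of the whole
print(pd.crosstab(df["district"], df["attended_training"], normalize="all") * 100)
```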

Textual analysis

Analysing words, either spoken or written, including questionnaire responses, interviews and documents (a toy coding sketch follows this list).

  • Content analysis: reducing large amounts of unstructured textual content into manageable data relevant to the (evaluation) research questions.
  • Thematic coding: recording or identifying passages of text or images that are linked by a common theme or idea allowing the indexation of text into categories.
  • Framework matrices: a method for summarising and analysing qualitative data in a matrix of cases by themes, which allows data to be sorted by case and by theme.
  • Timelines and time-ordered matrices: aiding analysis by allowing visualisation of key events, sequences and results.
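
As a rough, toy illustration of thematic coding, the sketch below indexes invented interview responses against a hand-made keyword list for each theme. Real qualitative coding is interpretive and normally done with dedicated software (see Tools below); this only shows the idea of indexing text into categories:

```python
# Hypothetical interview responses and a hand-made coding frame
responses = [
    "The training helped me find work, but travel to the centre was expensive.",
    "I learned useful skills and now earn more from my shop.",
    "Sessions were too far away and often started late.",
]

coding_frame = {
    "livelihoods": ["work", "earn", "income", "shop", "skills"],
    "access": ["travel", "far", "distance", "expensive"],
    "delivery": ["late", "sessions", "trainer"],
}

# Index each response against the themes whose keywords it mentions
coded = {
    i: [theme for theme, keywords in coding_frame.items()
        if any(word in text.lower() for word in keywords)]
    for i, text in enumerate(responses)
}
print(coded)  # {0: ['livelihoods', 'access'], 1: ['livelihoods'], 2: ['access', 'delivery']}
```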

Resources

Websites

WISE: Web Interface for Statistics Education: this website brings a large number of statistics resources together in one central place. It is also home to a series of interactive, sequenced tutorials on key statistical concepts. On WISE, you can find WISE tutorials, WISE applets, Excel downloads, teaching papers, quick guides, and publications.

Tools

For an overview of specialist tools for qualitative data analysis, see the CAQDAS site at the University of Surrey, which compares ten packages including Atlas.Ti, HyperResearch and NVivo.

 

 

Synthesize

Bringing together data into an overall conclusion and judgement is important for individual evaluations and also when summarising evidence from multiple evaluations.

Tasks

1. Synthesise data from a single evaluation

An evaluation needs to produce an overall judgement of merit or worth, bringing together data in terms of the agreed evaluative criteria and standards.

2. Synthesise data across evaluations

Data from multiple evaluations can also be synthesised to produce an overall judgement about ‘what works’ or ‘what works for whom in what circumstances’.

3. Generalise findings

It is often useful for an evaluation to be explicit about the extent to which its findings can be generalised or how they might be appropriately translated to new sites and situations.

Synthesizing data from a single evaluation

To develop evaluative judgments, the evaluator draws data from the evaluation and systematically synthesizes and values the data. There are a range of options that can be used for synthesis and valuing.

Options

Processes

  • Consensus Conference: a process where a selected group of lay people (non-experts) representing the community are briefed, consider the evidence and prepare a joint finding and recommendation
  • Expert Panel: a process where a selected group of experts consider the evidence and prepare a joint finding

Techniques

  • Cost Benefit Analysis: compares costs to benefits, both expressed in monetary units
  • Cost-Effectiveness Analysis: compares costs to the outcomes expressed in terms of a standardized unit (e.g. additional years of schooling)
  • Cost Utility Analysis: a particular type of cost-effectiveness analysis that expresses benefits in terms of a standard unit such as Quality Adjusted Life Years
  • Lessons learnt: lessons learnt can develop out of the evaluation process as evaluators reflect on their experiences in undertaking the evaluation.
  • Multi-Criteria Analysis: a systematic process for addressing multiple criteria and perspectives
  • Numeric Weighting: developing numeric scales to rate performance against each evaluation criterion and then adding them up for a total score (a small worked example follows this list).
  • Qualitative Weight and Sum: using qualitative ratings (such as symbols) to identify performance in terms of essential, important and unimportant criteria
  • Rubrics: using a descriptive scale for rating performance that incorporates performance across a number of criteria
  • Value for Money: a term used in different ways, including as a synonym for cost-effectiveness, and as a systematic approach to considering these issues throughout planning and implementation, not only in evaluation.
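
A small worked example of the numeric weighting technique, with entirely hypothetical criteria, weights and ratings; note that a strong score on one criterion can mask a weak score on another, which is a well-known limitation of simple weighted totals:

```python
# Hypothetical evaluation criteria with weights (summing to 1.0)
# and performance ratings on a 1-5 scale
criteria = {
    "effectiveness":  {"weight": 0.40, "rating": 4},
    "equity":         {"weight": 0.25, "rating": 2},
    "efficiency":     {"weight": 0.20, "rating": 3},
    "sustainability": {"weight": 0.15, "rating": 5},
}

total_score = sum(c["weight"] * c["rating"] for c in criteria.values())
print(f"Weighted total: {total_score:.2f} out of 5")  # 3.45 out of 5
```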

Approaches

  • Social Return on Investment: a method that assigns monetary values to social, environmental and economic outcomes so they can be weighed against the investment made.

Synthesizing Data Across Evaluations

These options answer questions about a type of intervention rather than about a single case – questions such as “Do these types of interventions work?” or “For whom, in what ways and under what circumstances do they work?” The task involves locating the evidence (often involving bibliographic searches of databases, with particular emphasis on finding unpublished studies), assessing its quality and relevance in order to decide whether or not to include it, extracting the relevant information, and synthesizing it.  Different options use different strategies and have different definitions of what constitutes credible evidence.

Options

  • Best evidence synthesis: a synthesis that, like a realist synthesis, draws on a wide range of evidence (including single case studies) and explores the impact of context, and also builds in an iterative, participatory approach to building and using a knowledge base.
  • Lessons learnt: lessons learnt can develop out of the evaluation process as evaluators reflect on their experiences in undertaking the evaluation.
  • Meta-analysis: a statistical method for combining numeric evidence from experimental (and sometimes quasi-experimental) studies to produce a weighted average effect size (a small worked example follows this list).
  • Meta-ethnography: a method for combining data from qualitative evaluation and research, especially ethnographic data, by translating concepts and metaphors across studies.
  • Rapid evidence assessment: a process that is faster and less rigorous than a full systematic review but more rigorous than ad hoc searching; it uses a combination of key informant interviews and targeted literature searches to produce a report in a few days or weeks.
  • Realist synthesis: synthesizing all relevant existing research in order to make evidence-based policy recommendations.
  • Systematic review: a synthesis that takes a systematic approach to searching, assessing, extracting and synthesizing evidence from multiple studies.  Meta-analysis, meta-ethnography and realist synthesis are different types of systematic review.
  • Textual narrative synthesis: dividing the studies into relatively homogenous groups, reporting study characteristics within each group, and articulating broader similarities and differences among the groups.
  • Vote counting: comparing the number of positive studies (studies showing benefit) with the number of negative studies (studies showing harm).
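
As a rough illustration of the meta-analysis option, the sketch below pools effect sizes from three invented studies using fixed-effect inverse-variance weighting; a real meta-analysis would also examine heterogeneity and study quality:

```python
# Hypothetical effect sizes (standardised mean differences) and their variances
studies = [
    {"name": "Study A", "effect": 0.30, "variance": 0.02},
    {"name": "Study B", "effect": 0.10, "variance": 0.05},
    {"name": "Study C", "effect": 0.45, "variance": 0.04},
]

# Fixed-effect model: weight each study by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / s["variance"] for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled_effect:.2f} (standard error {pooled_se:.2f})")
```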

Resources

  • Campbell Collaboration
  • Evidence for Policy and Practice Information Centre (EPPI-Centre), University of London
  • Presentations from the 3IE Dha
Generalise Findings

An evaluation usually involves some level of generalising of the findings to other times, places or groups of people.

For many evaluations, this simply involves generalising from data about the current situation or the recent past to the future.

For example, an evaluation might report that a practice or program has been working well (finding), therefore it is likely to work well in the future (generalisation), and therefore we should continue to do it (recommendation). In this case, it is important to understand whether or not future times are likely to be similar to the time period of the evaluation.  If the program had been successful because of support from another organisation, and this support was not going to continue, then it would not be correct to assume that the program would continue to succeed in the future.

For some evaluations, there are other types of generalising needed.  Impact evaluations which aim to learn from the evaluation of a pilot to make recommendations about scaling up must be clear about the situations and people to whom results can be generalised.

There are often two levels of generalisation.  For example, an evaluation of a new nutrition program in Ghana collected data from a random sample of villages. This allowed statistical generalisation to the larger population of villages in Ghana.  In addition, because there was international interest in the nutrition program, many organisations, including governments in other countries, were interested to learn from the evaluation for possible implementation elsewhere.

Options

  • Analytical generalisation: making projections about the likely transferability of findings from an evaluation, based on a theoretical analysis of the factors producing outcomes and the effect of context. Realist evaluation can be particularly useful for this.
  • Statistical generalisation: statistically calculating the likely parameters of a population using data from a random sample of that population (a small worked example follows this list).
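
To make statistical generalisation concrete, the sketch below estimates a population proportion and a 95% confidence interval from a hypothetical simple random sample of villages; the counts are invented:

```python
import math

# Hypothetical random sample: 120 of 400 sampled villages adopted the practice
sample_size = 400
adopters = 120

# Point estimate and 95% confidence interval for the population proportion
# (normal approximation, reasonable for a simple random sample of this size)
p_hat = adopters / sample_size
standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin = 1.96 * standard_error
print(f"Estimated adoption rate: {p_hat:.0%} "
      f"(95% CI {p_hat - margin:.0%} to {p_hat + margin:.0%})")
```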

Approaches

  • Horizontal Evaluation: an approach that combines self-assessment by local participants and external review by peers
  • Positive Deviance: Involves intended evaluation users in identifying ‘outliers’ – those with exceptionally good outcomes – and understanding how they have achieved these.
  • Realist Evaluation: Analyses the contexts within which causal mechanisms produce particular outcomes, making it easier to predict where results can be generalised.

 

Report and Support Use

Although reporting may be one of the last evaluation tasks, the content, sharing and use of reports should be discussed explicitly during the initial planning of the evaluation, from the very first step, and revisited throughout the process. Most importantly, identify who your primary intended users are. Use of the evaluation often depends on how well the report meets the needs and learning gaps of the primary intended users.

Besides the primary intended users (identified as part of framing the evaluation), your findings can be communicated to others for different reasons. For example, lessons learned from the evaluation can be helpful to other evaluators or project staff working in the same field; or it may be worthwhile remoulding some of the findings into articles or stories to attract wider attention to an organisation’s work, or to spread news about a particular situation.

You will share the findings of the evaluation with the primary intended users and also other evaluation stakeholders.

Don’t limit yourself to thinking of sharing evaluation findings through a report. Although a final evaluation report is important, it is not the only way to distribute findings. Depending on your audience and budget, it may be important to consider different ways of delivering evaluation findings:

  • Presenting findings at staff forums and subject matter conferences
  • Developing a short video version of findings
  • Sharing findings on the organisation’s intranet
  • Sharing stories, pictures and drawings from the evaluation (depending on what options you have used to gather data)
  • Creating large posters or infographics of findings for display
  • Producing a series of short memos

Tasks

Tasks related to this component include:

  1. Identify Reporting Requirements

Identify the primary intended stakeholders and determine their reporting needs, including their decision-making timelines. Develop a communication plan.

  2. Develop Reporting Media

Produce the written, visual, and verbal products that represent the program and its evaluation according to the communication plan. Graphic design and data visualization can be applied to emphasize key pieces of content and increase primary intended user engagement.

  3. Ensure Accessibility

Review the reporting products to make sure they are accessible for those who are colorblind, low-vision, or reliant on an audio reader.

  4. Develop Recommendations

If it is part of the evaluation brief, make recommendations, on the basis of the evaluation findings, about how the program can be improved, how the risk of program failure can be reduced, or whether the program should continue.

  5. Support Use

Communicate the findings and recommendations, but don’t stop there. As primary intended users reflect on the evaluation, facilitate the review to gather their feedback and guide their interpretations. Plan ways and times to check in on progress toward improvement. Look for opportunities to share the unique aspects of the program and its evaluation with external audiences.

 
