Synthesize

Bringing data together into an overall conclusion and judgement is important both for individual evaluations and for summarising evidence across multiple evaluations.

Tasks

1. Synthesise data from a single evaluation

An evaluation needs to produce an overall judgement of merit or worth, bringing together data in terms of the agreed evaluative criteria and standards.

2. Synthesise data across evaluations

Data from multiple evaluations can also be synthesised to produce an overall judgement about ‘what works’ or ‘what works for whom in what circumstances’.

3. Generalise findings

It is often useful for an evaluation to be explicit about the extent to which its findings can be generalised or how they might be appropriately translated to new sites and situations.

Synthesizing data from a single evaluation

To develop evaluative judgments, the evaluator draws data from the evaluation and systematically synthesizes and values it. A range of options can be used for synthesis and valuing.

Options

Processes

  • Consensus Conference: a process where a selected group of lay people (non-experts) representing the community are briefed, consider the evidence and prepare a joint finding and recommendation
  • Expert Panel: a process where a selected group of experts consider the evidence and prepare a joint finding

Techniques

  • Cost Benefit Analysis: compares costs to benefits, both expressed in monetary units
  • Cost-Effectiveness Analysis: compares costs to outcomes expressed in a standardized unit (e.g. additional years of schooling)
  • Cost Utility Analysis: a particular type of cost-effectiveness analysis that expresses benefits in a standard unit such as Quality-Adjusted Life Years
  • Lessons learnt: lessons that develop out of the evaluation process as evaluators reflect on their experiences in undertaking the evaluation
  • Multi-Criteria Analysis: a systematic process for addressing multiple criteria and perspectives
  • Numeric Weighting: developing numeric scales to rate performance against each evaluation criterion and then adding the ratings up for a total score (see the sketch after this list)
  • Qualitative Weight and Sum: using qualitative ratings (such as symbols) to assess performance in terms of essential, important and unimportant criteria
  • Rubrics: using a descriptive scale for rating performance that incorporates performance across a number of criteria
  • Value for Money: a term used in different ways, including as a synonym for cost-effectiveness and as a systematic approach to considering these issues throughout planning and implementation, not only in evaluation
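
The snippet below is a minimal Python sketch of numeric weighting. The criteria, weights and ratings are invented purely for illustration; in a real evaluation they would come from the agreed evaluative criteria and standards.

    # Minimal numeric weighting sketch with hypothetical criteria,
    # weights and ratings.

    # Weights reflect the relative importance of each criterion
    # (here they sum to 1.0).
    weights = {"effectiveness": 0.5, "equity": 0.3, "sustainability": 0.2}

    # Ratings of the program against each criterion on a 1-5 scale.
    ratings = {"effectiveness": 4, "equity": 3, "sustainability": 5}

    # Weighted total: multiply each rating by its weight and sum.
    total = sum(weights[c] * ratings[c] for c in weights)

    print(f"Total weighted score: {total:.2f} out of 5")  # 3.90 out of 5

The weights themselves embody a value judgement: changing them changes the overall score, which is one reason numeric weighting should make its weights explicit and open to scrutiny.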

Approaches

  • Social Return on Investment: an approach that measures and accounts for value beyond financial returns, expressing social and environmental outcomes in monetary terms and comparing them to the investment made (see the sketch below)
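
As a rough illustration of the basic SROI calculation, the Python sketch below monetizes a few hypothetical outcomes and divides their total value by the investment to produce a ratio. All figures are invented, and a real SROI analysis also involves stakeholder engagement, discounting, and adjustments for deadweight and attribution.

    # Hypothetical monetized outcome values and program investment;
    # all figures are invented for illustration.
    monetized_outcomes = {
        "improved health": 40_000,
        "increased employment": 70_000,
        "reduced use of services": 10_000,
    }
    investment = 50_000

    # SROI ratio: total monetized value created per unit invested.
    sroi = sum(monetized_outcomes.values()) / investment

    print(f"SROI ratio: {sroi:.1f} : 1")  # prints "SROI ratio: 2.4 : 1"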

Synthesizing Data Across Evaluations

These options answer questions about a type of intervention rather than about a single case – questions such as “Do these types of interventions work?” or “For whom, in what ways and under what circumstances do they work?” The task involves locating the evidence (often through bibliographic searches of databases, with particular emphasis on finding unpublished studies), assessing its quality and relevance to decide whether or not to include it, extracting the relevant information, and synthesizing it. Different options use different strategies and have different definitions of what constitutes credible evidence.

Options

  • Best evidence synthesis: a synthesis that, like a realist synthesis, draws on a wide range of evidence (including single case studies) and explores the impact of context, and also builds in an iterative, participatory approach to building and using a knowledge base.
  • Lessons learnt: lessons that develop out of the evaluation process as evaluators reflect on their experiences in undertaking the evaluation.
  • Meta-analysis: a statistical method for combining numeric evidence from experimental (and sometimes quasi-experimental) studies to produce a weighted average effect size (see the sketch after this list).
  • Meta-ethnography: a method for combining data from qualitative evaluation and research, especially ethnographic data, by translating concepts and metaphors across studies.
  • Rapid evidence assessment: a process that is faster and less rigorous than a full systematic review but more rigorous than ad hoc searching; it uses a combination of key informant interviews and targeted literature searches to produce a report in a few days or weeks.
  • Realist synthesis: a synthesis that examines how, for whom and in what contexts interventions work, drawing on all relevant existing research to make evidence-based policy recommendations.
  • Systematic review: a synthesis that takes a systematic approach to searching, assessing, extracting and synthesizing evidence from multiple studies. Meta-analysis, meta-ethnography and realist synthesis are different types of systematic review.
  • Textual narrative synthesis: dividing the studies into relatively homogeneous groups, reporting study characteristics within each group, and articulating broader similarities and differences among the groups.
  • Vote counting: comparing the number of positive studies (studies showing benefit) with the number of negative studies (studies showing harm).
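
To illustrate the core meta-analysis calculation, the Python sketch below combines effect sizes from three hypothetical studies into a fixed-effect weighted average, weighting each study by the inverse of its variance so that more precise studies count for more. The effect sizes and variances are invented for illustration.

    import math

    # Hypothetical (effect size, variance) pairs from three studies.
    studies = [(0.30, 0.02), (0.45, 0.05), (0.10, 0.01)]

    # Fixed-effect model: weight each study by the inverse of its variance.
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * d for w, (d, _) in zip(weights, studies)) / sum(weights)

    # Standard error of the pooled effect and a 95% confidence interval.
    se = math.sqrt(1.0 / sum(weights))
    low, high = pooled - 1.96 * se, pooled + 1.96 * se

    print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")

A full meta-analysis would also assess heterogeneity between studies and might use a random-effects model instead; this sketch shows only the basic weighting logic.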

Resources

  • Campbell Collaboration
  • Evidence for Policy and Practice Information Centre (EPPI-Centre), University of London
  • Presentations from the 3IE Dha
Generalise Findings

An evaluation usually involves some level of generalising of the findings to other times, places or groups of people.

For many evaluations, this simply involves generalising from data about the current situation or the recent past to the future.

For example, an evaluation might report that a practice or program has been working well (finding), therefore it is likely to work well in the future (generalisation), and therefore we should continue to do it (recommendation). In this case, it is important to understand whether or not future times are likely to be similar to the time period of the evaluation.  If the program had been successful because of support from another organisation, and this support was not going to continue, then it would not be correct to assume that the program would continue to succeed in the future.

For some evaluations, other types of generalising are needed. Impact evaluations that aim to use the evaluation of a pilot to make recommendations about scaling up must be clear about the situations and people to whom the results can be generalised.

There are often two levels of generalisation. For example, an evaluation of a new nutrition program in Ghana collected data from a random sample of villages, which allowed statistical generalisation to the larger population of villages in Ghana. In addition, because there was international interest in the nutrition program, many organisations, including governments in other countries, were interested in learning from the evaluation for possible implementation elsewhere; this second level required analytical rather than statistical generalisation.

Options

  • Analytical generalisation: making projections about the likely transferability of findings from an evaluation, based on a theoretical analysis of the factors producing outcomes and the effect of context. Realist evaluation can be particularly important for this.
  • Statistical generalisation: estimating the likely parameters of a population using data from a random sample drawn from that population (see the sketch below).
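
As a minimal sketch of statistical generalisation, the Python code below estimates a population mean and a 95% confidence interval from a hypothetical random sample, in the spirit of generalising from sampled villages to all villages in a country. The sample values are invented for illustration.

    import math

    # Hypothetical outcome scores from a simple random sample of villages.
    sample = [62, 70, 58, 75, 66, 73, 61, 69, 64, 72]

    n = len(sample)
    mean = sum(sample) / n

    # Sample standard deviation (n - 1 in the denominator).
    sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

    # 95% confidence interval for the population mean. With only 10
    # observations, a t critical value (about 2.262 for 9 degrees of
    # freedom) is more appropriate than the normal value of 1.96.
    t_crit = 2.262
    margin = t_crit * sd / math.sqrt(n)

    print(f"Estimated population mean: {mean:.1f} +/- {margin:.1f}")

An interval of this kind only licenses claims about the population that was sampled; extending findings to other settings requires the analytical generalisation described above.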

Approaches

  • Horizontal Evaluation: An approach that combines self-assessment by local participants and external review by peers.
  • Positive Deviance: Involves intended evaluation users in identifying ‘outliers’ – those with exceptionally good outcomes – and understanding how they have achieved these.
  • Realist Evaluation: Analyses the contexts within which causal mechanisms produce particular outcomes, making it easier to predict where results can be generalised.