Frame

Framing an evaluation involves being clear about the boundaries of the evaluation. Why is the evaluation being done? What are the broad evaluation questions it is trying to answer? What are the values that will be used to make judgments about whether it is good or bad, better or worse than alternatives, or getting better or worse?

Tasks

  1. Identify primary intended users

Who will actually use the evaluation – not in vague, general terms (e.g. “decision makers”) but in terms of specific identifiable people (e.g. the manager and staff of the program; the steering committee; funders deciding whether to fund this program or similar programs in the future).

  2. Decide purposes and intended uses

Be clear about the intended uses of this evaluation – is it to support improvement, for accountability, or for knowledge building? Is there a specific timeframe (for example, to inform a particular decision or funding allocation)? If there are multiple purposes, how will you decide where to focus your resources?

  3. Specify the key evaluation questions

What are the broad evaluation questions you are seeking to answer? (These are different from the specific questions you might ask in an interview or a questionnaire.)

  4. Determine what ‘success’ looks like

What are the values that will be used in the evaluation to make judgments about whether or not an intervention has been successful, or has improved, or is the best option? Different stakeholders might well have different values. How will these different values be identified and negotiated?

Identify primary intended users

It is important to identify the people who are intended to actually use the evaluation, and to engage them in the evaluation in some way if possible. This increases the likelihood that the evaluation will be done in ways that will be appropriate and that will actually be used.

Your primary intended users are not all those who have a stake in the evaluation, nor are they a general audience. They are the specific people, in a specific position, in a specific organization who will use the evaluation findings and who have the capacity to effect change (for example, change policies and procedures, improve management strategies). Who they are will depend on your evaluation.

Research into how evaluation findings are used shows the importance of the ‘personal factor’. The personal factor – a specific person or group of people who care about the evaluation and its findings – is the single most important predictor of whether those findings will be used:

‘The personal factor is the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates. Where such a person or group was present, evaluations were used; where the personal factor was absent, there was a correspondingly marked absence of evaluation impact.’

The tasks of identifying primary intended users and deciding the purposes of an evaluation are interconnected. You might begin by identifying the intended users, who will then decide the purpose of the evaluation. Or the purpose of an evaluation may already have been prescribed, which helps you to identify the intended users.

Decide Purpose

It is important that key stakeholders agree on the main purpose or purposes of the evaluation, and are aware of any possible conflicts between purposes.

The purposes of an evaluation will inform (and be informed by) the evaluation timelines, resources, stakeholders involved and choice of evaluation options for describing implementation, context and impact.

It is not enough to state that an evaluation will be used for accountability or for learning.

Evaluations for accountability need to be clear about who will be held accountable to whom for what and through what means.  They need to be clear about whether accountability will be upwards (to funders and policymakers), downwards (to intended beneficiaries and communities) or horizontal (to colleagues and partners).

Evaluations for learning need to be clear about who will be learning about what and through what means. Will it be supporting ongoing learning for incremental improvements by service deliverers or learning about ‘what works’ or ‘what works for whom in what circumstances’ to inform future policy and investment?

It may be possible to address several purposes in a single evaluation design, but often there needs to be a choice about where resources will be primarily focused.

Specify the Key Evaluation Questions

Key Evaluation Questions (KEQs) are the high-level questions that an evaluation is designed to answer – not the specific questions that are asked in an interview or a questionnaire. Having an agreed set of KEQs makes it easier to decide what data to collect, how to analyse it, and how to report it.

KEQs usually need to be developed and agreed on at the beginning of evaluation planning; sometimes, however, KEQs are already prescribed by an evaluation system or a previously developed evaluation framework.

Try not to have too many Key Evaluation Questions – a maximum of 5-7 main questions is usually sufficient. It might also be useful to have some more specific questions under the KEQs.

Key Evaluation Questions should be developed by considering the type of evaluation being done, its intended users, its intended uses (purposes), and the evaluative criteria being used. In particular, it can be helpful to imagine scenarios in which the answers to the KEQs would be used – to check that the KEQs are likely to be relevant and useful and that they cover the range of issues the evaluation is intended to address. (This process can also help to identify the types of data that might be feasible and credible to use to answer the KEQs.)
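One practical way to keep KEQs tied to decisions about data is an evaluation matrix that maps each KEQ to its more specific sub-questions and to candidate data sources. Below is a minimal sketch in Python; the questions and data sources shown are hypothetical examples, not prescribed by any particular framework.

```python
# A minimal sketch of an evaluation matrix: each high-level KEQ is mapped
# to more specific sub-questions and to the data sources that could answer
# them. All entries are hypothetical examples.
evaluation_matrix = {
    "How well did the program work?": {
        "sub_questions": [
            "Did the program produce the intended outcomes?",
            "For whom, and in what circumstances?",
        ],
        "data_sources": ["outcome survey", "administrative records"],
    },
    "Is the program being implemented as intended?": {
        "sub_questions": ["Are participants being reached as planned?"],
        "data_sources": ["monitoring data", "staff interviews"],
    },
}

# A simple completeness check: flag any KEQ with no planned data source,
# i.e. a question the current design cannot yet answer.
for keq, detail in evaluation_matrix.items():
    if not detail["data_sources"]:
        print(f"No data source planned for: {keq}")
```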

The following information has been taken from the New South Wales Government, Department of Premier and Cabinet Evaluation Toolkit, which BetterEvaluation helped to develop.

Here are some typical key evaluation questions for the three main types of evaluation:

Key evaluation questions for the main types of evaluation

Process evaluation
  • How is the program being implemented?
  • How appropriate are the processes compared with quality standards?
  • Is the program being implemented correctly?
  • Are participants being reached as intended?
  • How satisfied are program clients? For which clients?
  • What has been done in an innovative way?

Outcome evaluation (or impact evaluation)
  • How well did the program work?
  • Did the program produce or contribute to the intended outcomes in the short, medium and long term?
  • For whom, in what ways and in what circumstances?
  • What unintended outcomes (positive and negative) were produced?
  • To what extent can changes be attributed to the program?
  • What were the particular features of the program and context that made a difference?
  • What was the influence of other factors?

Economic evaluation (cost-effectiveness analysis and cost-benefit analysis) – see the sketch after this list
  • What has been the ratio of costs to benefits?
  • What is the most cost-effective option?
  • Has the intervention been cost-effective (compared to alternatives)?
  • Is the program the best use of resources?
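To make the economic questions more concrete, here is a minimal sketch of a benefit-cost ratio calculation using discounted (present-value) flows. All figures and the discount rate are invented for illustration.

```python
# Minimal sketch: benefit-cost ratio from discounted yearly flows.
# The costs, benefits, and discount rate below are invented examples.
costs = [100_000, 20_000, 20_000]   # program costs in years 0, 1, 2
benefits = [0, 80_000, 90_000]      # estimated benefits in years 0, 1, 2
discount_rate = 0.05

def present_value(flows, rate):
    """Discount a list of yearly flows back to year 0."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

bcr = present_value(benefits, discount_rate) / present_value(costs, discount_rate)
print(f"Benefit-cost ratio: {bcr:.2f}")  # a ratio above 1 suggests benefits exceed costs
```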

Appropriateness, effectiveness and efficiency

Key evaluation questions are often grouped into three broad categories, assessing whether the program is appropriate, effective and efficient.

Organising key evaluation questions under these categories allows an assessment of the degree to which a particular program, in particular circumstances, is appropriate, effective and efficient. Suitable questions under these categories will vary across the different types of evaluation (process, outcome or economic).

Typical key evaluation questions

Appropriateness
  • To what extent does the program address an identified need?
  • How well does the program align with government and agency priorities?
  • Does the program represent a legitimate role for government?

Effectiveness
  • To what extent is the program achieving the intended outcomes, in the short, medium and long term?
  • To what extent is the program producing worthwhile results (outputs, outcomes) and/or meeting each of its objectives?

Efficiency
  • Do the outcomes of the program represent value for money?
  • To what extent is the relationship between inputs and outputs timely, cost-effective and to expected standards?

Example

The Evaluation of the Stronger Families and Communities Strategy used clear Key Evaluation Questions to ensure a coherent evaluation despite the scale and diversity of what was being evaluated – an evaluation over 3 years, covering more than 600 different projects funded through 5 different funding initiatives, and producing 7 issues papers and 11 case study reports (including studies of particular funding initiatives) as well as ongoing progress reports and a final report.

The Key Evaluation Questions were developed through an extensive consultative process to establish the evaluation framework, completed before the contract to conduct the evaluation was advertised. The KEQs were:

  1. How is the Strategy contributing to family and community strength in the short-term, medium-term, and longer-term?
  2. To what extent has the Strategy produced unintended outcomes (positive and negative)?
  3. What were the costs and benefits of the Strategy relative to similar national and international interventions? (Given data limitations, this was revised to ask the question in ‘broad, qualitative terms’.)
  4. What were the particular features of the Strategy that made a difference?
  5. What is helping or hindering the initiatives to achieve their objectives? What explains why some initiatives work? In particular, does the interaction between different initiatives contribute to achieving better outcomes?
  6. How does the Strategy contribute to the achievement of outcomes in conjunction with other initiatives, programs or services in the area?
  7. What else is helping or hindering the Strategy to achieve its objectives and outcomes? What works best for whom, why and when?
  8. How can the Strategy achieve better outcomes?

The KEQs were used to structure progress reports and the final report, providing a clear framework for bringing together diverse evidence and an emerging narrative about the findings.

 

Determine What ‘Success’ Looks Like

Evaluation is essentially about values, asking questions such as: What is good, better, best? Have things improved or got worse? How can they be improved? It is therefore important for evaluations to be systematic and transparent about the values used to decide criteria and standards.

Criteria

Criteria refer to the aspects of an intervention that are important to consider when deciding whether or not, and in what ways, it has been a success or a failure, or when producing an overall judgement of performance. There are different types of criteria:

  • Positive outcomes and impacts: for example, should childcare be judged in terms of its success in supporting early childhood development or in supporting parents to engage in education or work? If it is both, how should they be weighted?
  • Negative outcomes and impacts: for example, an infrastructure development might produce negative unintended effects (e.g. soil erosion caused by a new road) as well as positive intended effects.
  • Distribution of costs and benefits: for example, is it important for everyone to receive some benefit, or the same benefit, or for the intervention to be targeted so that the most disadvantaged receive more benefit?
  • Resources and timing: for example, is there a need for results to be achieved within a certain timeframe?
  • Processes: for example, use of recyclable materials, or providing access to groups with restricted mobility.

Standards

Standards refer to the levels of performance required for each of the criteria. For example, if a project aims to reduce maternal mortality, what level of performance is needed for it to be considered successful? Any reduction?  A reduction of at least xx%?  A reduction of at least xx in absolute terms? A reduction to a rate of x.x that matches other similar regions, or matches official targets?
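As an illustration of how much the judgement depends on the agreed standard, the sketch below checks one invented maternal mortality result against several alternative standards. The baseline, observed rate and thresholds are all assumptions made up for the example.

```python
# Minimal sketch: the same observed result judged against different standards.
# Baseline and observed rates (deaths per 100,000 live births) are invented.
baseline = 250.0
observed = 215.0

standards = {
    "any reduction": observed < baseline,
    "a reduction of at least 20%": observed <= baseline * 0.80,
    "at least 30 fewer deaths per 100,000": (baseline - observed) >= 30,
    "at or below a target rate of 200": observed <= 200.0,
}

# The same data can count as success under one standard and failure under
# another, which is why the standard must be agreed in advance.
for description, met in standards.items():
    print(f"{description}: {'met' if met else 'not met'}")
```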

Criteria and standards need to be agreed on in order to identify the data that need to be gathered for an evaluation.

In addition, these data need to be combined to form an overall judgement of success or failure, or to rank alternatives against each other. For example, if a road project achieves its economic objectives but produces environmental damage, should it be considered a success overall? How much damage, and at whose cost, would be enough to outweigh the positive impacts? These issues are addressed under the task ‘Synthesise data from a single evaluation’.
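One common option for combining criteria (though by no means the only one) is a weighted rubric: score each criterion, weight it by its agreed importance, and compare the total with an agreed threshold or minimum standards. The criteria, weights and scores below are hypothetical.

```python
# Minimal sketch of a weighted-rubric synthesis across criteria.
# Criteria, weights (summing to 1.0) and 0-4 scores are invented examples.
criteria = {
    "economic outcomes":    {"weight": 0.4, "score": 4},
    "environmental damage": {"weight": 0.3, "score": 1},  # low score = serious damage
    "equity of benefits":   {"weight": 0.3, "score": 3},
}

overall = sum(c["weight"] * c["score"] for c in criteria.values())
print(f"Weighted score: {overall:.1f} out of 4")

# An agreed hard rule can override the weighted total - for example, a
# standard that any criterion scoring below 2 rules out an overall 'success'.
if any(c["score"] < 2 for c in criteria.values()):
    print("At least one criterion falls below the agreed minimum standard.")
```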

Options

Some options are used to identify possible criteria and standards that could be used in an evaluation, drawing on formal and informal sources, and some are used to negotiate which should be used and how they should be weighted.

Formal statements of values

  • OECD-DAC criteria: high-level evaluation criteria set out by the OECD’s Development Assistance Committee, which must be operationalised for each evaluation.
  • Millennium Development Goals (MDGs): a set of time-bound and quantified goals and targets developed to help track progress in eradicating poverty.
  • Standards, evaluative criteria and benchmarks: developing explicit standards, evaluative criteria or benchmarks, or using existing relevant ones, to define values.
  • Stated goals and objectives (including legislative review and policy statements): stating the program’s objectives and goals so they can be used to assess program success.

Articulate and document tacit values

  • Hierarchical Card Sorting (HCS): a participatory card-sorting option designed to provide insight into how people categorise and rank different phenomena.
  • Open Space Technology: facilitating a group of 5-500 people in which a central purpose, issue, or task is addressed without a formal initial agenda.
  • Photovoice: using cameras to allow participants (often intended beneficiaries) to take and share photos that describe the issues that are important to them.
  • Rich Pictures: exploring, acknowledging and defining a situation through diagrams in order to create a preliminary mental model.
  • Stories of change: showing what is valued through the use of specific narratives of events.
  • Values Clarification Interviews: interviewing key informants and intended beneficiaries to identify what they value.
  • Values clarification public opinion questionnaires: seeking feedback from large numbers of people about their priorities through the use of questionnaires.

Negotiate between different values

  • Concept Mapping: negotiating values in order to frame the evaluation.
  • Delphi Study: generating consensus without face-to-face contact by soliciting opinions from individuals in an iterative process of answering questions.
  • Dotmocracy: recording participants’ opinions by using sticky dots to record agreement or disagreement with written statements.
  • Open Space Technology: facilitating a group of 5-500 people in which a central purpose, issue, or task is addressed without a formal initial agenda.
  • Public Consultations: conducting public meetings to provide an opportunity for the community to raise issues of concern and respond to options.

Approaches

  • Critical Systems Heuristics: an approach used to surface, elaborate, and critically consider boundary judgements, that is, the ways in which people and groups decide what is relevant to the system of interest (any situation of concern).
  • Participatory evaluation: involving key stakeholders in the evaluation process.

 
