Themes

While the Rainbow Framework can be applied to all kinds of evaluations, there are particular issues in evaluating different types of interventions and in doing different types of evaluations.

Thematic pages bring together resources that relate to a particular theme, including examples, guides, and specialist communities of practice.

Thematic pages can cover:

  • evaluations in particular sectors (for example, agricultural research; peacekeeping; water, sanitation and hygiene (WASH))
  • evaluations of particular types of interventions (for example, capacity development, policy advocacy)
  • particular types of evaluation (for example, impact evaluation, needs analysis)
  • cross-cutting issues in evaluation (for example, gender-sensitive evaluation).

If you’d like to suggest adding other themes, please contact us.

List of Themes

  • Sector
  • Types of intervention
  • Types of evaluation
  • Cross-cutting themes

Source:
Themes. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes_overview

Climate Change Adaptation and Mitigation

Climate Change Adaptation and Mitigation projects and programs present particular challenges for evaluation. For specific information about these, check the resources identified below.

Resources

SEA Change

A Southeast Asia Community of Practice for Monitoring and Evaluation of Climate Change Interventions

http://www.seachangecop.org


Climate-Eval

An international community of practice on climate change evaluation.

http://www.climate-eval.org/


Global Environment Facility

Evaluation office

http://www.thegef.org/gef/eo_office
Source:
Climate Change Adaptation and Mitigation. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/climate_change

Complexity

An issue of increasing interest in evaluation, especially development evaluation, is whether and how we might apply ideas and methods from complexity science to evaluation.

Complexity ideas and methods have important applications for how we think about programs and policies, how we collect and analyse data, and how we report findings and support their use.

In 2014 this is one of the priority themes that BetterEvaluation will be focusing on, with events, activities and resources planned throughout the year. We’ll be updating this page throughout the year. You can also follow this discussion, and add to it, on Twitter using the hashtags #complexity and #eval.

Complexity is sometimes dismissed as a ‘trendy’ term used to avoid accountability and planning. But it raises two important ideas:

  • Multiple components (sometimes labelled ‘complicated’)
  • Emergence

Interventions can have some simple aspects, some complicated aspects and some complex aspects, and it is more useful to identify these than to classify a whole intervention as complex.

Multiple components

Many evaluations have to deal with programs with multiple components, multiple levels of implementation, multiple implementing agencies with multiple agendas, and long causal chains with many intermediate outcomes, or outcomes that can only be achieved through a ‘causal package’ involving multiple interventions or favourable contexts.

In these situations, evaluations have to be based on a logic model and data collection and analysis plan that provides information about the different components which all need to work effectively and together, or processes that work differently in different contexts, or which only work in combination with other programs or favourable environments. It is essential to report on these in terms of ‘what works for whom in what contexts’.

In some frameworks (especially two classic papers, by Glouberman and Zimmerman (2002) (see http://albordedelcaos.com/2011/09/27/conociendo-el-blog/ for a Spanish-language version of these ideas) and by Kurtz and Snowden (2003)), this aspect is referred to as ‘complicated’ to distinguish it from emergence.

Emergence

Many evaluations have to deal with programs that involve emergent and responsive strategies and causal processes which cannot be completely controlled or predicted in advance. While there is an overall goal in mind, the details of the program will unfold and change over time as different people become engaged and as it responds to new challenges and opportunities. Projects that focus on community development or leadership development are particularly likely to have these features.

In these situations, evaluations have to be able to identify and document emergent partners, strategies and outcomes, rather than only paying attention to the objectives and targets identified at the beginning. Real-time evaluation will be needed to answer the question “What is working?” and to inform ongoing adaptation and learning. Effective evaluation will not involve building a detailed model of how the intervention works and calculating the optimal mix of implementation activities – because what is needed, what is possible, and what will be optimal will be always changing.

Resources

Exploring the science of complexity: Ideas and implications for development and humanitarian efforts

The paper details each of the 10 concepts of complexity science, using real world examples where possible. It then examines the implications of each concept for those working in the aid world. Here, we list the 10 concepts for reference, using the next section of this summary to suggest some overall implications of using the concepts for work in international development and humanitarian spheres.

View resource

Discussion Note: Complexity Aware Monitoring

USAID’s Office of Learning, Evaluation and Research (LER) has produced a Discussion Note: Complexity-Aware Monitoring, intended for those seeking cutting-edge solutions to monitoring complex aspects of strategies and projects.

View resource

Complex adaptive systems: A different way of thinking about health care systems

Looking at how complexity science could be used in health systems, which are characterised by nonlinear dynamics and emergent properties arising from diverse populations of individuals interacting with each other, and which are capable of undergoing spontaneous self-organisation.

View resource

 

Source:
Complexity. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/complexity

Evaluability Assessment

An assessment of the extent to which an intervention can be evaluated in a reliable and credible fashion.

Contents

1 What is evaluability?

2 Where is it used?

3 What do Evaluability Assessments examine?

4 How do you do an Evaluability Assessment?

5 How much time and money is involved?

6 When would you not do an Evaluability Assessment?

7 What are the alternatives to an Evaluability Assessment?

An example

Some advice

Some tools

References

Resources

This overview is based on a literature review of Evaluability Assessment commissioned by the UK Department for International Development (DFID) in 2012 and published as a DFID Working Paper (Davies 2013). The review identified 133 documents, including journal articles, books, reports and web pages, published from 1979 onwards. Approximately half of the documents were produced by international development agencies; most of the remaining documents covered American domestic agency experience with Evaluability Assessments (the latter has more recently been summarised by Trevisan and Walser, 2014).

1 What is evaluability?

Amongst international development agencies there appears to be widespread agreement on the meaning of the term “evaluability”. The following definition from the Organisation for Economic Co-operation and Development-Development Assistance Committee (OECD-DAC) is widely quoted and used:

“The extent to which an activity or project can be evaluated in a reliable and credible fashion” (OECD-DAC 2010; p.21)

2 Where is it used?

Evaluability Assessments have been used since the 1970s, initially by government agencies in the United States, and subsequently by a wider range of domestic organisations. International development agencies have been using Evaluability Assessments since 2000. Although the most common focus of an Evaluability Assessment is a single project, Evaluability Assessments have also been carried out on sets of projects, policy areas, country strategies, strategic plans, work plans, and partnerships.

3 What do Evaluability Assessments examine?

The DFID Working Paper (Davies 2013) on Evaluability Assessment identified these dimensions of evaluability:

  • Evaluability “in principle”, given the nature of the project’s theory of change
  • Evaluability “in practice”, given the availability of relevant data and the capacity of management systems to provide it
  • The utility and practicality of an evaluation, given the views and availability of relevant stakeholders.

The overall purpose of an Evaluability Assessment is to inform the timing of an evaluation and to improve the prospects of an evaluation producing useful results. However, the focus and results of an Evaluability Assessment will depend on its timing, as shown below. Early assessments may have wider effects on long-term evaluability, but later assessments may provide the most up-to-date assessment of evaluability.

Project stage  | Evaluability Assessment focus         | Evaluability Assessment results
Design         | Theory of change (ToC)                | Improved project design
Inception      | ToC & data availability               | Improved M&E framework
Implementation | ToC, data availability & stakeholders | Improved evaluation terms of reference (ToRs)

4 How do you do an Evaluability Assessment?

Two forms of advice are commonly provided. The first is about sequencing of activities, given in the form of various stage models. The second is about the contents of inquiries, often structured in the form of checklists.

Stage models include largely predictable (but often iterated) steps involving planning, consultation, data gathering, analysis, report writing and dissemination.  Two of these are worth commenting on here:

The first relates to the planning stage. An important early step in an Evaluability Assessment is the reaching of an agreement on the boundaries of the task, which has two aspects:

  • The extent to which the Evaluability Assessment should proceed from a diagnosis of evaluability on to a prescription and then implementation of changes that are needed to address evaluability problems. For example, revision of a theory of change or development of an M&E framework.
  • The range of project documents and stakeholders that need to be identified and then examined and interviewed respectively. These choices have direct consequences for the scale and duration of the work that needs to be done.

The second relates to the analysis stage, where two tasks can be identified:

  • At the base is the synthesis of answers from multiple documents and interviews in respect of a specific checklist question. Here, the assessment needs to: (a) assess the validity and reliability of the data; and then (b) identify the consensus and outlier views.
  • At the next level is the synthesis of answers across multiple questions within a given evaluability dimension. Here, the assessment needs to: (a) identify any “obstacle” problems that must be removed before any other progress can be made; and then (b) assess the relative importance of all other problems.

Checklists are used by many international agencies, with varying degrees of rigor and flexibility. At best, their use provides an accountable means of ensuring systematic coverage of all relevant issues. The DFID Working Paper synthesised the checklists used by 11 different agencies into a set of three checklists that cover the dimensions of evaluability listed above. These can provide a useful “starter pack” which can be adapted according to circumstances.  If an aggregate score on evaluability (or on multiple aspects of evaluability) needs to be calculated, then explicit attention needs to be given to the weighting given to each item on a checklist.  It is unlikely that all items will be of equal importance.
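To make the weighting point concrete, here is a minimal sketch in Python of how an aggregate score might be computed from a weighted checklist. The item names, 0–4 scores, and weights are invented for illustration and are not taken from the DFID “starter pack” checklists.

```python
# Illustrative only: items, 0-4 scores, and weights are invented,
# not drawn from the DFID checklists.
checklist = {
    # item: (score, weight)
    "Theory of change is documented":       (3, 2.0),
    "Indicators defined for key outcomes":  (2, 1.5),
    "Baseline data available":              (1, 1.5),
    "Stakeholders available for interview": (4, 1.0),
}

def aggregate_score(items):
    """Weighted mean of item scores, expressed on the same 0-4 scale."""
    total_weight = sum(weight for _, weight in items.values())
    weighted_sum = sum(score * weight for score, weight in items.values())
    return weighted_sum / total_weight

print(f"Aggregate evaluability score: {aggregate_score(checklist):.2f} / 4")
```

With these invented numbers the weighted aggregate is about 2.42, rather than the unweighted mean of 2.50, which illustrates why the weighting given to each item should be made explicit rather than implicitly treating all items as equal.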

5 How much time and money is involved?

The time required to complete an Evaluability Assessment can range from a few days to a month or more. A key determinant is the extent to which stakeholder consultations are required and whether multiple projects are involved. Evaluability Assessments at the design stage may be carried out largely on the basis of desk-based work, whereas Evaluability Assessments prior to a proposed evaluation are much more likely to require extensive stakeholder consultation.

It is the relationship between the cost of an Evaluability Assessment and the cost of an evaluation that is important, rather than its absolute cost. When the proportionate cost of an Evaluability Assessment is high, correspondingly large improvements in evaluation results will be needed to justify those costs.

6 When would you not do an Evaluability Assessment?

Some project designs are manifestly unevaluable and some M&E frameworks are manifestly inadequate at first glance. In these circumstances, an Evaluability Assessment would not be needed to make a decision about whether to go ahead with an evaluation. Efforts need to focus on the more immediate tasks of improving project design and/or the M&E framework.

In other circumstances, the cost of a proposed evaluation may be quite small, and thus, the cost- effectiveness of making an additional investment in an Evaluability Assessment may be questionable. On the other hand, with large projects, even those that appear relatively evaluable, investment in an Evaluability Assessment could still deliver cost-effective changes.

7 What are the alternatives to an Evaluability Assessment?

At the design and approval stages of a project, the associated quality assurance processes can include evaluability-oriented questions. The process of Evaluability Assessment can in effect be institutionalised within existing systems rather than contracted as a special event.

At the inception stage, some organisations may routinely commission the development of an M&E framework which should intrinsically address evaluability questions. Or, they may have established procedures for reviewing the M&E system which are more purpose-specific than a generic Evaluability Assessment tool of the kind provided by the DFID working paper.

Prior to a proposed evaluation, some organisations may commission preparatory work that takes on a wider ambit than an Evaluability Assessment. Approach Papers may cover issues listed in Evaluability Assessment checklists but also scan a much wider literature for evidence for and against the relevance and effectiveness of the type(s) of interventions being evaluated.

An example

In 2000, ITAD, a UK consultancy firm, carried out an Evaluability Assessment of 28 human rights and governance projects funded by the Swedish International Development Cooperation Agency (Sida) in four countries in Africa and Latin America (Poate et al. 2000). This assessment is impressive in a number of respects. Analysis was done with the aid of a structured checklist that helped minimise divergences of treatment by the consultants who worked on the study. Nineteen evaluation criteria were investigated by means of subsidiary questions, and a score given for each criterion. The most common evaluability problems found related to unavailability of data, followed by issues of project design, including insufficient clarity of purpose and the difficulties of causal attribution. Nevertheless, the authors were able to spell out a range of evaluation options that could be explored, along with the type of capacity building work needed to address the identified issues. Their report includes a full data set of checklist ratings of all projects on all criteria, thus enabling others to do further analysis of this experience with other research or evaluation purposes in mind.
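The kind of reanalysis that such a published dataset permits can be illustrated with a small Python sketch. The project names, criteria, and scores below are invented for illustration, not ITAD’s actual data; the idea is simply to tally how often each criterion is rated a serious problem across projects.

```python
from collections import Counter

# Invented illustration of an ITAD-style dataset: each project is rated
# on each evaluability criterion (1 = serious problem ... 4 = no problem).
ratings = {
    "Project 1": {"Clarity of purpose": 1, "Data availability": 2, "Causal attribution": 1},
    "Project 2": {"Clarity of purpose": 3, "Data availability": 1, "Causal attribution": 2},
    "Project 3": {"Clarity of purpose": 2, "Data availability": 1, "Causal attribution": 1},
}

# Count how often each criterion receives the lowest rating (a score of 1).
problems = Counter(
    criterion
    for scores in ratings.values()
    for criterion, score in scores.items()
    if score == 1
)

for criterion, count in problems.most_common():
    print(f"{criterion}: a serious problem in {count} of {len(ratings)} projects")
```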

Some advice

The design of checklists can be usefully informed by theory, and not just by ad hoc or experience-based conjecture. Sources can include relevant evaluation standards, codes of ethics, and syntheses of studies of evaluation use.

Checklist weightings have been used by a number of agencies. Because of the diversity of possible approaches to evaluation and specific evaluation contexts, it is hard to justify any universally applicable set of weightings for a given checklist. However, weightings can be assigned “after the fact” (i.e., after a specific Evaluability Assessment has been carried out for a particular project in a given context). Like all good weightings, their use needs to be accompanied by text explanations.

Some tools

The attached tools can be further adapted to specific needs/contexts.  Please feel free to share your experience with Evaluability Assessment in the comments to this page or recommend additional resources.

References

Davies, R. (2013). Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. DFID Working Paper 40. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/248656/wp40-planning-eval-assessments.pdf

OECD-DAC (2010). Glossary of Key Terms in Evaluation and Results Based Management. Paris: OECD-DAC. Available at: http://www.oecd.org/development/peer-reviews/2754804.pdf

Poate, D., Riddell, R., Curran, T. and Chapman, N. (2000). The Evaluability of Democracy and Human Rights Projects, Volumes 1 & 2. Available at: www.oecd.org/derec/sweden/46223163.pdf

Trevisan, M. and Walser, T. (2014). Evaluability Assessment: Improving Evaluation Quality and Use. SAGE Publications. See: http://www.uk.sagepub.com/textbooks/Book240728

Resources

Literature

Planning Evaluability Assessments: A Synthesis of the Literature with Recommendations. (Davies R, 2013)

This report presents a synthesis of the literature on Evaluability Assessments up to 2012. The main focus of the synthesis is on the experience of international agencies and on recommendations relevant to their field of work. The synthesis provides recommendations about the use of evaluability assessments.

A bibliography on evaluability assessment. (Davies R, 2012).

This bibliography includes links to a range of literature related to evaluability assessment.

Evaluability Assessment: Improving Evaluation Quality and Use. (Trevisan M, Walser T, 2014), Sage Publications.

This book summarises a wealth of American domestic agency experience. Stages of an Evaluability Assessment process are described by individual chapters, each of which includes a checklist of issues to examine, along with case examples.

Guides

Evaluability Assessment: Examining the Readiness of a Program for Evaluation: This guide from the Juvenile Justice Evaluation Center helps juvenile justice program managers implement evaluability assessment in order to ensure that programs are ready for evaluation.

Guidance Note on Carrying Out an Evaluability Assessment: This guide from the United Nations Development Fund for Women (UNIFEM) was developed to ensure program managers understand the key concepts behind evaluability assessment.

Evaluability Assessments and Choice of Evaluation Methods: This webinar highlights the importance of evaluability assessments for development projects, as well as discussing the suitability of various evaluation methods that are available to a manager.

Tools

Evaluability Assessment Template: This template from the United Nations Office on Drugs and Crime (UNODC) is designed to take the user through a step by step process of evaluability assessment.

Source:
Evaluability Assessment. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/evaluability_assessment

Feminist Evaluation

Feminist evaluation (FE) emphasizes participatory, empowering, and social justice agendas. While all evaluation approaches are laden with their own, often implicit, values, few assert their values as openly as feminist evaluation. Unlike most gender approaches, feminist evaluation does not provide a framework or advocate a precise approach; rather, feminist evaluation is often defined as a way of thinking about evaluation. (See, for example, Podems, 2014; Podems 2010; Beardsley & Hughes Miller, 2002; Hirsch & Keller, 1990; Hughes, 2002; McRobbie, 1982).

Feminist evaluation has a strong overlap with some of the key characteristics of other evaluation and research approaches (see  figure below); if you draw upon or appreciate these other approaches, then a feminist evaluation approach might be something that adds value to your practice.

What are the basic concepts that underpin feminist evaluation?

Feminist evaluation is based on feminist research, which in turn is based on feminist theory.  Feminist evaluation theorists often list six basic tenets as the fundamental elements of a feminist evaluation:

  • Feminist evaluation has as a central focus the gender inequities that lead to social injustice.
  • Discrimination or inequality based on gender is systemic and structural.
  • Evaluation is a political activity; the contexts in which evaluation operates are politicized; and the personal experiences, perspectives, and characteristics evaluators bring to evaluations (and with which we interact) lead to a particular political stance. A feminist evaluation encourages an evaluator to view her- or himself as an activist.
  • Knowledge is a powerful resource that serves an explicit or implicit purpose.
  • Knowledge should be a resource of and for the people who create, hold, and share it. Consequently, the evaluation or research process can lead to significant negative or positive effects on the people involved in the evaluation/research. Knowledge and values are culturally, socially, and temporally contingent. Knowledge is also filtered through the knower.
  • There are multiple ways of knowing; some ways are privileged over others.

(Sielbeck-Bowen et al. 2002: pp. 3–4)

FE is particularly well suited to understanding inequities and encourages evaluators to use their empirical findings to advocate for social change:

  • FE questions what it means to do research, to question authority, to examine gender issues, to examine the lives of women, and to promote social change.
  • FE has as a central focus the gender inequities that lead to social injustice.
  • FE views participation as a political activity and views knowledge and participation in discourse as a form of power.
  • FE also seeks to ensure that the narratives and experiences of women in evaluations are valued equally to those of men, and does not treat women as a homogenous group.

(Sielbeck-Bowen et al. 2002)

What’s the difference between gender approaches and feminist evaluation?

Feminist theorists used the terms ‘sex’ to describe anatomical differences between females and males and ‘gender’ to refer to socially constructed relationships between women and men (Podems 2010).  Fletcher (2015) refers to gender as “a process of judgement and value (a social hierarchy), related to stereotypes and norms about masculinity or femininity, regardless of your born sex category. It is intimately entwined with sexuality and works alongside other social hierarchies, which most commonly form around race/ethnicity and class/caste/socio-economic status. In some countries and cultures, other hierarchies—such as those related to age or religious beliefs—are also important.” (see BetterEvaluation Blog: Gender injustice and inequality: what helps in assessing impact? and Fletcher 2015).

In a brief historical overview of ‘gender approaches’, Podems (2010) refers to:

  • interventions that took a welfare approach (e.g., handouts and services) to helping women in the developing world without challenging women’s status or the prevailing patriarchal structures (starting in the 1950s and 1960s but fashionable well into the 1990s)
  • women in development (WID) approaches that focused on making women more efficient in what they were doing so as to alleviate poverty (starting in the 1970s)
  • women and development (WAD) approaches that focused on improving the macro context (i.e., economic, political and social structures of developing nations) in the assumption that this would benefit women (starting in the 1970s)
  • gender and development (GAD) approaches that focus on the interconnections of gender, class, and race and the social construction of their defining characteristics (starting in the 1980s)

While acknowledging that some gender approaches do incorporate one or more feminist elements, key differences between feminist evaluation and gender approaches may be summed up as follows:

Gender Approaches | Feminist Approaches
Identify the differences between women and men in different ways. | Explore why differences between women and men exist.
Do not challenge women’s position in society, but rather map, document and record it. | Challenge women’s subordinate position; empirical results aim to strategically affect women’s lives, as well as the lives of marginalized persons.
View women as a homogenous group, without distinguishing other factors, such as race, income level or marital status, that make a difference. | Acknowledge and value differences; do not consider women a homogenous category.
Assume that equality of women and men is the end goal, and design and value evaluations with this understanding. | Acknowledge that women may not want the same things as men, and design and value evaluations accordingly.
Do not encourage an evaluator to reflect on her/his values or on how their vision of the world influences the design and its findings. | Emphasize that an evaluator needs to be reflexive and open, and recognize overtly that evaluations are not value-free.
Interpret gender as “men” and “women”. | Recognise other gender identities in addition to male and female.
Collect gender-sensitive data. | When collecting data, value different ways of knowing; seek to hear and represent different voices; and provide a space for women or disempowered groups within the same contexts to be heard.

Advice for using feminist evaluation

  • You don’t need to be a feminist to use feminist evaluation. While there are different schools of thought, feminist evaluation should not be exclusively for those that identify as feminists. The belief that only feminists conduct feminist evaluation keeps the approach out of mainstream evaluation and prevents non-feminists from exploring its potential use in their own evaluation activities. Choosing a feminist evaluation approach, like choosing any evaluation approach in any part of the world, needs to be done with careful consideration of multiple factors. Feminist evaluation should be applied based on its cultural, social, and technical appropriateness to a given context and should lead to a feasible, useful, appropriate, and credible evaluation (Podems, 2014).
  • Be knowledgeable about what feminist evaluation is, and is not. Many people have a strong reaction to feminist evaluation, and yet few can explain what the approach entails. If appropriate, engage potential users of the evaluation in a discussion of how elements of the approach (or all of it) could enable a credible and more useful evaluation within the particular context in which it is going to be used.
  • Consider removing the label while sticking to using the approach. Having two words (‘feminist’ and ‘evaluation’) that often elicit strong reactions together in one approach can be a challenge. If you believe that elements or the entirety of FE are appropriate to the evaluation process, explicitly introduce each element that you will use, or clearly explain the approach in its entirety, and provide the reasons for choosing it.
  • Adapt as needed. Feminist evaluation can provide a useful complement to, and intermingle with, other evaluation approaches such as democratic evaluation, empowerment evaluation, transformative evaluation, and others. Consider which elements of FE would fill important gaps or help to emphasise important ways of working or diversify results.
  • Get involved and take it one step further. Peer support can be invaluable when practising evaluation. This is particularly important for feminist evaluation which is currently not widely practised.

Examples of feminist evaluations

Europe:

Evaluating gender structural change: the experience of the GENOVATE project’s evaluation

María Bustelo, Julia Espinosa and María Velasco, Complutense University of Madrid

The Transforming Organisational Culture for Gender Equality in Research and Innovation (GENOVATE) project is a European action-research project implemented by seven European universities with the main goal of promoting gender structural change. Complutense University of Madrid evaluates the GENOVATE project and trains and supports partners to evaluate their own gender equality action plans (GEAPs). The evaluation is formative, continuous, and carried out in collaboration with its seven partner teams. The evaluation’s goal is to contribute to learning about gender change and how to promote more gender-transformative actions.

View presentation slides from the 11th EES Biennial Conference presentation: Evaluation and organizational change pro-gender equality: the experience of evaluating the GENOVATE project (Julia Espinosa)

Asia:

Evaluation of the Action for Equality Program. Equal Community Foundation India

Zaveri, Sonal. Independent Consultant, Secretary, Community of Evaluators South Asia; International Adviser, Child-to-Child Trust, UK; Member, EvalGender+ Management Group

Working with men and boys is increasingly recognised as an important approach to addressing and preventing gender-based violence and discrimination. Equal Community Foundation (ECF) has worked with more than 3,000 adolescent boys across 28 low-income urban communities in Pune and Mumbai, India. The purpose of the evaluation was to assess the outcomes of ECF’s community programmes in order to inform the development of a strategy for future work: to improve and replicate the programmes and to scale up the approach. A gender-transformative or feminist lens, along with utilization-focused and developmental approaches, was used to evaluate the program and assess to what extent deeply entrenched attitudes towards male preference and female subordination had changed, what factors supported the change, who the gatekeepers were, and what the impact of these changes was on boys and their families in communities.

North and Central America:

Explication of Evaluator Values: Framing Matters

Kathryn Sielbeck-Mathes, PhD; Rebecca Selove, PhD

Dr. Sielbeck-Mathes uses evaluation of three rural co-occurring mental health and substance abuse treatment programs to describe evaluation processes of (1) identifying and articulating feminist values internally, (2) thinking through how to frame important messages about trauma, substance abuse, and treatment for women so they are communicated in a language that is translatable and transferable, and (3) designing and responding to the analysis process and findings so as to improve outcomes immediately for women and in future programs. Dr. Sielbeck-Mathes emphasizes that in order to use the evaluation process to bring about social change, feminist values and evaluation findings should be translated into a meaningful, compelling, and actionable language.

See Sielbeck-Mathes and Selove (2014), and read their blog on this topic on AEA365: FIE TIG Week: Kathryn Sielbeck-Mathes and Rebecca Selove on Feminist Evaluation and Framing.

Integrating Gender into Canada’s Federal Government Evaluation Function: The Policy Dimension

Jane Whynot, University of Ottawa

The Canadian federal government’s evaluation practices are largely dictated by a centralized evaluation policy affecting individual departments and agencies. These organizations are held accountable to standards and directives set forth in the suite of evaluation policy tools. Applying feminist evaluation principles to the suite of evaluation policy tools provides an interesting perspective from which to identify and strengthen opportunities to incorporate gender and other intersections of diversity. Early research highlights conflicting priorities between the suite of policy tools and feminist evaluation principles. Presuming these tools remain static, this paper offers suggestions for others creating opportunities to incorporate gender and other elements of diversity in their own respective evaluation functions.

Transforming a Gender-Neutral Evaluation into a Feminist Evaluation

Fabiola Amariles, Founder and Director of Learning for Impact Consulting, Board Member of Latin American Network of Women in Management, Member of the Management Group of EvalGender+

Silvia Salinas Mulder, independent evaluation consultant; President of the Bolivian Monitoring and Evaluation Network (REDMEBOL); creator and co-manager of EvalGénero, the Spanish-speaking community of practice on gender and evaluation; founding member of the Latin America and Caribbean Network of Women in Management (REDWIM)

This final evaluation was an experience of setting a strategy that was key to gaining trust and credibility for analyzing gender issues in a “neutral” evaluation process. The evaluation team was proactive and used negotiation skills to reach agreements and consensus with project managers and key stakeholders to achieve two objectives: linking results and recommendations to sustainable gender equity and equality; and sensitizing program staff and other actors to the importance of addressing gender equity and equality issues in order to advance social change and development. The case illustrates how, from the perspective of a Latin American reality and evaluation practice, the central political objectives of a feminist evaluation were accomplished. Based on evidence, the evaluation process challenged the paradigms of intracommunity homogeneity and equality.

Example cited in Chapter 9, Salinas-Mulder, S. and Amariles, F. (2014). ‘Latin American Feminist Perspectives on Gender Power Issues in Evaluation’, in Feminist Evaluation and Research: Theory and Practice, edited by Sharon Brisolara et al., 2014.

Africa:

Using Feminist Evaluation in a Non-Feminist Setting

Donna Podems, PhD, University of Johannesburg and Director of OtherWISE

In the early 1990s, in Botswana, a nonprofit organization (NPO) established itself on the physical grounds of a rather underfunded government mental institution. The aim of the evaluation was to provide the NPO with data that they could use to improve their program, demonstrate successes, and engage the hospital management in supporting, or at the very least not preventing, the NPO’s work. The evaluation was strongly influenced by feminist evaluation, informed by other approaches, and resulted in credible and useful findings. This small, low-budget, quiet evaluation made a difference in the lives of the invisible (the patients), the undervalued (the nursing staff) and the unrecognized (the NPO).

Resources

Data sets

Gender Statistics Database: The Gender Statistics Database contains gender statistics from all over the European Union (EU) and beyond, at the EU, Member State and European level.

DHS Gender Corner: The DHS Gender Corner provides quantitative information on such topics as domestic violence, women’s status and female genital cutting, and links to gender-related publications based in DHS data.

Discussion Papers

Ohio Women’s Centers’ Reflections on Evaluation & Assessment: The second of the Ohio Women’s Centers’ issue briefs, this paper presents reflections from the Ohio Women’s Centers on evaluation, its role in their work, and issues related to its accomplishment.

Capturing changes in women’s lives: the experiences of Oxfam Canada in applying feminist evaluation principles to monitoring and evaluation practice: This article describes Oxfam Canada’s efforts to develop a mixed-methods approach to monitoring, evaluation, and learning rooted in feminist evaluation principles.

Feminist Evaluation and Gender Approaches: There’s a Difference?: This article provides readers with a historical overview and description of feminist evaluation and gender approaches.

Books

Feminist Evaluation and Research: Theory and Practice: This book provides an overview of feminist theory and research strategies as well as detailed discussions of how to use a feminist lens, practical steps and challenges in implementation, and what feminist methods contribute to research and evaluation projects.

Websites and Networks

The My M&E website: The My M&E Website is both a source of knowledge about monitoring and evaluation practices and a network to connect practitioners from around the world. My M&E provides free e-learning opportunities related to gender and evaluation and feminist evaluation.

Gender and Evaluation International Online Community of Practice: The Gender and Evaluation Community’s objective is to bring knowledge building and knowledge sharing into one place, and to share the content and experiences of people involved in the network.

Feminist Issues in Evaluation – AEA Thematic Interest Group: This American Evaluation Association Thematic Interest Group contributes to the annual AEA conference through the sponsoring of sessions and professional development workshops, and serves as a network for members and others interested in feminist issues in evaluation.

References

Beardsley, R. and Hughes Miller, M. (2002). ‘Revisioning the process: A case study in feminist program evaluation’, New Directions for Evaluation. 96:57-70.

Brisolara, S., Seigart, D. and SenGupta, S. (Eds) (2014) Feminist Evaluation and Research: Theory and Practice. The Guilford Press.

Fletcher, G. (2015). Addressing gender in impact evaluation. A Methods Lab Publication. London: Overseas Development Institute & Melbourne: BetterEvaluation.

Hirsch, M. and Keller, E. (1990). ‘Conclusion: Practicing conflict in feminist theory’. In: Hirsch M, Keller E (eds). Conflicts in feminism. p370-385. New York: Routledge.

Hughes, C. (2002). Key concepts in feminist theory and research. London: Sage Publications.

McRobbie, A. (1982). ‘The politics of feminist research: Between talk, text and action’, Feminist Review, 12: 46-48.

Podems, D. R. (2014). ‘Feminist Evaluation for Nonfeminists’ in Feminist Evaluation and Research: Theory and Practice. Edited by Sharon Brisolara, Denise Seigart, and Saumitra SenGupta. Guilford Press: New York.

Podems, D. (2010). ‘Feminist evaluation and gender approaches: There’s a difference?’, Journal of Multidisciplinary Evaluation, 6(14): 1-17.

Sielbeck-Bowen, K., Brisolara, S., Siegart, D., Tischler, C., and Whitmore, E. (2002). ‘Exploring feminist evaluation: The ground from which we rise’, New Directions for Evaluation 96: 3-8. http://onlinelibrary.wiley.com/doi/10.1002/ev.62/abstract

Sielbeck-Mathes, K. and Selove, R. (2014) ‘An Explication of Evaluator Values: Framing Matters’. In S. Brisolara, D. Seigart, and S. SenGupta (Eds) Feminist Evaluation and Research: Theory and Practice pp. 143-150.

Whynot, J. (2015) Integrating Gender into the Canadian Federal Government Evaluation Function. Presentation at The Evaluation Conclave 2015.

Source:
Feminist evaluation. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/feminist_evaluation

Evaluating Capacity Development Results

Unlike programs supporting health, livelihoods, and other impact areas, capacity development does not have stand-alone outcomes. Instead, capacity development supports a diverse set of goals in different sectors, at different levels, through different activities. Capacity development consequently presents multiple monitoring and evaluation opportunities and challenges for practitioners.  Some important issues surrounding capacity development and evaluation are:

  • Differing interpretations of what ‘capacity’ means.  Lately, discussion has focused on ‘adaptive capacity,’ which allows local organizations to remain resilient to change.
  • Defining long-term outcomes of CD rather than just short-term outputs. For example, what kinds of outcomes can we achieve by helping an organization  build a strategic plan?
  • Establishing causal links between particular CD activities, improvements in organizational performance, and the target development impact.
  • Articulating a clear and logical theory of change, accompanied by realistic, need-driven targets rather than donor-driven activities and programs.
  • Choosing appropriate evaluation tools on a case-by-case basis.

Definitions of Capacity Development

The definition of Capacity Development, also referred to as Capacity Building, differs by source. Here are just a few examples:

  • The United Nations Development Program (UNDP) defines CD as “the process through which individuals, organisations, and societies obtain, strengthen, and maintain the capabilities to set and achieve their own development objectives over time.”
  • The Organisation for Economic Co-operation and Development (OECD) defines CD as “the process whereby people, organisations and society as a whole unleash, strengthen, create, adapt and maintain capacity over time.”
  • The Canadian International Development Agency (CIDA) defines CD as “the activities, approaches, strategies, and methodologies which help organizations, groups and individuals to improve their performance, generate development benefits and achieve their objectives.”
  • Pact defines CD as “a continuous process that fosters the abilities and agency of individuals, organizations, and communities to overcome challenges and contribute towards positive social change. Though often developed in response to an immediate and specific issue, capacities are adaptable to future opportunities and challenges.”

A Capacity Development Framework

Beyond a basic definition, Pact has developed a comprehensive CD framework. Pact’s framework breaks capacity down into three parts, which together form the universe of capacity development interventions.

The first part of the capacity development framework describes the range of recipients of capacity development support. This includes individuals and organizations, networks and systems, and complex ecosystems of diverse actors engaged in development processes in multiple ways and with different perspectives on social change. Traditionally, capacity development efforts have focused on the individual and organizational levels. Recently, however, capacity development practitioners have increasingly recognized the importance of working at the system and network levels in order to bring multiple competencies to bear on complex challenges.


The second part of the capacity development framework describes the range of methodologies for capacity development interventions. Capacity development interventions vary from expert-driven consultancy services and trainings to participant-driven peer-to-peer exchanges. The best capacity development programs employ a wide range of intervention types.  The interventions are chosen based on a deep understanding of an issue’s underlying causes and tailored to the local context. Traditionally, capacity development interventions have over-relied on big ticket events such as trainings and workshops.


The third part of Pact’s capacity development framework describes the range of capacities that we seek to develop. These include:

  • Technical capacities related to the impact area of any given intervention.
  • Operational capacities needed to accomplish individual tasks.
  • Systemic capacities to ensure that key functions are performed continuously over time.
  • Adaptive capacities to respond to changes in their operating environment.
  • Influencing capacities enabling an entity to bring about change within its environment.


Any or all of these capacities may be necessary within a given program or country context.

Other Frameworks for Evaluating Capacity Development

There are many other frameworks for looking at capacity development from various angles, and the choice of framework depends heavily on the scope and type of the intervention. No such list is exhaustive: many more approaches to capacity development exist.

Describe Capacity Development

Due to some of the challenges outlined in the “Definitions” section above, most capacity development measurements today still rely on anecdotal evidence of change and assess effectiveness through outputs such as numbers of people trained or strategic plans developed. To address this issue, many international, regional, and national institutions have designed Organizational Capacity Assessment tools to measure capacity development.

However, such tools are typically limited to the short-term results of concrete activities (for example, setting up a new M&E system). They also rarely take into account the influence of the external environment (i.e., changes in the political, economic, legislative, cultural, and social spheres) on the entity whose capacity is being developed. Such assessments cannot demonstrate capacity strengthening outcomes: changes in how the organization behaves and functions, and consequently how capacity development affects the lives of its targeted beneficiaries.

In order to understand the longer-term influence of capacity development on an entity, practitioners need to be able to see whether the entity has improved its performance over time. Pact’s theory of change connects organizational change at the output level (change in the systems, skills, and policies of entities) to changes at the impact level (influence at the community level) through measuring growth in organizational performance. Pact has developed the Organizational Performance Index (OPI) to measure this growth in each individual partner entity, and to analyze trends by country, by region, and around the world.
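As a rough illustration of the kind of trend analysis this enables, the Python sketch below tracks per-partner performance scores across measurement rounds and summarises growth. The partners, years, scale, and scores are invented; Pact’s actual OPI instrument defines its own domains and scoring rubrics.

```python
# Invented data: yearly performance scores on a notional 1-4 scale.
# Pact's actual OPI defines its own domains and scoring rubrics.
scores = {
    "Partner A": {2014: 2.1, 2015: 2.4, 2016: 2.9},
    "Partner B": {2014: 3.0, 2015: 3.0, 2016: 3.2},
    "Partner C": {2014: 1.8, 2015: 2.2, 2016: 2.1},
}

for org, by_year in scores.items():
    years = sorted(by_year)
    first, last = by_year[years[0]], by_year[years[-1]]
    print(f"{org}: {first:.1f} -> {last:.1f} "
          f"(change {last - first:+.1f} over {years[-1] - years[0]} years)")
```

Aggregating such per-partner changes by country or region is then a matter of grouping and averaging, which is how trends beyond the individual entity can be reported.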

A good capacity development evaluation literature review can be found in the materials of the European Commission’s Directorate General for Development and Cooperation.

Examples of New Approaches in CD Evaluation

There are a number of other organizations exploring new approaches to measuring and analyzing the outcomes of capacity development interventions: Root Change with its STAR approach; Global Giving and the Storytelling project; TCC Group and the Advocacy Capacity Assessment Tool; the Foundation Strategy Group’s (FSG) Shared Measurement system; Deloitte’s Maturity Model Assessment Tool; and the popular Outcome Mapping methodology.

Source:
Evaluating Capacity Development Results. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/capacitydevelopment

Evaluating Policy Influence and Advocacy

Influencing and informing policy is the main aim for many development organisations. However, activities directed at policy change are, in general, very hard to monitor and evaluate. As policy change is often a complex process, it is difficult to isolate the impact of a particular intervention from the influence of other factors and various actors. In addition, monitoring and evaluation tools usually used in managing interventions can be difficult to implement in these contexts.

Policy influencing techniques and approaches

Robust evaluation of policy influence is far from unachievable, however. In fact, there have been numerous contributions to the field that help us unpack this tricky subject. Start and Hovland provide a good starting point by proposing a useful heuristic for understanding the variety of policy influencing strategies that exist. They identify four categories of organisations, depending on the mode of influencing and on the role of rational evidence versus value- or interest-based argument used by an organisation. (Start and Hovland’s original diagram, not reproduced here, maps these four categories onto those two dimensions.)

The four categories of policy influencing techniques and approaches characterised by Start and Hovland can serve as a starting point to find advocacy evaluation tools suitable for your organisation. Below we provide summaries of each approach.

Advising

Academic research is most commonly evaluated using academic peer review and the number of citations in research publications. For evaluating policy influencing activities, however, these options are clearly insufficient. As many policy-oriented research institutions rely on donors or public funds, there is a growing need to demonstrate the actual influence of research projects on policy and practice.

Fred Carden, in a paper on this subject, discusses the issues with assessing the influence of research on policy change and emphasises the importance of the context of the situation. He notes, however, that the excessive inclusion of context in the evaluation increases the difficulty of claiming the impact of a particular intervention. He advocates use-oriented approaches, which engage the users of evaluation findings at all levels, improving the validity of the evaluation.

For further practical advice, there are two toolkits well worth browsing: this comprehensive handbook by CIPPEC is designed to improve the performance of research institutions through developing a system for monitoring and evaluating the impact of their own activities on policy change; and Ingie Hovland’s 2007 paper, Making a Difference: M&E of Policy Research, presents a number of evaluation options divided into five categories: strategy and direction, management, outputs, uptake, and outcomes and impacts.

Annette Boaz et al. present a thorough review of a number of popular options used in evaluating the impact of research on policy (including ethnographic and quantitative approaches, focus groups, process tracing, and network mapping and analysis). They summarise the pros and cons of using qualitative and quantitative approaches and suggest that qualitative options, such as semi-structured interviews, documentary analysis, field visits and observations, are more suitable for the analysis of research impact on policy, but they also acknowledge the value of mixed-option approaches. They conclude with a very useful tool for helping design evaluations of this type, which involves eight questions to consider:

  • What conceptual understanding of the relationship between knowledge and policy is informing the evaluation?
  • What are the outcomes of interest?
  • What options might be used to explore the outcomes of interest?
  • How does the evaluation address issues of attribution?
  • What is the direction of travel for the evaluation?
  • Is this a mixed option approach, providing scope for triangulation?
  • Will the options selected capture the context and complexity of the research utilisation pathways, therefore helping to understand how (and whether) change has occurred?
  • Does the timing of the evaluation offer sufficient time for change to occur, without compromising the likely recall capacities of respondents?

Advocacy

Policy influencing based on building public support for a new policy relies on public messaging and campaigning in order to engage large numbers of individuals. Although this approach has been used by various groups for a long time, measuring the actual influence of advocacy activities remains problematic. This is especially the case with changing public attitudes and preferences as there are multiple factors affecting people’s choices and behavioural change.

Harry Jones proposes a range of options that may be useful in assessing the impact of public campaigning. For example, surveys or focus groups may help to both measure and understand the attitudes and preferences of a certain target group. It is also important to monitor the media, as it may be crucial in explaining behavioural change.
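As a concrete illustration, a before-and-after survey comparison can test whether the share of respondents supporting a policy shifted by more than chance would allow. The Python sketch below uses invented counts and a standard chi-squared test; as the text notes, a real campaign evaluation would also need to address attribution.

```python
from scipy.stats import chi2_contingency

# Invented survey counts: [supporters, non-supporters] of the policy,
# from independent samples taken before and after the campaign.
before = [320, 680]   # 32% support among 1,000 respondents
after = [410, 590]    # 41% support among 1,000 respondents

chi2, p_value, dof, expected = chi2_contingency([before, after])
print(f"chi-squared = {chi2:.1f}, p = {p_value:.4f}")
# A small p-value indicates the shift in support is unlikely to be chance
# alone, though it cannot by itself attribute the change to the campaign.
```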

A good example of an advocacy evaluation is presented in a paper by Michael Quinn Patton. He discusses a case in which he assessed the influence of judicial advocacy efforts targeted at the Supreme Court. The intervention’s impact was evaluated using evidence gathered through fieldwork (interviews, document analysis, detailed review of the Court arguments and decision, news analysis, and documentation of the campaign itself), aiming to eliminate alternative or rival explanations until the most valid explanation remained (i.e., using a forensic or “modus operandi” approach).

In response to the difficulties of operationalising and measuring advocacy efforts, Coffman and Reed, in their paper ‘Unique Methods in Advocacy Evaluation’, describe four newly developed options created specifically for assessing advocacy and policy change: Bellwether Methodology, Policymaker Ratings, Intense Period Debriefs, and System Mapping. The Innovation Network has also published a case study example of system mapping used in an evaluation of the advocacy efforts of an international aid and relief organisation.
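System mapping of this kind can be prototyped with standard network-analysis tools. The sketch below uses Python and networkx; the actors and influence links are invented for illustration and are not taken from the Innovation Network case study.

```python
import networkx as nx

# Invented advocacy system map: a directed edge means "influences".
G = nx.DiGraph()
G.add_edges_from([
    ("NGO coalition", "Ministry of Health"),
    ("NGO coalition", "Media outlets"),
    ("Research institute", "Ministry of Health"),
    ("Media outlets", "Public opinion"),
    ("Public opinion", "Parliamentarians"),
    ("Parliamentarians", "Ministry of Health"),
])

# Rank actors by how connected they are within the influence network;
# highly central actors are candidate leverage points for advocacy.
centrality = nx.degree_centrality(G)
for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor}: {score:.2f}")
```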

Lobbying

Lobbying is generally believed to be very hard to capture and analyse in a standardised way, as it relies on informal interaction and takes place in highly fluid contexts. However, Harry Jones lists some tools that can be used to evaluate the contribution of lobbying to policy change, including recording observations from meetings and negotiations, interviewing informants, and conducting qualitative, in-depth analysis of different aspects of lobbying activities.

In addition, dividing lobbying into smaller components may help to evaluate it more effectively. For example, Start and Hovland divide lobbying into three levels: Need to Know, Need to Inform, and Need to Negotiate. They also provide some useful tips for lobbyists, emphasising the importance of planning a strategy, preparation and relationship-building, and give suggestions on how to handle lobbying outcomes.

The Centre for Lobbying in the Public Interest helps to improve the advocacy impact of non-profit organisations. It provides a short guide to evaluating lobbyists’ work, advising that a lobbyist’s work, skills and attributes be evaluated using the following categories: the ability to build relationships; perseverance; organising the grassroots; coalition building; the ability to motivate and communicate with various target groups; proficiency in the use of communication technologies; knowledge of the basics of the legislative process; and organisational structure.

Activism

Activism aims to achieve change in policy through pressure. It therefore usually uses confrontation as its advocacy strategy and works from outside policy communities. Activism is an important part of obtaining policy change, yet there is very little academic work dedicated to analysing the tools and options used by activist organisations. Having said that, there are some online resources providing practical tips for non-governmental organisations interested in pursuing direct advocacy.

Green Media Toolshed is committed to providing tools for, and improving the effectiveness of, communications among environmental groups and the public. Its resources contain a collection of campaign manuals, handbooks, and planning tools targeted at civil society in the developing world.

Resources

Overview

  • Overview of current advocacy evaluation practice – The Center for Evaluation Innovation with its regular advocacy evaluation updates, provides a repository of useful articles and information aimed at expanding the advocacy evaluation field.
  • Learning for Change: The Art of Assessing the Impact of Advocacy Work – In this article the authors argue that standardised approaches to monitoring and evaluating policy influence might misguide advocacy institutions, as they do not take into account how complex the process of policy change is. They also emphasise the need for greater cooperation among NGOs, since it is very difficult to assess the influence of one organisation in isolation from others advocating for the same issue.

Tools

  • Point K – provides practical tools to help non-governmental organisations plan and evaluate their programmes.

Website

  • The Center for Evaluation Innovation – with its regular advocacy evaluation updates, provides a repository of useful articles and information aimed at expanding the advocacy evaluation field.

Examples

Source:
Evaluating Policy Influence and Advocacy. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/policy_influence_advocacy

]]>
Evaluating the Performance of an Organization https://coe-nepal.org.np/repository/evaluating-the-performance-of-an-organization/ Fri, 15 Sep 2017 10:12:31 +0000 http://repository.quackfoot.com/?p=96 […]]]>

An organisational assessment is a systematic process for obtaining valid information about the performance of an organisation and the factors that affect performance.  It differs from other types of evaluations because the assessment focuses on the organisation as the primary unit of analysis.

Organisations are constantly trying to adapt, survive, perform and influence. However, they are not always successful. To better understand what they can or should change to improve their ability to perform, organisations can conduct organisational assessments. This diagnostic tool can help organisations obtain useful data on their performance, identify important factors that aid or impede their achievement of results, and situate themselves with respect to competitors. Notably, demand for such assessments is growing. Donors are increasingly trying to deepen their understanding of the performance of the organisations they fund (e.g., government ministries, International Financial Institutions and other multilateral organisations, NGOs, as well as research institutions), not only to determine the contributions of these organisations to development results, but also to better grasp the capacities these organisations have in place to support the achievement of results.

Example

Examples of Application of Organisational Assessments

  • The Multilateral Organisation Performance Assessment Network (MOPAN) is a group of 16 donor countries that have joined forces to assess the performance of the major multilateral organisations which they fund. MOPAN has developed an assessment approach that draws on perceptions and secondary data (i.e., documents) to assess the performance of organisations with a focus on their systems, behaviours, and practices (or capacities).  The exercise is used to encourage discussion among donors and multilateral organisations about ways to enhance organisational effectiveness.
  • In 2011, the International Monetary Fund (IMF) published an evaluative report on its performance in the lead-up to the global financial and economic crisis. Among the factors that hampered the organisation’s ability to detect important vulnerabilities and risks, the report highlights the pervasiveness of cognitive biases and groupthink, as well as the operational structure of the organisation: on the one hand, it was widely believed in the organisation that a financial crisis could not happen in a large advanced economy; on the other, a silo mentality prevented the sharing across units and departments of information that could have helped predict the crisis. The assessment results are being used by the IMF’s board and executive management to revise how the organisation operates.
  • The Center for Effective Philanthropy developed a conceptual framework for assessing the performance of foundations. This framework provides a way for a foundation to infer the social benefit created by its activities relative to the resources it invests, and aims to allow its leaders to understand the performance of their organisation over time and in relation to other foundations. In 2011, the center surveyed the CEOs of American foundations and found that nearly 50% of respondents conducted organisational assessments, notably to learn and improve their foundation’s future performance, to demonstrate accountability for their foundation’s use of resources, and to understand the impact of their foundation’s work.

Frameworks

A number of models or frameworks for conducting an organisational performance assessment exist. The choice of which framework (or combination of frameworks) to use depends on the nature of the organisation, on the purpose of the assessment, and on the context in which the assessed organisation operates. The Reflect and Learn website presents details on the rationale and particularities of various frameworks. As highlighted in a paper presented on the Impact Alliance website, it is important to note that different frameworks are underpinned by different philosophies and theories of organisational change; an organisation should choose a framework that is congruent with its own management beliefs and culture, to ensure that it fully engages in the process and truly benefits from the assessment (EDITOR: The link for this paper is no longer working. Please bear with us while we find an alternate link for this source. July 5, 2016).

One of the most comprehensive frameworks for Organisational Performance Assessment (OPA) is the Institutional and Organisational Assessment Model (IOA Model) elaborated by Universalia and the International Development Research Centre (IDRC). This model views the performance of an organisation as a multidimensional concept, that is, as the balance between the effectiveness, relevance, efficiency, and financial viability of the organisation (see schematic diagram below). The framework also posits that organisational performance should be examined in relation to the organisation’s motivation, capacity and external environment. Indeed, a review of the literature conducted as a preliminary step for developing the framework showed that organisations change in response to factors in their external environment, because of changes in their internal resources (e.g., financial, technological, human), and as a result of fundamental shifts in values within the organisation, which in turn affect the organisational climate, culture and ways of operating. The book Organisational Assessment: A Framework for Improving Performance by Lusthaus et al. further details the IOA Model and discusses key methodological issues for carrying out an OPA. Meanwhile, Enhancing Organisational Performance: A Toolbox for Self-Assessment by Lusthaus et al. provides tools and tips for organisations wishing to conduct an OPA. Both these resources are publicly available (in French and English) on the IDRC website.

 

Source: Universalia Institutional and Organisational Assessment Model (IOA Model)
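
To make the model’s multidimensional notion of performance concrete, here is a minimal illustrative sketch in Python. The four dimension names come from the IOA Model, but everything else – the 1–5 rating scale, the unweighted averaging and the balance check – is a hypothetical convenience for illustration, not part of the model itself.

```python
# Illustrative sketch only: the four dimensions come from the IOA Model,
# but the 1-5 rating scale, equal weighting and balance check are hypothetical.
from statistics import mean, pstdev

DIMENSIONS = ("effectiveness", "relevance", "efficiency", "financial_viability")

def summarise_performance(scores):
    """Combine per-dimension ratings (1 = weak, 5 = strong) into a summary.

    A low spread across dimensions suggests the 'balanced' performance the
    IOA Model emphasises; a high spread flags dimensions lagging the others.
    """
    values = [scores[d] for d in DIMENSIONS]  # KeyError if a dimension is missing
    return {
        "overall": round(mean(values), 2),   # simple unweighted average
        "spread": round(pstdev(values), 2),  # 0 would mean perfectly balanced
        "weakest": min(DIMENSIONS, key=scores.get),
    }

# Example: strong on relevance, lagging on financial viability.
print(summarise_performance({
    "effectiveness": 4.0,
    "relevance": 4.5,
    "efficiency": 3.5,
    "financial_viability": 2.0,
}))
# {'overall': 3.5, 'spread': 0.94, 'weakest': 'financial_viability'}
```

In a real assessment each dimension would of course be examined using the qualitative and quantitative evidence discussed below, not reduced to a single number; the sketch only shows one way the “balance” framing could be operationalised.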

Key resource: As highlighted above, the Reflect and Learn website presents a range of frameworks for conducting organisational performance assessments. The site also introduces the process and management of OPAs and provides a database of concrete tools that organisations can use to carry out assessments.

Important considerations

A Self-Assessment or an External Assessment?

A key decision that an organisation needs to make when undertaking an organisational assessment is whether to self-assess its performance, to commission an external assessment, or to use a combination of both approaches. Some advantages of a self-assessment are that it encourages the organisation’s ownership of the assessment, and thereby increases its acceptance of feedback and its commitment to the evaluation’s recommendations. However, drawbacks of the self-assessment approach are that external stakeholders may question the independence or validity of the findings and may fear that hard issues will not be tackled, due to potential sensitivities within the organisation.

What issues to prioritise?

As underlined by Kathleen Immordino in her book on organisational assessment in the public sector, the questions an organisation needs to ask as part of an assessment depend on the specific context of the organisation. “In any complex organisation, there are innumerable ‘things’ that can be measured and studied. An effective assessment process focuses on those things that have the greatest impact on the way the organisation functions.” Practical considerations which may guide the selection or prioritisation of key questions for an organisational assessment are: i) the time required and resources available to answer each question; ii) the organisation’s purpose for conducting an assessment (for example, a desire to strengthen accountability or a desire to inform a new strategic planning cycle); and iii) the need to balance the interests of multiple stakeholders.

Once an organisation has a clear picture of what it wants to measure, it will need to identify which indicators (quantitative and qualitative) to use to assess its performance. This can be one of the more challenging steps in the organisational assessment process: a plethora of potential indicators may appear useful, and narrowing the list down to those that really matter and that answer the assessment questions can be difficult.

Challenges of selecting indicators as part of the Organisational Assessment process

  • Measuring something within an organisation can increase its importance:  for example, a social service NGO that chooses to track the number of people it serves within a community may end up trying to increase the number of people it visits and to reduce the time spent with each person (with potential consequences for the quality of the services rendered).
  • Simple indicators may not always fit the bill and may need to be combined: Developing adequate indicators to measure the complex dynamics that exist within an organisation can be quite challenging. Organisations may develop a set of carefully considered indicators but need to modify them over time as they analyze their results.
  • Indicators may be interpreted differently amongst stakeholders: For example, an indicator that measures the diversification of funding of an organisation to assess its financial viability can be viewed positively by certain stakeholders, as diversification signifies that the organisation is not overly reliant on a single donor. Meanwhile, other stakeholders may view this measurement in a negative light, as dealing with multiple donors can lead to fragmentation and increased organisational costs in order to manage multiple donor requirements (each donor may have its own priorities, expectations, systems, and evaluation and reporting requirements).

What to consider in selecting the options?

Organisational assessments follow the tradition of a case study methodology. A case study requires a research design that focuses on understanding the unit (the organisation) and can use a combination of qualitative and quantitative data. The choice of options depends on the specific circumstances of the organisation and its stakeholders. We have found that observation (site visits), document review, interviews and surveys are some of the most common options used. Site visits and observation provide vital information on the facilities, physical artifacts, and interactions between staff of an organisation. Meanwhile, a document review is used to follow the written record of the organisation: meeting minutes, reports, policies, etc. Interviews are a prime source of data for organisational assessments and should be conducted with a wide range of respondents (both male and female). Surveys are particularly useful for gathering data from a large number of people and for obtaining information regarding people’s attitudes, perceptions, opinions, preferences and beliefs. These four data collection options can be used to triangulate information and validate conclusions: using more than one data source can help identify discrepancies between what people say and what people do, as well as between what the organisation is and what it ought to be.
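
As a minimal sketch of the triangulation logic just described, the hypothetical code below treats a finding corroborated by two or more of the four data collection options as stronger evidence than one resting on a single source. The source labels, findings and two-source threshold are invented for the example, not drawn from any standard assessment tool.

```python
# Hypothetical sketch of triangulation: findings supported by two or more
# distinct data collection options are treated as corroborated.
SOURCES = {"site_visit", "document_review", "interview", "survey"}

def triangulate(findings, minimum=2):
    """Label each finding by how many distinct options support it."""
    labels = {}
    for finding, sources in findings.items():
        unknown = sources - SOURCES
        if unknown:
            raise ValueError(f"Unrecognised source(s): {unknown}")
        labels[finding] = (
            "corroborated" if len(sources) >= minimum else "single-source: verify"
        )
    return labels

print(triangulate({
    "Staff turnover is rising": {"interview", "document_review"},
    "Teams share data across units": {"survey"},  # what people *say* they do
}))
# {'Staff turnover is rising': 'corroborated',
#  'Teams share data across units': 'single-source: verify'}
```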

Factors that support the use of results of the assessment

Organisational assessment results have a wide variety of uses. For instance, they can be used by an organisation to build its capacity, to validate its work, to promote dialogue with funders or partners, and to help devise its strategies for the future. However, to ensure that the results of an organisational assessment are used, their use must be planned for by the organisation from the outset of the assessment, considered throughout the implementation phase, and revisited even once reports have been submitted and disseminated. Some conditions which enhance the utilisation of the results are when:

  • The purpose and benefits of the assessment are clear to the organisation’s stakeholders.
  • The main focus of the assessment is on learning rather than on accountability.
  • Internal leadership is identified to champion the process and results of the assessment.
  • The organisational culture is one that supports the use of positive and negative feedback in planning and managing change.
  • Stakeholders are involved in the assessment process (from the negotiation and planning stages).
  • Stakeholders see the assessment as relevant, credible, transparent and of high quality, and its findings as having face validity.
  • The assessment team is able to communicate the intent of the assessment, their approach, and the results to senior staff and board members.
  • The report is timely (i.e., produced at an opportune time within the planning cycle of the organisation).
  • There is a process in place and resources allocated to implement and follow up on the assessment’s recommendations.
  • Recommendations are realistic and feasible (for example, financially).

Resources

Examples

Guides

Overview

Websites

  • Reflect and Learn: This website presents details on the rationale and particularities of various frameworks for conducting an organisational performance assessment.

Source:
Evaluating the Performance of an Organization. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/theme/organizational_performance

]]>
Evaluation and Children https://coe-nepal.org.np/repository/evaluation-and-children/ Fri, 15 Sep 2017 10:11:48 +0000 http://repository.quackfoot.com/?p=94 […]]]>

Evaluating the impacts of programmes and policies on children presents particular challenges. Many such programmes are complex and intersectoral, their impacts are often long-term, and children have particular vulnerabilities to harm.

This page provides some links to a growing set of resources and networks to support this work. It includes information about evaluation for children – processes and measures for gathering and reporting data on behalf of children. It includes information on evaluation with children – engaging them in the process of gathering and interpreting data. And it includes information on evaluation by children – where children are engaged in decision making about the evaluation and in using its findings.

Listening to smaller voices: using an innovative participatory tool for children affected by HIV and AIDS to assess a life skills programme

This paper details the evaluation of a Life Skills programme implemented by Family Health International (FHI 360), India. The evaluator and author, Sonal Zaveri, describes the evaluation process used to determine how the programme had changed (or not) the lives of children who were infected, orphaned, affected or vulnerable to HIV.

View paper

Children and Evaluation: A webinar from BetterEvaluation and Community of Evaluators

 

Guides

  • Monitoring and Evaluating with Children: This guide, written by Grazyna Bonati for Plan Togo, provides guidance on including children in the monitoring and evaluation of development programs. It outlines the steps that need to be taken to involve children in each stage of the evaluation and describes specific examples of techniques that can be used to engage children in the process.
  • Child-to-Child: A Practical Guide Empowering Children as Active Citizens: This guide, written by Sara Gibbs, Gillian Mann and Nicola Mathers, outlines the Child-to-Child (CtC) approach to health and community development that is led by children. It describes the step-by-step process for implementing CtC and includes a range of tools and techniques for evaluation.

Frameworks

Toolkits

Examples

  • Evaluations of the Building Skills for Life programme in Cambodia, Zimbabwe and Kenya: This series of three reports presents the results of three child-led evaluations of a multi-sectoral programme seeking to empower adolescent girls and address the challenges they face in accessing quality education. The reports describe the process by which child beneficiaries of the programme selected evaluation questions, and collected and analysed data in order to deliver an assessment of the programme’s results, effectiveness, efficiency, sustainability, relevance and equity.

Further Resources

  • Child-to-Child: Child-to-Child is an international network which works to promote children’s participation in health and education. Their website has a range of resources, many of which can be freely downloaded.
  • Child Rights International Network (CRIN): CRIN is an international children’s rights network campaigning for a change in the way children are viewed by government and society. Their website includes links to a range of resources and events related to child advocacy and rights.
  • The Humanitarian Accountability Partnership (HAP) is a global partnership of humanitarian organisations dedicated to ensuring the needs of people affected by crises are met through the promotion of a Standard on Quality and Accountability. Their website provides a range of freely downloadable toolkits and guides aimed at improving quality, performance and accountability.
  • Plan International is an international development organisation which aims to promote and protect the rights of children around the world.  Their website provides a range of freely downloadable guides focused on working with children.
  • Save the Children is an international organisation which works to improve the lives of children through improved health, education and security.   Their website provides a range of freely downloadable resources which include issue briefs, research and guides.

Source:
Evaluation and Children. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/evaluation_and_children

]]>
Evaluation of agricultural projects and programs https://coe-nepal.org.np/repository/evaluation-of-agricultural-projects-and-programs/ Fri, 15 Sep 2017 10:11:14 +0000 http://repository.quackfoot.com/?p=92 […]]]>

Like other sectors, such as health, education or public security, the agricultural sector and the activities carried out in its delivery have their own characteristics and peculiarities. From its infancy around ten thousand years ago, agriculture has been about people – how they respond to their changing environment in ways that allow them to survive, organise, develop technologies, evolve socially, and prosper. Since its modest beginnings, the agricultural sector has become increasingly complex and multi-faceted. The challenge agriculture faces today is how to feed the world’s population in an equitable manner while protecting the environment from irreversible negative changes.

In its attempt to address this multidisciplinary challenge, agriculture must span the biological and earth sciences, engineering, and the social sciences, including anthropology and economics. This means that any given agricultural development project may encompass planning, target group inclusion, research, management, soil enhancement, agronomic practices, field operations, storage, processing, and distribution. It may also have to address policy and regulatory demands, food security, the reduction of hunger, enhanced nutrition, incomes and living standards, the status and role of women, fair access to suitable agricultural land and the food harvested from it, and the protection of that land, water sources, and the broader environment.

Any single agricultural project or program is necessarily part of a highly complex, interrelated system. To deliver utility and value, evaluation in the agricultural sector must take into account contextually sensitive issues. This means that in addition to adopting the more generic evaluation approaches of the social sciences (including anthropology and economics), evaluation practice applied to agricultural research and development has also required the adaptation and development of more closely tailored options and tools to meet its particular needs.

Against this complex background, evaluation in the agricultural sector may attempt to:

  • Re-examine, in the light of project developments, the adequacy of the project logic laid out in planning and appraisal documents
  • Determine the adequacy of the project to address and overcome the situational constraints and thereby promote the desired results
  • Determine deficiencies in results – and the reasons for them – by comparing actual achievements with those expected (see the sketch after this list)
  • Assess the efficiency and effectiveness of project activities and how these were managed
  • Determine the impacts of the project – both intended and unintended
  • Examine the results of the project by comparing winners and losers
  • Determine production increases and the reasons for these
  • Examine the economic efficiency of the project
  • Present the lessons learned from project implementation and the recommendations that follow from them.
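
As a minimal illustration of the actual-versus-expected comparison flagged in the list above, the hypothetical sketch below computes achievement ratios against planned targets. The indicator names and figures are invented for the example; in practice the targets would come from the project’s planning and appraisal documents.

```python
# Hypothetical sketch: comparing actual achievements against planned targets.
# Indicator names and figures are invented for illustration.
targets = {"hectares_irrigated": 1200, "farmers_trained": 800, "yield_t_per_ha": 4.0}
actuals = {"hectares_irrigated": 950, "farmers_trained": 870, "yield_t_per_ha": 3.1}

for indicator, target in targets.items():
    achieved = actuals[indicator] / target
    flag = "" if achieved >= 1.0 else "  <- shortfall: investigate reasons"
    print(f"{indicator}: {achieved:.0%} of target{flag}")
# hectares_irrigated: 79% of target  <- shortfall: investigate reasons
# farmers_trained: 109% of target
# yield_t_per_ha: 78% of target  <- shortfall: investigate reasons
```

The ratios only surface shortfalls; as the list above notes, the evaluation must still establish the reasons behind them.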

 

References

Casley, D.J. and Kumar, K. (1987) Project Monitoring and Evaluation in Agriculture. World Bank. Johns Hopkins University Press: Baltimore and London.

Casley, D.J. and Kumar, K. (1988) The Collection, Analysis, and Use of Monitoring and Evaluation Data. World Bank.

Horton, D. et al. (1993) Monitoring and Evaluating Agricultural Research. ISNAR/CAB International, Wallingford, UK.

Horton, D. et al. (1994) Seguimiento y Evaluación de la Investigación Agropecuaria. ISNAR/CAB International. Tercer Mundo Editores, Santafé de Bogotá, Colombia.

Mackay, R. and Horton, D. (2010) Evaluating Agricultural Systems. Chapter 6, pp. 159-204 in Anderson, G., Shaping International Evaluation: A 30-Year Journey. UNIVERSALIA: Montreal and Ottawa.

Source:
Evaluation of agricultural projects and programs. (n.d.). Retrieved January 26, 2017, from http://betterevaluation.org/en/themes/agriculture

]]>