"Utilization Focused Evaluation (UFE) facilitates a learning process in which people in the real world apply evaluation findings and experiences to their work. The focus is on intended users. UFE does not prescribe any specific content, method, or theory. It is a guiding framework, rather than a met
...
hodology. UFE can include a wide variety of evaluation methods within an overall participatory paradigm. Decision making, in consultation with those who can benefit from the evaluation, is an important part of the process. Intended users will more likely utilize an evaluation in which they have ownership. This Primer is for practitioner evaluators and project implementers who have heard of UFE and are keen to test-drive the approach." (Lead)
"When done well, evaluation for learning can help grantmakers, their grantees and their partners improve outcomes on the ground in real time. But doing it well requires that we work with key stakeholders to develop the leadership, the strategies and the systems that facilitate true learning.
1. LEAD. Create a culture where evaluation is an everyday priority and where it supports and advances continuous learning. Build commitment to evaluation for learning from your board and staff leaders and create spaces for key stakeholders to reflect on your work. (Page 7)
2. PLAN. Develop a framework to ensure you, your grantees and your partners are “evaluating with a purpose.” Determine what your stakeholders need to understand in order to do a better job and develop ways that ensure everyone is gaining this knowledge on an ongoing basis. (Page 12)
3. ORGANIZE. Ensure you and your grantees have the necessary infrastructure to support your plan. This means establishing the right skills, processes and technology to make evaluation for learning an ongoing priority. (Page 16)
4. SHARE. Collaborate with grantees, grantmaking colleagues and others to ensure that evaluation is producing meaningful results. Involve grantees and partners when developing or reviewing strategies, share lessons on an ongoing basis with key audiences and engage in open relationships with grantees to support learning. (Page 23)
The goal of this guide is to provide grantmakers with ideas and insights so they can develop and strengthen their capacities in each of these four areas. Each section presents key action steps for grantmakers, along with examples of a variety of grantmakers engaged in this work. The fictional story of Anytown Foundation also illustrates how a foundation might build the four essential evaluation elements." (Publisher description)
"Mixed methods (MM) evaluations seek to integrate social science disciplines with predominantly quantitative (QUANT) and predominantly qualitative (QUAL) approaches to theory, data collection, data analysis and interpretation. The purpose is to strengthen the reliability of data, validity of the fin
...
dings and recommendations, and to broaden and deepen our understanding of the processes through which program outcomes and impacts are achieved, and how these are affected by the context within which the program is implemented. While mixed methods are now widely used in program evaluation, and evaluation RFPs frequently require their use, many evaluators do not utilize the full potential of the MM approach."
"This guide will take you through the essential steps for designing an evaluation of your community information project. These steps explain what to do and consider at different stages of the evaluation process: 1. Describe your project and identify your target audience. 2. Identify the evaluation
...
s purpose and key questions. 3. Design the evaluation using effective methods. 4. Communicate and report the evaluation findings to make decisions and take action. We have included tips, tools and examples from community information projects that are currently being implemented by several grantees of the John S. and James L. Knight Foundation’s Community Information Challenge (KCIC)." (Introduction, page 4)
"Although the argument for evaluating advocacy is convincing, advocacy has long been considered “too hard to measure,” and so far relatively few advocates, funders, or evaluators have taken on the challenge. But this is now changing. Interest in advocacy evaluation is surging and cutting-edge ad
...
vocates are embracing evaluation as a critical part of their work. The main barrier preventing more organizations from using evaluation is a lack of familiarity with how to think about and design evaluations of advocacy efforts that are useful, manageable, and resource-efficient. Even knowing where to start can be a challenge. This tool was developed to help address that gap in knowledge. It guides users through four basic steps of advocacy evaluation planning." (Page 3)
"This guide aims to provide practitioners with a broad framework for carrying out project level Participatory Impact Assessments (PIA) of livelihoods interventions in the humanitarian sector. Other than in some health, nutrition, and water interventions in which indicators of project performance sho
...
uld relate to international standards, for many interventions there are no ‘gold standards’ for measuring project impact. For example, the Sphere handbook has no clear standards for food security or livelihoods interventions. This guide aims to bridge this gap by outlining a tried and tested approach to measuring the impact of livelihoods projects. The guide does not attempt to provide a set of standards or indicators or blueprint for impact assessment, but a broad and flexible framework which can be adapted to different contexts and project interventions. Consistent with this, the proposed framework does not aim to provide a rigid or detailed step by step formula, or set of tools to carry out project impact assessments, but describes an eight stage approach, and presents examples of tools which may be adapted to different contexts. One of the objectives of the guide is to demonstrate how PIA can be used to overcome some of the inherent weaknesses in conventional humanitarian monitoring evaluation and impact assessment approaches, such as; the emphasis on measuring process as opposed to real impact, the emphasis on external as opposed to community based indicators of impact, and how to overcome the issue of weak or non-existent baselines. The guide also aims to demonstrate and provide examples of how participatory methods can be used to overcome the challenge of attributing impact or change to actual project activities. The guide will also demonstrate how data collected from the systematic use of participatory tools can be presented numerically, and can give representative results and provide evidence based data on project impact." (Introduction, page 6)
"This handbook can stand alone as a guide on how to design and construct a results-based M&E system in the public sector. It can also be used in conjunction with a workshop developed at the World Bank entitled “Designing and Building a Results-Based Monitoring and Evaluation System: A Tool for Pub
...
lic Sector Management.” The goal of the handbook is to help prepare you to plan, design, and implement a results-based M&E system within your organization. In addition, the handbook will also demonstrate how an M&E system can be a valuable tool in supporting good public management. The focus of the handbook is on a comprehensive ten-step model that will help guide you through the process of designing and building a results-based M&E system. These steps will begin with a “Readiness Assessment” and will take you through the design, management, and, importantly, the sustainability of your M&E system. The handbook will describe these steps in detail, the tasks needed to complete them, and the tools available to help you along the way." (Preface, page xii)
"This publication, Guidelines for Evaluators, is the first in the companion series to the Handbook on Monitoring and Evaluating for Results. The Handbook provides an overall context for results-based monitoring and evaluation and reviews tools and techniques for planning and managing monitoring and
...
evaluation activities. The Handbook, however, is primarily directed to programme managers. This publication is specifically for those who actually conduct evaluations. It complements the Handbook by providing guidance on outcome evaluations as they relate to the evaluators. Guidelines for Evaluators presents a conceptual review of outcome evaluation, identifies key differences between outcome and project evaluations, and provides a framework methodology for conducting outcome evaluations. It also includes a sample outline for an outcome evaluation report." (Introduction, page 5)
"The most important point for all types of workshops is that the participation of the representatives of all relevant stakeholder groups is ensured. This goes beyond mere discussion: at some stage there is always the necessity to make binding decisions. At this point, the workshop participants must
...
be empowered by their respective organisations and/or groups (constituencies) to make such binding decisions (commitments concerning the use of people, materials, equipment, time, and money).
A second important point for all types of workshops is the recommended use of a team of two facilitators. Years of experience in the context of organisations in the field of development have clearly demonstrated the advantage of having two facilitators rather than only one: a single facilitator simply cannot keep track of all the details of the group processes and, at the same time, keep the discussion focussed along the lines of the previously agreed agenda. Therefore, it is essential that the two facilitators compare their perceptions in the breaks between sessions and take turns in facilitating. Their perceptions thus gain in objectivity and their activity is less influenced by the emotional and cognitive strain that group processes invariably produce.
It should also be obvious that the less the facilitators are directly involved in the project under discussion, the more efficiently they will work. The more they are "outsiders," the more impartial they can be towards the expression of (sometimes diverging) interests in the processes of discussion and negotiation that are the essence of the MAPA-PROJECT workshops. This will not only increase the trust of all participants in the results of the workshop (i.e. the project plan), it will also enhance the credibility of this plan in the eyes of outside organisations, such as a funding organisation. For the same reason it is often advisable to conduct the workshop on "neutral grounds" (i.e., in a location different from that of the organisation which will be running the project)." (Overview, page 21)
"Written in an easy language, this manual offers the reader clear action steps to be taken: the planning phase (identifying stakeholders, developing evaluation questions, budgeting, selecting an evaluator), the implementation (determining data-collection methods, collecting data, analyzing and inter
...
preting data) and the utilisation phase (communicating findings and utilising the process and results)." (commbox)