"There are various aspects of evaluation reporting that can affect how information is used. Stakeholder needs, the evaluation purpose, and target audience should be considered when communicating results. Evaluation reporting should not only identify what, when, how, and to what extent information sh
...
ould be shared but take into account how information might be received and used. In a 2006 survey of American Evaluation Association members, 68% self-reported that their evaluation results were not used. Findings such as this suggest a greater need for evaluation results to make it off the bookshelf and into the hands of intended audiences. Similarly in the CDC Framework for Program Evaluation, the “utility evaluation standard” charges evaluators to carry out evaluations that lead to actionable findings for intended users. This commitment to conducting evaluations that improve the lives of participants serves as the inspiration for this guide." (Introduction)
"The goal of this document is to improve the effectiveness of DFID programmes and the measurement of their impacts by providing DFID Advisers with the practical skills to develop high quality theories of change, to understand the role they play in programme design and assessment. It is intended for
...
DFID advisors to more clearly and explicitly articulate their theories of change as a means of improving the effectiveness of interventions. Part I first explores the fundamentals of theories of change: what they are, why they are important, and how to create a theory of change. It explores theories of change at different levels, and concludes with advice on how theories of change can enhance the effectiveness and relevance of programming. Part II continues to build upon Part I by focusing on how theories of change can be used in the monitoring and evaluation stages of the project cycle. It provides practical guidance on how and why to use theories of change-focused monitoring and evaluation strategies, particularly exploring the ways in which theories of change can be included in any evaluation approach." (Document summary, page 3)
"Utilization Focused Evaluation (UFE) facilitates a learning process in which people in the real world apply evaluation findings and experiences to their work. The focus is on intended users. UFE does not prescribe any specific content, method, or theory. It is a guiding framework, rather than a met
...
hodology. UFE can include a wide variety of evaluation methods within an overall participatory paradigm. Decision making, in consultation with those who can benefit from the evaluation, is an important part of the process. Intended users will more likely utilize an evaluation in which they have ownership. This Primer is for practitioner evaluators and project implementers who have heard of UFE and are keen to test-drive the approach." (Lead)
"Liliana Rodríguez-Campos and Rigoberto Rincones-Gómez present their Model for Collaborative Evaluations (MCE) with its six major components: identify the situation, clarify the expectations, establish a collective commitment, ensure open communication, encourage effective practices, and follow sp
...
ecific guidelines. Fully updated to reflect the state-of-the-art in the field, each core chapter addresses one component of the model, providing step-by-step guidance, as well as helpful tips for successful application. To further demonstrate the utility of the MCE, this new edition includes recurring vignettes about several evaluators and clients, illustrating frequent questions and specific challenges that arise when evaluators take a collaborative approach. Drawing on a wide range of collaborative evaluations conducted in the business, nonprofit, and education sectors, this precise and easy-to-understand guide is ideal for students and practitioners who want to use its tools immediately." (Publisher description)
"Quality assurance (QA) systems applied in educational contexts are generally concerned with inputs — how much money is spent, what staffing, resources and support are provided, what kinds of teaching and learning are involved, and so on. There is an assumption — not always fulfilled — that th
...
e higher the standards of the inputs, the higher the quality of the outputs. In this toolkit, we propose a different approach: the evaluation of the programmes’ outcomes, outputs and impacts. We examine the differences between informal and self-directed learning, nonformal education and training (NFE) and formal education; provide examples of NFE programmes using a variety of face-to-face, distance education and technology-based teaching and learning methods; examine the approaches to QA that are required in NFE; consider the outputs, outcomes and impacts that can be achieved in NFE programmes; propose the adoption of a rigorous but simple-to-use QA framework which is based on outputs, outcomes and impacts." (Back cover)
"This first guidance note, Introduction to Impact Evaluation, provides an overview of impact evaluation, explaining how impact evaluation differs from – and complements – other types of evaluation, why impact evaluation should be done, when and by whom. It describes different methods, approaches
...
and designs that can be used for the different aspects of impact evaluation: clarifying values for the evaluation, developing a theory of how the intervention is understood to work, measuring or describing impacts and other important variables, explaining why impacts have occurred, synthesizing results, and reporting and supporting use. The note discusses what is considered good impact evaluation – evaluation that achieves a balance between the competing imperatives of being useful, rigorous, ethical and practical – and how to achieve this. Footnotes throughout the document contain references for further reading in specific areas." (Introduction, page 1)
"The main aim of this study is to examine and document the current state of how communicators and evaluators share results of development co-operation and specifically the outcomes of evaluations, including ‘negative’ or ‘sensitive’ results. Related to this, the study will shed light on the
...
state of evaluation of the efforts undertaken by OECD aid agency communicators themselves: looking at whether, to what extent and how donors monitor measure and evaluate the effectiveness and impact of their communication strategies. First, the study will highlight key trends that have shaped the communication context for development evaluation. Second, and in separate sections, it will reflect on the evaluation and communication perspective around communicating results. For each of the two disciplines, and with reference to results of recent surveys, the paper will reflect on questions of definition, mandate, track record, challenges, and status of collaboration, and reference examples of emerging good and bad practice. Third, it will highlight the dynamics around communication of ‘negative’ or ‘sensitive’ evidence, identified by both evaluators and communicators as among the biggest challenges to be addressed. Fourth, it will look at how systematically agency communication strategies and initiatives are evaluated, exploring the extent to which evaluators and communicators work together in assessing these strategies. Fifth, the study will reflect on the experience to date in involving partner countries in communicating evaluation results, before concluding with a series of proposals aimed at improving collaboration between evaluators and communicators." (Introduction)
"When done well, evaluation for learning can help grantmakers, their grantees and their partners improve outcomes on the ground in real time. But doing it well requires that we work with key stakeholders to develop the leadership, the strategies and the systems that facilitate true learning.
1. LEAD. Create a culture where evaluation is an everyday priority and where it supports and advances continuous learning. Build commitment to evaluation for learning from your board and staff leaders and create spaces for key stakeholders to reflect on your work. (Page 7)
2. PLAN. Develop a framework to ensure you, your grantees and your partners are “evaluating with a purpose.” Determine what your stakeholders need to understand in order to do a better job and develop ways that ensure everyone is gaining this knowledge on an ongoing basis. (Page 12)
3. ORGANIZE. Ensure you and your grantees have the necessary infrastructure to support your plan. This means establishing the right skills, processes and technology to make evaluation for learning an ongoing priority. (Page 16)
4. SHARE. Collaborate with grantees, grantmaking colleagues and others to ensure that evaluation is producing meaningful results. Involve grantees and partners when developing or reviewing strategies, share lessons on an ongoing basis with key audiences and engage in open relationships with grantees to support learning. (Page 23)
The goal of this guide is to provide grantmakers with ideas and insights so they can develop and strengthen their capacities in each of these four areas. Each section presents key action steps for grantmakers, along with examples of a variety of grantmakers engaged in this work. The fictional story of Anytown Foundation also illustrates how a foundation might build the four essential evaluation elements." (Publisher description)
"Mixed methods (MM) evaluations seek to integrate social science disciplines with predominantly quantitative (QUANT) and predominantly qualitative (QUAL) approaches to theory, data collection, data analysis and interpretation. The purpose is to strengthen the reliability of data, validity of the fin
...
dings and recommendations, and to broaden and deepen our understanding of the processes through which program outcomes and impacts are achieved, and how these are affected by the context within which the program is implemented. While mixed methods are now widely used in program evaluation, and evaluation RFPs frequently require their use, many evaluators do not utilize the full potential of the MM approach."
"This guidance aims to help improve programme design and management and strengthen the use of evaluation in order to enhance the quality of conflict prevention and peacebuilding work. It seeks to guide policy makers and country partners, field and programme officers, evaluators and other stakeholder
...
s engaged in settings of conflict and fragility by supporting a better, shared understanding of the role and utility of evaluations, outlining key dimensions of planning for them, setting them up, and carrying them out. This guidance is to be used for assessing activities (policies, programmes, strategies or projects) in settings of violent conflict or state fragility, such as peacebuilding and conflict prevention work and development and humanitarian activities that may or may not have specific peace-related objectives. This encompasses the work of local, national, regional and non-governmental actors, in addition to development co-operation activities. The central principles and concepts in this guidance, including conflict sensitivity and the importance of understanding and testing underlying theories about what is being done and why, are applicable to a range of actors." (Executive summary, page 8)
"Based on Michael Quinn Patton's Utilization Focused Evaluation, this briefer and more condensed 'essentials' book provides both an overall framework and concrete advice for how to conduct useful evaluations that actually get used. This book integrates both theory and practice and is not only based
...
on research, but also professional experience." (Publisher description)
"The Nonprofit Outcomes Toolbox identifies stages in the use of outcomes and shows you how to use specific facets of existing outcome models to improve performance and achieve meaningful results. Going beyond the familiar limits of the sector, this volume also illustrates how tools and approaches lo
...
ng in use in the corporate sector can be of great analytical and practical use to nonprofit, philanthropic, and governmental organizations." (Publisher description)
"Developmental evaluation offers a powerful approach to monitoring and supporting social innovations by working in partnership with program decision makers. In this book, eminent authority Michael Quinn Patton shows how to conduct evaluations within a developmental evaluation framework. Patton draws
...
on insights about complex dynamic systems, uncertainty, nonlinearity, and emergence. He illustrates how developmental evaluation can be used for a range of purposes: ongoing program development, adapting effective principles of practice to local contexts, generating innovations and taking them to scale, and facilitating rapid response in crisis situations. Students and practicing evaluators will appreciate the book's extensive case examples and stories, cartoons, clear writing style, "closer look" sidebars, and summary tables." (Publisher description)
"This book offers an accessible introduction to the topic of impact evaluation and its practice in development. Although the book is geared principally toward development practitioners and policy makers, we trust that it will be a valuable resource for students and others interested in impact evalua
...
tion. Prospective impact evaluations assess whether or not a program has achieved its intended results or test alternative strategies for achieving those results. We consider that more and better impact evaluations will help strengthen the evidence base for development policies and programs around the world [...] The three parts in this handbook provide a nontechnical introduction to impact evaluations, discussing what to evaluate and why in part 1; how to evaluate in part 2; and how to implement an evaluation in part 3. These elements are the basic tools needed to successfully carry out an impact evaluation." (Preface)
"This guide will take you through the essential steps for designing an evaluation of your community information project. These steps explain what to do and consider at different stages of the evaluation process: 1. Describe your project and identify your target audience. 2. Identify the evaluation
...
s purpose and key questions. 3. Design the evaluation using effective methods. 4. Communicate and report the evaluation findings to make decisions and take action. We have included tips, tools and examples from community information projects that are currently being implemented by several grantees of the John S. and James L. Knight Foundation’s Community Information Challenge (KCIC)." (Introduction, page 4)
"Too often evaluations are shelved, with very little done to bring about change within organisations. This guide will explain how you can make your evaluations more useful. It will help you to better understand some conceptual issues and appreciate how evaluations contribute to empowering stakeholde
...
rs. This practical guide brings together evaluation concepts, methods and tools that work well in the field and presents core principles for guiding evaluations that matter; provides a framework for designing and facilitating evaluations; shows you how to get your primary intended users and other key stakeholders to contribute effectively to the evaluation process; offers ideas for turning evaluations into learning processes. Making evaluations matter to the primary intended users of development programmes is at the heart of this book – a must-read for evaluators, commissioners, monitoring and evaluation officers and key stakeholders within the international development sector." (Back cover)
"This book off ers an accessible introduction to the topic of impact evaluation and its practice in development. Although the book is geared principally toward development practitioners and policy makers, we trust that it will be a valuable resource for students and others interested in impact evalu
...
ation. Prospective impact evaluations assess whether or not a program has achieved its intended results or test alternative strategies for achieving those results. We consider that more and better impact evaluations will help strengthen the evidence base for development policies and programs around the world. Our hope is that if governments and development practitioners can make policy decisions based on evidence—including evidence generated through impact evaluation—development resources will be spent more eff ectively to reduce poverty and improve people’s lives. The three parts in this handbook provide a nontechnical introduction to impact evaluations, discussing what to evaluate and why in part 1; how to evaluate in part 2; and how to implement an evaluation in part 3. These elements are the basic tools needed to successfully carry out an impact evaluation. The approach to impact evaluation in this book is largely intuitive, and we attempt to minimize technical notation. We provide the reader with a core set of impact evaluation tools—the concepts and methods that underpin any impact evaluation—and discuss their application to real-world development operations. The methods are drawn directly from applied research in the social sciences and share many commonalities with research methods used in the natural sciences. In this sense, impact evaluation brings the empirical research tools widely used in economics and other social sciences together with the operational and political-economy realities of policy implementation and development practice.
From a methodological standpoint, our approach to impact evaluation is largely pragmatic: we think that the most appropriate methods should be identified to fit the operational context, and not the other way around. This is best achieved at the outset of a program, through the design of prospective impact evaluations that are built into the project’s implementation. We argue that gaining consensus among key stakeholders and identifying an evaluation design that fits the political and operational context are as important as the method itself. We also believe strongly that impact evaluations should be candid about their limitations and caveats. Finally, we strongly encourage policy makers and program managers to consider impact evaluations in a logical framework that clearly sets out the causal pathways by which a program works to produce outputs and influence final outcomes, and to combine impact evaluations with monitoring and complementary evaluation approaches to gain a full picture of performance." (Preface)