"The book begins with an overview of the evaluation field and program evaluation standards, and proceeds to cover the most widely used evaluation approaches. With new evaluation designs and the inclusion of the latest literature from the field, this second edition is an essential update for professi
...
onals and students who want to stay current. Understanding and choosing evaluation approaches is critical to many professions, and Evaluation Theory, Models, and Applications, Second Edition is the benchmark evaluation guide. Authors Daniel L. Stufflebeam and Chris L. S. Coryn, widely considered experts in the evaluation field, introduce and describe 23 program evaluation approaches, including, new to this edition, transformative evaluation, participatory evaluation, consumer feedback, and meta-analysis. Evaluation Theory, Models, and Applications, Second Edition facilitates the process of planning, conducting, and assessing program evaluations. The highlighted evaluation approaches include: Experimental and quasi-experimental design evaluations Daniel L. Stufflebeam's CIPP Model Michael Scriven's Consumer-Oriented Evaluation Michael Patton's Utilization-Focused Evaluation Robert Stake's Responsive/Stakeholder-Centered Evaluation Case Study Evaluation Key readings listed at the end of each direct readers to the most important references for each topic. Learning objectives, review questions, student exercises, and instructor support materials complete the collection of tools. Choosing from evaluation approaches can be an overwhelming process, but Evaluation Theory, Models, and Applications, Second Edition updates the core evaluation concepts with the latest research, making this complex field accessible in just one book." (Publisher description)
"The BetterEvaluation Rainbow Framework prompts you to think about a series of key questions. It is important to consider all these issues, including reporting, at the beginning of an evaluation. The Framework can be used to plan an evaluation or to locate information about particular types of metho
...
ds. An expanded version of the framework showing options or methods for each question can be downloaded from our website: http://betterevaluation.org/plan." (Introduction)
"A comprehensive, practical guide to the on-the-ground tasks of evaluating and monitoring democracy assistance programs, from planning and implementation to preparing and presenting evaluation reports." (Publisher description)
"[...] Amongst those agencies the following OECD DAC definition of evaluability is widely accepted and has been applied within this report: “The extent to which an activity or project can be evaluated in a reliable and credible fashion”.
Eighteen recommendations about the use of Evaluability Assessments are presented here, based on the synthesis of the literature in the main body of the report. The report is supported by annexes, which include an outline structure for Terms of Reference for an Evaluability Assessment.
An Evaluability Assessment should examine evaluability: (a) in principle, given the nature of the project design, and (b) in practice, given data availability to carry out an evaluation and the systems able to provide it. In addition, it should examine the likely usefulness of an evaluation. Results of an Evaluability Assessment should have consequences: for the design of an evaluation, the design of an M&E Framework, or the design of the project itself. An Evaluability Assessment should not be confused with an evaluation (which should deliver the evaluative judgements about project achievements).
Many problems of evaluability have their origins in weak project design. Some of these can be addressed by engagement of evaluators at the design stage, through evaluability checks or otherwise. However, project design problems are also likely to emerge during implementation, for multiple reasons. An Evaluability Assessment during implementation should include attention to project design, and it should be recognised that this may lead to a necessary re-working of the intervention logic." (Executive summary, page 1)
"Utilization Focused Evaluation (UFE) facilitates a learning process in which people in the real world apply evaluation findings and experiences to their work. The focus is on intended users. UFE does not prescribe any specific content, method, or theory. It is a guiding framework, rather than a met
...
hodology. UFE can include a wide variety of evaluation methods within an overall participatory paradigm. Decision making, in consultation with those who can benefit from the evaluation, is an important part of the process. Intended users will more likely utilize an evaluation in which they have ownership. This Primer is for practitioner evaluators and project implementers who have heard of UFE and are keen to test-drive the approach." (Lead)
"When done well, evaluation for learning can help grantmakers, their grantees and their partners improve outcomes on the ground in real time. But doing it well requires that we work with key stakeholders to develop the leadership, the strategies and the systems that facilitate true learning.
1. LEAD. Create a culture where evaluation is an everyday priority and where it supports and advances continuous learning. Build commitment to evaluation for learning from your board and staff leaders and create spaces for key stakeholders to reflect on your work. (Page 7)
2. PLAN. Develop a framework to ensure you, your grantees and your partners are “evaluating with a purpose.” Determine what your stakeholders need to understand in order to do a better job and develop ways that ensure everyone is gaining this knowledge on an ongoing basis. (Page 12)
3. ORGANIZE. Ensure you and your grantees have the necessary infrastructure to support your plan. This means establishing the right skills, processes and technology to make evaluation for learning an ongoing priority. (Page 16)
4. SHARE. Collaborate with grantees, grantmaking colleagues and others to ensure that evaluation is producing meaningful results. Involve grantees and partners when developing or reviewing strategies, share lessons on an ongoing basis with key audiences and engage in open relationships with grantees to support learning. (Page 23)
The goal of this guide is to provide grantmakers with ideas and insights so they can develop and strengthen their capacities in each of these four areas. Each section presents key action steps for grantmakers, along with examples of a variety of grantmakers engaged in this work. The fictional story of Anytown Foundation also illustrates how a foundation might build the four essential evaluation elements." (Publisher description)
"Based on Michael Quinn Patton's Utilization Focused Evaluation, this briefer and more condensed 'essentials' book provides both an overall framework and concrete advice for how to conduct useful evaluations that actually get used. This book integrates both theory and practice and is not only based
...
on research, but also professional experience." (Publisher description)
"This guidance aims to help improve programme design and management and strengthen the use of evaluation in order to enhance the quality of conflict prevention and peacebuilding work. It seeks to guide policy makers and country partners, field and programme officers, evaluators and other stakeholder
...
s engaged in settings of conflict and fragility by supporting a better, shared understanding of the role and utility of evaluations, outlining key dimensions of planning for them, setting them up, and carrying them out. This guidance is to be used for assessing activities (policies, programmes, strategies or projects) in settings of violent conflict or state fragility, such as peacebuilding and conflict prevention work and development and humanitarian activities that may or may not have specific peace-related objectives. This encompasses the work of local, national, regional and non-governmental actors, in addition to development co-operation activities. The central principles and concepts in this guidance, including conflict sensitivity and the importance of understanding and testing underlying theories about what is being done and why, are applicable to a range of actors." (Executive summary, page 8)
"Developmental evaluation offers a powerful approach to monitoring and supporting social innovations by working in partnership with program decision makers. In this book, eminent authority Michael Quinn Patton shows how to conduct evaluations within a developmental evaluation framework. Patton draws
...
on insights about complex dynamic systems, uncertainty, nonlinearity, and emergence. He illustrates how developmental evaluation can be used for a range of purposes: ongoing program development, adapting effective principles of practice to local contexts, generating innovations and taking them to scale, and facilitating rapid response in crisis situations. Students and practicing evaluators will appreciate the book's extensive case examples and stories, cartoons, clear writing style, "closer look" sidebars, and summary tables." (Publisher description)
"This guide will take you through the essential steps for designing an evaluation of your community information project. These steps explain what to do and consider at different stages of the evaluation process: 1. Describe your project and identify your target audience. 2. Identify the evaluation
...
s purpose and key questions. 3. Design the evaluation using effective methods. 4. Communicate and report the evaluation findings to make decisions and take action. We have included tips, tools and examples from community information projects that are currently being implemented by several grantees of the John S. and James L. Knight Foundation’s Community Information Challenge (KCIC)." (Introduction, page 4)
"The Guide presents principles, minimum standards and best practices, business processes, references and tools deemed important for effective, efficient, and sustainable organizations. The Guide consists of ten chapters that cover the key functional areas of most organizations. Each chapter (and ind
...
eed each step and process within each chapter) can be used as a stand-alone document. With the exception of Chapter 3, Health and Human Services Regulations, the chapters in the Guide can be used by many different types of organizations, in many sectors (such as agriculture, health, peacebuilding, water supply, nutrition, education, or environment.) CRS offers the Guide as an adaptable tool which may be used to develop new, or strengthen existing, policies, processes and practices." (About the guide, page 7)
"Too often evaluations are shelved, with very little done to bring about change within organisations. This guide will explain how you can make your evaluations more useful. It will help you to better understand some conceptual issues and appreciate how evaluations contribute to empowering stakeholde
...
rs. This practical guide brings together evaluation concepts, methods and tools that work well in the field and presents core principles for guiding evaluations that matter; provides a framework for designing and facilitating evaluations; shows you how to get your primary intended users and other key stakeholders to contribute effectively to the evaluation process; offers ideas for turning evaluations into learning processes. Making evaluations matter to the primary intended users of development programmes is at the heart of this book – a must-read for evaluators, commissioners, monitoring and evaluation officers and key stakeholders within the international development sector." (Back cover)
"This book off ers an accessible introduction to the topic of impact evaluation and its practice in development. Although the book is geared principally toward development practitioners and policy makers, we trust that it will be a valuable resource for students and others interested in impact evalu
...
ation. Prospective impact evaluations assess whether or not a program has achieved its intended results or test alternative strategies for achieving those results. We consider that more and better impact evaluations will help strengthen the evidence base for development policies and programs around the world. Our hope is that if governments and development practitioners can make policy decisions based on evidence—including evidence generated through impact evaluation—development resources will be spent more eff ectively to reduce poverty and improve people’s lives. The three parts in this handbook provide a nontechnical introduction to impact evaluations, discussing what to evaluate and why in part 1; how to evaluate in part 2; and how to implement an evaluation in part 3. These elements are the basic tools needed to successfully carry out an impact evaluation. The approach to impact evaluation in this book is largely intuitive, and we attempt to minimize technical notation. We provide the reader with a core set of impact evaluation tools—the concepts and methods that underpin any impact evaluation—and discuss their application to real-world development operations. The methods are drawn directly from applied research in the social sciences and share many commonalities with research methods used in the natural sciences. In this sense, impact evaluation brings the empirical research tools widely used in economics and other social sciences together with the operational and political-economy realities of policy implementation and development practice.
From a methodological standpoint, our approach to impact evaluation is largely pragmatic: we think that the most appropriate methods should be identified to fit the operational context, and not the other way around. This is best achieved at the outset of a program, through the design of prospective impact evaluations that are built into the project’s implementation. We argue that gaining consensus among key stakeholders and identifying an evaluation design that fits the political and operational context are as important as the method itself. We also believe strongly that impact evaluations should be candid about their limitations and caveats. Finally, we strongly encourage policy makers and program managers to consider impact evaluations in a logical framework that clearly sets out the causal pathways by which a program works to produce outputs and influence final outcomes, and to combine impact evaluations with monitoring and complementary evaluation approaches to gain a full picture of performance." (Preface)
"Agency-based Program Evaluation: Lessons from Practice, by Stephen A. Kapp and Gary R. Anderson, serves as a core textbook in the advanced undergraduate and graduate social work program evaluation courses. It combines the methodology of program evaluation with the reality of working with agencies a
...
nd organizations to describe the effectiveness of their services and programs. Students will gain an understanding of the political and social context and pressures in which a program is developed, implemented and evaluated. This book offers a practice-oriented approach to evaluation. While many program evaluation methods texts often add a or brief sections that describe organizational and political factors, this book begins with the context of an agency-based evaluation and describes the method within that context. Students will gain a more complete understanding of this contextual challenge and will learn techniques for operating in the face of these challenges." (Publisher description)
"Although the argument for evaluating advocacy is convincing, advocacy has long been considered “too hard to measure,” and so far relatively few advocates, funders, or evaluators have taken on the challenge. But this is now changing. Interest in advocacy evaluation is surging and cutting-edge ad
...
vocates are embracing evaluation as a critical part of their work. The main barrier preventing more organizations from using evaluation is a lack of familiarity with how to think about and design evaluations of advocacy efforts that are useful, manageable, and resource-efficient. Even knowing where to start can be a challenge. This tool was developed to help address that gap in knowledge. It guides users through four basic steps of advocacy evaluation planning." (Page 3)