"Sections of the book engage in critical reflection on what peacebuilding effectiveness is and who gets to decide, provide practical examples and case studies of the successes and failures of assessing peacebuilding work, and support innovative strategies and tools to move the field forward. Chapters reflect a variety of perspectives on peacebuilding effectiveness and methods—quantitative, qualitative, and participatory—to evaluate peacebuilding efforts, with particular attention to approaches that center those local to the peacebuilding process. Practitioners and policymakers alike will find useful arguments and approaches for evaluating peacebuilding activities and making the case for funding such efforts." (Publisher description)
"Evaluations have two key functions: lesson learning and accountability. How well they can fulfil these tasks depends on the suitability of the evaluation design for addressing the evaluation questions of interest, and on the quality of those evaluations. Unfortunately, many evaluations suffer from flaws which reduce the confidence we can have in their findings, and their usefulness for both lesson learning and accountability. This blog lists 10 flaws that I commonly come across. Not all evaluations have these flaws; there are many excellent evaluations. But these flaws are sufficiently common to deserve attention." (Introduction)
"From the founder of empowerment evaluation (EE), a framework uniquely suited to advancing social justice causes, this book explains the theories, principles, and steps of conducting EE from scratch or within a preexisting evaluation or work plan. David M. Fetterman describes how EE enables program planners and participants to define their mission or purpose, take stock of how well they are doing, and plan for the future to achieve self-determined goals. EEs of two large programs (Feeding America and USAID/REACH) are discussed in depth; other EE case examples address such topics as raising test scores in impoverished and rural schools and bridging the digital divide in communities of color. User-friendly features include chapters on conducting EE remotely and frequently asked questions, as well as illuminating sidebars and glossaries of acronyms and concepts/terms." (Publisher description)
"The Footprint Evaluation Initiative aims to ensure that all evaluations consider environmental sustainability, regardless of whether this is an explicit objective of the project, policy or program being evaluated. This report describes four ‘thought experiments’ undertaken as part of this project. The thought experiments explored whether it is relevant, feasible and useful to consider environmental sustainability in evaluation, how this might be done, what challenges and issues it raises, and what is needed to address these. This report aims to document and share what we learned during this process, provide concrete examples of how environmental sustainability might be considered in an evaluation, and share details of our thought experiment process that others might find helpful." (Summary)
"The purpose of this review is to support education practitioners, host country government representatives, donors, implementers, non-governmental organizations (NGOs), civil society organizations, and other stakeholders in applying best practices to monitor and evaluate distance learning initiatives designed for diverse learners and implemented both within and outside of learning institutions. This review covers the four key distance learning modalities: radio/audio, television/video, mobile phone, and online learning. Printed texts, which are often developed to accompany these first four modalities, can also be a fifth modality in contexts where technology is not used. Most of the data sources were drawn from work in the primary education sub-sector. However, much of the guidance can be applied to secondary and tertiary-level distance learning. This review is also applicable to data collection in both crisis and non-crisis contexts. This review presents a roadmap that guides users through four steps of planning and designing how distance learning delivered through any of these modalities can be monitored and evaluated. Step 1: Determine the Objectives of Monitoring and Evaluating Distance Learning; Step 2: Determine What Will Be Measured (Reach, Engagement, and Outcomes); Step 3: Determine How Data Will Be Collected (In-Person or Remotely); Step 4: Determine the Methods and Approaches for Measurement. Based on emerging global evidence, this review guides users through the process of measuring the reach, engagement, and outcomes of distance learning initiatives. In addition to providing step-by-step guidance, this review provides three overarching recommendations for developing and implementing evidence-based monitoring, evaluation, and learning (MEL) plans for distance learning initiatives." (Executive summary)
"This paper presents comparative learning from the evaluation of six international development initiatives that applied various forms of Process Tracing. While these initiatives spanned diverse contexts and pursued different aims, they are connected by a common thread: all six case studies centre around efforts to influence others - often decision makers and those in power - around aspects such as practices of consultation and inclusion; public policy; and resource allocation. The paper is organized in the following manner. We first explain Process Tracing and review common definitions. Secondly, we consider the potential value added of an explicitly Bayesian approach to Process Tracing. Next, we discuss the six cases where Process Tracing was applied, noting similarities and differences. Then, we explore key practical learning emerging from the cases and insights from the use of different forms of Process Tracing across different programming contexts. These reflections are organized under four meta-themes: participation, Theory of Change, methodological decisions, and mitigating bias. Finally, we present our key recommendations, ending with practical tips targeted at practitioners and evaluators interested in applying Process Tracing, especially for initiatives falling under the ‘influencing’ umbrella." (Introduction)
"The evaluation stories revealed in the project illustrate the value of a positive approach. They emphasize the evidence of what works, or might work, and is worth continuing, scaling up, or modifying. This can be contrasted with evaluations that focus mainly on the technical problems and deficiencies present in most interventions. A positive approach has proven highly relevant for getting beyond defensive and suspicious attitudes, and instead promoting a constructive focus on possible solutions. For example, the evaluation of the initiative Strengthening the Abilities of Indigenous Women to Set up and Have a Bearing on the Implementation of Public Policies (Colombia) validated many of the approaches adopted, and pointed to the potential of adding a training component beyond the objectives originally outlined in the project. In addition, in evaluations with a positive approach, evaluators usually develop a close relationship with the actors of the intervention, understanding and supporting them. The evaluators' task in these cases is not limited to indicating what should be modified. Local actors often regard this as evaluators demonstrating ‘commitment’ to the project and its future. Participation of the actors in the collection and use of evaluation data is a powerful way of including users and beneficiaries: it allows participants to get involved and to understand the data better. An evaluation characterised by a collaborative approach leads participants to take responsibility for the evaluation, and then for the change and transformation that follows. In this way, active participation in the evaluation process helps to develop a better understanding of evaluation and contributes to commitment and use. This is illustrated by the participatory evaluation in Costa Rica, in which regional technical teams were involved and deeply interested in learning how the evaluated program worked in their area. In contrast, the higher authorities limited their participation to approving the evaluation. As a result, recommendations at the regional and local levels were applied soon after the evaluation finished, whereas general recommendations, which depended on the higher authorities, have not yet been applied." (Introduction, pages 21-22)
"This technical report introduces a set of evidence-based principles to guide evaluation practice in contexts where evaluation knowledge is collaboratively produced by evaluators and stakeholders. The data from this study evolved in four phases: two pilot phases exploring the desirability of developing a set of principles, an online questionnaire survey that drew on the expertise of practicing evaluators to identify dimensions, factors or characteristics that enhance or impede success in collaborative approaches, and finally a validation phase. The principles introduced here stem from the experiences of 320 evaluators who have engaged in collaborative approaches in a wide variety of evaluation settings and the lessons they have learned. We expect the principles to evolve over time, as evaluators learn more about collaborative approaches in context. With this in mind, we pose questions for consideration to stimulate further inquiry." (Abstract)
"[...] Amongst those agencies the following OECD DAC definition of evaluability is widely accepted and has been applied within this report: “The extent to which an activity or project can be evaluated in a reliable and credible fashion”.
Eighteen recommendations about the use of Evaluability Assessments are presented here, based on the synthesis of the literature in the main body of the report. The report is supported by annexes, which include an outline structure for Terms of Reference for an Evaluability Assessment.
An Evaluability Assessment should examine evaluability: (a) in principle, given the nature of the project design, and (b) in practice, given the availability of data to carry out an evaluation and the systems able to provide it. In addition, it should examine the likely usefulness of an evaluation. The results of an Evaluability Assessment should have consequences: for the design of an evaluation, the design of an M&E framework, or the design of the project itself. An Evaluability Assessment should not be confused with an evaluation, which should deliver the evaluative judgements about project achievements.
Many problems of evaluability have their origins in weak project design. Some of these can be addressed by engaging evaluators at the design stage, through evaluability checks or otherwise. However, project design problems are also likely to emerge during implementation, for multiple reasons. An Evaluability Assessment during implementation should include attention to project design, and it should be recognised that this may lead to a necessary re-working of the intervention logic." (Executive summary, page 1)
"The aim of the book is to convey, by systematically linking the subject areas of various analytical approaches, a holistic and theoretically grounded assessment framework as well as a methodically elaborated procedural proposal for the ex-ante evaluation of programs. The first part of the book defines the objectives that ex-ante evaluations are meant to fulfil. On the basis of these objectives, the central analytical dimensions to be considered in an ex-ante evaluation are identified. These dimensions are then brought together in a comprehensive assessment framework. In the second part, building on a scenario approach, various instruments from the field of technology assessment are presented, and their applicability within program planning processes is discussed using a hypothetical example from evaluation practice." (Publisher description)
"There has been a great deal written on why peace operations succeed or fail [...] But how are those judgments reached? By what criteria is success defined? Success for whom? Paul Diehl and Daniel Druckman explore the complexities of evaluating peace operation outcomes, providing an original, detailed framework for assessment. The authors address both the theoretical and the policy-relevant aspects of evaluation as they cover the full gamut of mission goals—from conflict mitigation, containment, and settlement to the promotion of democracy and human rights. Numerous examples from specific peace operations illustrate their discussion." (Publisher description)
"In Germany, development cooperation is the policy field with the longest tradition of evaluation. All of the larger German development cooperation organizations use evaluation, though to very different degrees both quantitatively and qualitatively. This two-volume study, commissioned by the Federal Ministry for Economic Cooperation and Development (BMZ), examines with considerable methodological effort how the individual development cooperation organizations evaluate, what they know about the effects of the projects and programs they fund, and whether and how the evaluation systems of the individual organizations fit together into a meaningful whole. No comparable study exists to date, either for any other policy field in Germany or for evaluation in development cooperation in any other European country." (Publisher description)
"Evaluation, as an instrument for the systematic and transparent assessment of projects, measures, programs, laws, and other objects, has gained considerably in importance in continental Europe over the past two decades. Evaluation activities are being professionalized on both the supply and the demand side, as evidenced by the founding of professional societies, the creation of specific training and continuing education programs, and the establishment of professional standards. This edited volume reflects the state of development and the performance profile of evaluation in Germany, Austria, and Switzerland. Renowned authors familiar with the situation in each country contribute to ten thematic fields: agricultural policy, labor market policy, education, energy and environmental policy, development cooperation, research and technology, health, institutional politics, spatial development policy, and social work. Cross-country comparative contributions work out commonalities and differences by theme. These forty contributions are complemented by cross-cutting chapters on the institutionalization and use of evaluation in the three countries. 'Expansion, diversity, and divergence of evaluation' is the conclusion of the overarching thematic comparison in the closing chapter." (Publisher description)