"In this report, we look at the state of digital data for development and emerging trends. We aim to support German development cooperation in integrating and prioritising data approaches and investments in their work. In this study we focus on four data categories: big data, open data, citizen-generated data and real-time data. The selection of these categories considered two key dimensions: (1) the growing use in development-related policy discussions, and (2) the ability to capture key characteristics of interest, including size, access, source, and timeliness of data. We believe these categories provide a good starting point to explore how digital data production and use might lead to better development outcomes." (Executive summary)
"Patrick Rössler introduces the craft of quantitative standardised content analysis step by step, using an example study. A sample codebook with corresponding categories serves as a prototype for readers' own studies. The third edition has been fundamentally revised and expanded with chapters on automated content analysis and on research ethics. The codebook referred to in the book is available as supplementary material in the online shop." (Publisher description)
"At the W.K. Kellogg Foundation, we believe that evaluation is an effective management tool to both inform strategy development and track the progress and impact of strategy implementation. We have long been committed to supporting our grantees’ ability to derive and share lessons learned from their work. To that end, the foundation published the first Evaluation Handbook (The Kellogg Foundation, 1998) almost two decades ago to guide evaluation for our grantees. Since that time, as the discipline of evaluation grew and expanded, the demand for evaluation has risen. More and more nonprofit leaders and practitioners strive to design evidence-based programs, and more and more funders require their grantees to provide evidence to demonstrate the success of their funded work. The democratization of evaluation makes it necessary that evaluation is both rigorous and practical. How to achieve that balance motivated us to update the handbook. Over the years, the foundation has learned a lot from our grantees about the challenges of evaluation. This handbook is our continuous effort to demystify evaluation and facilitate its use, for the foundation’s grantees and for all organizations committed to learning and strengthening their work. It is designed for people who have little to no exposure to formal evaluation training and provides a starting point for them as they consider evaluating their work. It is intended to help them become more informed consumers of evaluation. Evaluations can be simple or extensive depending on the scope and complexity of the work being evaluated. The scope of the evaluation could potentially include a single program, a multi-site initiative, or a multifaceted strategy aimed at systems and community change. Regardless of the complexity of the effort, the basics for evaluating it remain the same, and this handbook was written to impart information about these basics." (Foreword)
"In July 2015, Internews launched Open Mic Nepal, a project designed to track and debunk rumors in the earthquake-affected communities. Based on previous pilots of this approach in Gaza and Liberia, the project set out to assess and address information needs by using minimally structured qualitative data-gathering approaches to surface trends in community conversations, identify key concerns, misunderstandings and toxic/corrupted information, and to redress them with the provision of reliable and verified information as speedily as possible." (Page 2)
"The Handbook of Practical Program Evaluation provides tools for managers and evaluators to address questions about the performance of public and nonprofit programs. Neatly integrating authoritative, high-level information with practicality and readability, this guide gives you the tools and processes you need to analyze your program's operations and outcomes more accurately. This new fourth edition has been thoroughly updated and revised, with new coverage of the latest evaluation methods, including: culturally responsive evaluation; adopting designs and tools to evaluate multi-service community change programs; using role playing to collect data; using cognitive interviewing to pre-test surveys; and coding qualitative data. You'll discover robust analysis methods that produce a more accurate picture of program results, and learn how to trace causality back to the source to see how much of the outcome can be directly attributed to the program. Written by award-winning experts at the top of the field, this book also contains contributions from the leading evaluation authorities among academics and practitioners to provide the most comprehensive, up-to-date reference on the topic. Valid and reliable data constitute the bedrock of accurate analysis, and since funding relies more heavily on program analysis than ever before, you cannot afford to rely on weak or outdated methods. This book gives you expert insight and leading-edge tools that help you paint a more accurate picture of your program's processes and results, including: obtaining valid, reliable, and credible performance data; engaging and working with stakeholders to design valuable evaluations and performance monitoring systems; assessing program outcomes and tracing desired outcomes to program activities; and providing robust analyses of both quantitative and qualitative data." (Publisher description)
"Data quality is a cornerstone of accountability in program reporting. In the international development sector, although we are often focused on reporting, ensuring the quality of the data that we report is critical for our partners, our donors, and our beneficiaries. In addition, Data Quality Management Plans and Routine Data Quality Assessments are both important elements of Pact’s Results and Measurement Standards. The intent of this manual is to provide guidance on how to ensure excellent data quality in all our programming. A slide set accompanying the module provides an opportunity to engage in practical exercises to test the skills outlined in this text." (Foreword)
"The book begins with an overview of the evaluation field and program evaluation standards, and proceeds to cover the most widely used evaluation approaches. With new evaluation designs and the inclusion of the latest literature from the field, this second edition is an essential update for professionals and students who want to stay current. Understanding and choosing evaluation approaches is critical to many professions, and Evaluation Theory, Models, and Applications, Second Edition is the benchmark evaluation guide. Authors Daniel L. Stufflebeam and Chris L. S. Coryn, widely considered experts in the evaluation field, introduce and describe 23 program evaluation approaches, including, new to this edition, transformative evaluation, participatory evaluation, consumer feedback, and meta-analysis. Evaluation Theory, Models, and Applications, Second Edition facilitates the process of planning, conducting, and assessing program evaluations. The highlighted evaluation approaches include: experimental and quasi-experimental design evaluations; Daniel L. Stufflebeam's CIPP Model; Michael Scriven's Consumer-Oriented Evaluation; Michael Patton's Utilization-Focused Evaluation; Robert Stake's Responsive/Stakeholder-Centered Evaluation; and case study evaluation. Key readings listed at the end of each chapter direct readers to the most important references for each topic. Learning objectives, review questions, student exercises, and instructor support materials complete the collection of tools. Choosing from evaluation approaches can be an overwhelming process, but Evaluation Theory, Models, and Applications, Second Edition updates the core evaluation concepts with the latest research, making this complex field accessible in just one book." (Publisher description)
"This book offers an accessible introduction to the topic of impact evaluation and its practice in development. Although the book is geared principally toward development practitioners and policy makers, we trust that it will be a valuable resource for students and others interested in impact evaluation. Prospective impact evaluations assess whether or not a program has achieved its intended results or test alternative strategies for achieving those results. We consider that more and better impact evaluations will help strengthen the evidence base for development policies and programs around the world. Our hope is that if governments and development practitioners can make policy decisions based on evidence—including evidence generated through impact evaluation—development resources will be spent more effectively to reduce poverty and improve people’s lives. The three parts in this handbook provide a nontechnical introduction to impact evaluations, discussing what to evaluate and why in part 1; how to evaluate in part 2; and how to implement an evaluation in part 3. These elements are the basic tools needed to successfully carry out an impact evaluation. The approach to impact evaluation in this book is largely intuitive, and we attempt to minimize technical notation. We provide the reader with a core set of impact evaluation tools—the concepts and methods that underpin any impact evaluation—and discuss their application to real-world development operations. The methods are drawn directly from applied research in the social sciences and share many commonalities with research methods used in the natural sciences. In this sense, impact evaluation brings the empirical research tools widely used in economics and other social sciences together with the operational and political-economy realities of policy implementation and development practice.
From a methodological standpoint, our approach to impact evaluation is largely pragmatic: we think that the most appropriate methods should be identified to fit the operational context, and not the other way around. This is best achieved at the outset of a program, through the design of prospective impact evaluations that are built into the project’s implementation. We argue that gaining consensus among key stakeholders and identifying an evaluation design that fits the political and operational context are as important as the method itself. We also believe strongly that impact evaluations should be candid about their limitations and caveats. Finally, we strongly encourage policy makers and program managers to consider impact evaluations in a logical framework that clearly sets out the causal pathways by which a program works to produce outputs and influence final outcomes, and to combine impact evaluations with monitoring and complementary evaluation approaches to gain a full picture of performance." (Preface)
"The research techniques and methods discussed are applied to researching advertising, mass-media audiences, mass-media efficiency and organisational and development contexts. The research problems or issues addressed are also relevant to other communication fields, including political, government, marketing, intercultural, health and interpersonal and small-group communication, plus information and communications technology. This second edition elaborates on the application of additional measurement scales and of content analysis. It contains more practical examples of the application of scientific criteria, and it includes additional marginal notes that facilitate the comprehension of key concepts." (Publisher description)
"This is a practical and well-structured manual aiming to use self-evaluation for organisational learning. The book consists of four parts. "The evaluation context" introduces the role of monitoring, evaluation and impact assessment as part of the project cycle logic. "The evaluation process" describes steps to be taken in designing and implementing an evaluation. The third part, "evaluation tools", gives a practical insight into major evaluation methods like SWOT analysis, questionnaires (and their design), focus groups or case studies. The fourth and main part provides evaluation guidelines for training courses, newsletters, websites, small libraries and resource centres, online communities, rural radios, databases and selective dissemination of information services." (CAMECO Update 1-2011)
"This book provides a detailed introduction to the process of developing monitoring and evaluation systems which will provide a foundation on which to develop personal and organisational learning. It is based on INTRAC’s research, consultancy and training work and is rooted in real experiences. The guide will enable people who have limited experience of M&E to grasp its core concepts, and to apply them to the development of an M&E system. The guide is sufficiently concise to be read from cover to cover and will equally function as a reference work. Reference sections provide information about different approaches, tools and methodologies which will enable readers to fine-tune their approach to their particular set of needs and circumstances." (Publisher description)
"This book is the most comprehensive and accessible short guide to evaluations available. It explains clearly what evaluations are, how they can be used most effectively, and outlines the strengths and pitfalls of different evaluation methods. Each chapter comes with tasks to demonstrate the practical importance of the issues covered and to lead the reader through the steps necessary to carry out a successful evaluation [...] Stakeholder models are compared and contrasted with other models of involvement, such as participatory evaluation and practitioner-centred action research. Ethical and political considerations are placed in context. Designs for different purposes are systematically considered [...] The book is aimed at anyone who is faced with the task of doing small-scale evaluations for the first time, whether or not they have a professional background in the field." (Cover)
"A highly practical guide for development workers which aims to help them evaluate and monitor their work in a systematic way. It covers the whole process of assessment, monitoring, review and evaluation of development programmes." (Catalogue Intermediate Technology Publications 2000)