"This toolkit is designed for development practitioners, government officials, academics, monitoring and evaluation specialists, and researchers who are interested in measuring empowerment. The learning objectives of this toolkit are: Users gain a general understanding of empowerment and empowerment
...
indices, the importance of measuring empowerment and how a framework can help to measure empowerment. Users understand the various dimensions of COL’s Three-Dimensional Empowerment Framework and how it aligns with the definition of empowerment. Users learn what Likert questions are and some best practices for developing them. Users are able to use the framework to develop a range of Likert questions that address the various facets of empowerment. Users are able to enter data, calculate the index scores and do basic preliminary/ descriptive analysis in Microsoft Excel." (Page 1)
"[...] the purpose of [this guide] is to support managers and commissioners of impact evaluations to better manage the entire process from drawing up terms of reference, selecting contractors, steering evaluations and utilising evaluation results. The guide also argues that relying only on tradition
...
al approaches to IE does not fit well with the kind of customised, complex, locally engaged and often sensitive programmes that non-governmental organisations (NGOs) and civil society organisations (CSOs) undertake. A broader range of designs and methods are needed." (Introduction, page 3)
"During the first half of 2013, the video4change Network worked with a small team of researchers to investigate and identify the opportunities, needs, challenges and barriers for measuring the impact of Video for Change initiatives. The primary goal of this research was to assess whether support was
...
needed to evaluate the impact of Video for Change initiatives and to identify impact evaluation experiences and success stories. This research process involved two core research activities: 1. A preliminary literature review was carried out to assess available literature relating to Video for Change and the measurement of impact. A draft literature review report was created and the video4change Network was invited to comment and discuss this literature review and take part in a follow-up Google Hangout discussion. 2. A series of interviews with Video for Change practitioners and organisations (8) and Video for Change project funders (2) were carried out and analysed. [...] In carrying out this preliminary literature review and interviews, we found that there were no comprehensive books, videos, reports or compendiums that detailed the historical development of video use for social change and nor did we locate an agreed, commonly used definition of what Video for Change is or means. We were also unable to identify any widely shared perspectives on how to monitor and evaluate the impact of Video for Change initiatives; indeed, tracing the historical development of Video for Change evaluation approaches, processes and practices was complicated, messy and far more difficult and complex than we might have imagined. We found that the video4change Network organisations that we interviewed all agreed that they wanted better and shared structures to help them conceive of and undertake impact evaluation, as well as to provide training/support for it. There was also a shared desire to learn from the evaluation experiences of others and to share experiences about designing for and understanding impact." (Page 1)
"A randomized controlled trial (RCT) is a way of doing impact evaluation in which the population receiving the programme or policy intervention is chosen at random from the eligible population, and a control group is also chosen at random from the same eligible population. It tests the extent to whi
...
ch specific, planned impacts are being achieved. In an RCT, the programme or policy is viewed as an ‘intervention’ in which a treatment – the elements of the programme/policy being evaluated – is tested for how well it achieves its objectives, as measured by a predetermined set of indicators. The strength of an RCT is that it provides a very powerful response to questions of causality, helping evaluators and programme implementers to know that what is being achieved is as a result of the intervention and not anything else [...] The distinguishing feature of an RCT is the random assignment of members of the population eligible for treatment to either one or more treatment groups (who receive the intervention treatment or variations of it) or to the control group (who receive either no intervention or the usual intervention, if the treatment is an innovation to an existing intervention). The effects on specific impact areas for the different groups are compared after set periods of time." (Page 1)
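The two defining steps the excerpt names — random assignment to treatment or control, then comparison of group outcomes after a set period — can be sketched as follows. The population size, outcome model and effect size are fabricated purely to illustrate the mechanics.

```python
# Minimal sketch of the RCT logic described above: randomly assign an
# eligible population to treatment or control, then compare mean outcomes.
# Population, outcome model and effect size are fabricated for illustration.
import random

random.seed(42)
eligible = list(range(100))      # IDs of the eligible population
random.shuffle(eligible)
treatment = set(eligible[:50])   # random half receives the intervention
control = set(eligible[50:])     # the rest receive none (or the usual one)

def outcome(person_id):
    """Stand-in for a predetermined indicator measured at follow-up."""
    base = random.gauss(10, 2)                            # background variation
    return base + (1.5 if person_id in treatment else 0)  # simulated true effect

results = {pid: outcome(pid) for pid in eligible}
treat_mean = sum(results[p] for p in treatment) / len(treatment)
ctrl_mean = sum(results[p] for p in control) / len(control)
print(f"estimated effect: {treat_mean - ctrl_mean:.2f}")
```

Because assignment is random, the difference in group means is an unbiased estimate of the intervention's effect — which is the causal leverage the excerpt attributes to RCTs. A real analysis would add a significance test on that difference.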
"This article summarizes evidence for technological advances associated with population-level behavior changes necessary to advance child survival and healthy development in children under 5 years of age in low- and middle-income countries. After a rigorous evidence selection process, the authors as
...
sessed science, technology, and innovation papers that used mHealth, social-transmedia, multiplatform media, health literacy, and devices for behavior changes supporting child survival and development. Because of an insufficient number of studies on health literacy and devices that supported causal attribution of interventions to outcomes, the review focused on mHealth, social-transmedia, and multiplatform media. Overall, this review found that some mHealth interventions have sufficient evidence to make topic-specific recommendations for broader implementation, scaling, and next research steps (e.g., adherence to HIV-AIDS antiretroviral therapy, uptake and demand of maternal health service, and compliance with malaria treatment guidelines). While some media evidence demonstrates effectiveness in changing cognitive abilities, knowledge, and attitudes, evidence is minimal on behavioral endpoints linked to child survival." (Abstract)
"The guide has been formatted to be useful to people at all levels of familiarity with media evaluation. The next section begins by presenting a core framework: four “building blocks” that distill a diverse set of evaluation frameworks into a straightforward process you can use to plan a new mea
...
surement effort. We follow with a discussion of each phase of the evaluation process, offering examples from the media world and tips for both basic and advanced measurement methods. There’s a lot to absorb, so to make it digestible and practical we have included planning worksheets at the end of each section along with a list of recommended resources: tools, how-to instructions, white papers, articles, and books you can turn to for more support in specific areas." (Page 3)
"This report seeks to answer the two-pronged question, “What is ‘impact,’ and how can it be measured consistently across nonprofit newsrooms?” A review of recent, relevant literature and our informal conversations with experts in the field reveal growing ambitions toward the goal of developi
...
ng a common framework for assessing journalism’s impact, yet few definitive conclusions about how exactly to reach that framework. This is especially the case when journalism’s “impact” is defined by its ultimate social outcomes — not merely the familiar metrics of audience reach and website traffic.
As with all journalism, the frame defines the story, and audience is all-important. Defining “impact” as a social outcome proves a complicated proposition that generally evolves according to the constituency attempting to define it. Because various stakeholders have their own reasons for wanting to measure the impact of news, understanding those interests is an essential step in crafting measurement tools and interpreting the metrics they produce. Limitations of impact assessment arise from several sources: the assumptions invariably made about the product and its outcome; the divergent and overlapping categories into which nonprofit journalism falls in the digital age; and the intractable problem of attempting to quantify “quality.”
These formidable challenges, though, don’t seem to deter people from posing and attempting to find answers to the impact question. Various models for assessing impact are continually being tinkered with, and lessons from similar efforts in other fields offer useful insight for this journalistic endeavor. And past research has pointed to specific needs and suggestions for ways to advance the effort. From all of this collective wisdom, several principles emerge as the cornerstones upon which to build a common framework for impact assessment." (Abstract)
"This tool is used to assess the SBCC knowledge and competence of individuals and how successful a specific training has been for them. Individuals about to take an SBCC training fill in the pre-assessment provided in this document, then fill in the post-assessment right after completing the trainin
...
g. The questions and items are the same for both assessments. Individuals completing a successful training and filling in the post-assessment will answer a higher number of questions correctly and rate their own SBCC competencies at increased levels. It is the comparison that matters. The difference between an individual’s scores and ratings in the two assessments allows the training to be monitored and measurably evaluated. The score indicates whether an individual increased his or her SBCC knowledge, skills, and competencies during the training, and the ratings can suggest whether further training or capacity strengthening is needed and in which specific SBCC competencies." (Introduction, page 4)
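The pre/post comparison the tool rests on — same items before and after training, with the per-competency difference as the result — can be sketched as below. The competency names and scores are hypothetical, not taken from the assessment itself.

```python
# Sketch of the pre/post comparison described above: identical items are
# scored before and after training, and the post-minus-pre difference per
# competency is what gets reported. Names and scores are hypothetical.

def score_change(pre, post):
    """Return the post-minus-pre difference for each assessed competency."""
    return {comp: post[comp] - pre[comp] for comp in pre}

pre = {"audience_analysis": 2, "message_design": 3, "channel_selection": 2}
post = {"audience_analysis": 4, "message_design": 4, "channel_selection": 2}

changes = score_change(pre, post)
print(changes)
# -> {'audience_analysis': 2, 'message_design': 1, 'channel_selection': 0}

# Competencies showing no gain flag where further capacity strengthening
# may be needed, per the tool's intended use of the ratings.
needs_followup = [c for c, d in changes.items() if d <= 0]
print(needs_followup)  # -> ['channel_selection']
```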
"Quality assurance (QA) systems applied in educational contexts are generally concerned with inputs — how much money is spent, what staffing, resources and support are provided, what kinds of teaching and learning are involved, and so on. There is an assumption — not always fulfilled — that th
...
e higher the standards of the inputs, the higher the quality of the outputs. In this toolkit, we propose a different approach: the evaluation of the programmes’ outcomes, outputs and impacts. We examine the differences between informal and self-directed learning, nonformal education and training (NFE) and formal education; provide examples of NFE programmes using a variety of face-to-face, distance education and technology-based teaching and learning methods; examine the approaches to QA that are required in NFE; consider the outputs, outcomes and impacts that can be achieved in NFE programmes; propose the adoption of a rigorous but simple-to-use QA framework which is based on outputs, outcomes and impacts." (Back cover)
"The European Commission has begun to integrate a political economy approach throughout the project cycle. This guide tailors the political economy approach to the media development sector specifically. Using a political economy media matrix (PEMM) as its basis, it explains how to integrate the PE a
...
pproach throughout the programme cycle." (Background, page 4)
"This first guidance note, Introduction to Impact Evaluation, provides an overview of impact evaluation, explaining how impact evaluation differs from – and complements – other types of evaluation, why impact evaluation should be done, when and by whom. It describes different methods, approaches
...
and designs that can be used for the different aspects of impact evaluation: clarifying values for the evaluation, developing a theory of how the intervention is understood to work, measuring or describing impacts and other important variables, explaining why impacts have occurred, synthesizing results, and reporting and supporting use. The note discusses what is considered good impact evaluation – evaluation that achieves a balance between the competing imperatives of being useful, rigorous, ethical and practical – and how to achieve this. Footnotes throughout the document contain references for further reading in specific areas." (Introduction, page 1)
"Since impact evaluations focus on results and what contributes to results, logically there should be a greater enthusiasm for making use of the findings. Impact evaluations also tend to cost more than other types of evaluation, implying that they should command significant follow up as well. It tur
...
ns out, however, that credible evidence and significant costs are not sufficient to ensure that impact evaluation findings are used. This guidance note looks to several additional factors that contribute to effective evaluation use. Using impact evaluations depends importantly on how one designs and conducts them. As was illustrated in Guidance Note 3 on mixed methods, the choice of evaluation method should follow from the questions one is trying to answer. But whatever questions are being tackled, whatever evaluation methods are employed, to make the best and proper use of evaluation findings we can apply a common set of practices and insights." (Introduction)
"Mixed methods (MM) evaluations seek to integrate social science disciplines with predominantly quantitative (QUANT) and predominantly qualitative (QUAL) approaches to theory, data collection, data analysis and interpretation. The purpose is to strengthen the reliability of data, validity of the fin
...
dings and recommendations, and to broaden and deepen our understanding of the processes through which program outcomes and impacts are achieved, and how these are affected by the context within which the program is implemented. While mixed methods are now widely used in program evaluation, and evaluation RFPs frequently require their use, many evaluators do not utilize the full potential of the MM approach."
"The Nonprofit Outcomes Toolbox identifies stages in the use of outcomes and shows you how to use specific facets of existing outcome models to improve performance and achieve meaningful results. Going beyond the familiar limits of the sector, this volume also illustrates how tools and approaches lo
...
ng in use in the corporate sector can be of great analytical and practical use to nonprofit, philanthropic, and governmental organizations." (Publisher description)
"This book offers an accessible introduction to the topic of impact evaluation and its practice in development. Although the book is geared principally toward development practitioners and policy makers, we trust that it will be a valuable resource for students and others interested in impact evalua
...
tion. Prospective impact evaluations assess whether or not a program has achieved its intended results or test alternative strategies for achieving those results. We consider that more and better impact evaluations will help strengthen the evidence base for development policies and programs around the world [...] The three parts in this handbook provide a nontechnical introduction to impact evaluations, discussing what to evaluate and why in part 1; how to evaluate in part 2; and how to implement an evaluation in part 3. These elements are the basic tools needed to successfully carry out an impact evaluation." (Preface)