"This publication is a diagnostic tool for a "health check" or assessment of the sustainability of community radio stations. The check includes five areas: mission and governance of a community radio station; management and staffing; infrastructure and finances; programming; and community involvement. Core aspects of these five areas are succinctly described in the first part of "The healthy community radio station". The second part provides checklists of ten factual questions (which require a simple yes/no answer) and two qualitative questions for each of the five sectors, and proposes combining the "checklist scores" and "judgement scores" to get an overall picture of the "health" of a given radio station. Also included are guidelines for focus group discussions. This publication is highly recommended, both because of its convincing concept of five "pillars" of community radio sustainability and its practical approach." (CAMECO Update November 2013)
"This book has been written largely as a result of the success of the Smart Toolkit for Evaluating Information Projects, Products and Services (2010) and in response to suggestions made by stakeholders to produce a compact primer. For many, the Smart Toolkit represents a major work that, despite its simplicity, at first glance appears overwhelming and daunting to use. It is envisaged that this book will provide a short overview of the Smart Toolkit for policy- and decision-makers and managers (also known as enablers) in agricultural and rural development institutions in Africa, the Caribbean and Pacific countries, as well as act as a quick reference for users." (Foreword, page 5)
"This paper draws on some preliminary findings from a comparative thematic analysis of 47 evaluation documents of media assistance programs. The aim of this analysis is to identify trends in impact evaluation practice in the media assistance field, as well as the strengths and weaknesses of different evaluation approaches. This paper presents four types of social change claims commonly presented in reports: hypothetical changes, introduction of new opportunities, concrete examples of immediate impacts, and analysis of ongoing social and political changes. Although these types may appear as a spectrum from weak to strong, the interactions are perhaps more accurately understood using metaphors such as building blocks. This paper explores these types in more detail and suggests that a robust set of impact types could be useful in developing more grounded theories of change and indicators." (Abstract)
"A comprehensive, practical guide to the on-the-ground tasks of evaluating and monitoring democracy assistance programs, from planning and implementation to preparing and presenting evaluation reports." (Publisher description)
"Overall, this paper finds that peacebuilding practitioners have much to learn in terms of new media use from their humanitarian and development colleagues. New media is increasingly and often successfully used for needs assessments and beneficiary communication but only very few examples exist in which new media are applied for (improving) monitoring and evaluation in peacebuilding contexts. In a sense, therefore, peacebuilders are latecomers to an action trend that is far more advanced in other fields." (Page 2)
"The guide has been formatted to be useful to people at all levels of familiarity with media evaluation. The next section begins by presenting a core framework: four “building blocks” that distill a diverse set of evaluation frameworks into a straightforward process you can use to plan a new measurement effort. We follow with a discussion of each phase of the evaluation process, offering examples from the media world and tips for both basic and advanced measurement methods. There’s a lot to absorb, so to make it digestible and practical we have included planning worksheets at the end of each section along with a list of recommended resources: tools, how-to instructions, white papers, articles, and books you can turn to for more support in specific areas." (Page 3)
"There are various aspects of evaluation reporting that can affect how information is used. Stakeholder needs, the evaluation purpose, and target audience should be considered when communicating results. Evaluation reporting should not only identify what, when, how, and to what extent information should be shared but also take into account how information might be received and used. In a 2006 survey of American Evaluation Association members, 68% self-reported that their evaluation results were not used. Findings such as this suggest a greater need for evaluation results to make their way off the bookshelf and into the hands of intended audiences. Similarly, in the CDC Framework for Program Evaluation, the “utility evaluation standard” charges evaluators to carry out evaluations that lead to actionable findings for intended users. This commitment to conducting evaluations that improve the lives of participants serves as the inspiration for this guide." (Introduction)
"The goal of this document is to improve the effectiveness of DFID programmes and the measurement of their impacts by providing DFID Advisers with the practical skills to develop high quality theories of change, and to understand the role they play in programme design and assessment. It is intended to help DFID Advisers articulate their theories of change more clearly and explicitly as a means of improving the effectiveness of interventions. Part I first explores the fundamentals of theories of change: what they are, why they are important, and how to create a theory of change. It explores theories of change at different levels, and concludes with advice on how theories of change can enhance the effectiveness and relevance of programming. Part II builds upon Part I by focusing on how theories of change can be used in the monitoring and evaluation stages of the project cycle. It provides practical guidance on how and why to use theory-of-change-focused monitoring and evaluation strategies, particularly exploring the ways in which theories of change can be included in any evaluation approach." (Document summary, page 3)
"This report seeks to answer the two-pronged question, “What is ‘impact,’ and how can it be measured consistently across nonprofit newsrooms?” A review of recent, relevant literature and our informal conversations with experts in the field reveal growing ambitions toward the goal of developing a common framework for assessing journalism’s impact, yet few definitive conclusions about how exactly to reach that framework. This is especially the case when journalism’s “impact” is defined by its ultimate social outcomes — not merely the familiar metrics of audience reach and website traffic.
As with all journalism, the frame defines the story, and audience is all-important. Defining “impact” as a social outcome proves a complicated proposition that generally evolves according to the constituency attempting to define it. Because various stakeholders have their own reasons for wanting to measure the impact of news, understanding those interests is an essential step in crafting measurement tools and interpreting the metrics they produce. Limitations of impact assessment arise from several sources: the assumptions invariably made about the product and its outcome; the divergent and overlapping categories into which nonprofit journalism falls in the digital age; and the intractable problem of attempting to quantify “quality.”
These formidable challenges, though, don’t seem to deter people from posing and attempting to find answers to the impact question. Various models for assessing impact are continually being tinkered with, and lessons from similar efforts in other fields offer useful insight for this journalistic endeavor. And past research has pointed to specific needs and suggestions for ways to advance the effort. From all of this collective wisdom, several principles emerge as the cornerstones upon which to build a common framework for impact assessment." (Abstract)
"[...] Amongst those agencies the following OECD DAC definition of evaluability is widely accepted and has been applied within this report: “The extent to which an activity or project can be evaluated in a reliable and credible fashion”.
Eighteen recommendations about the use of Evaluability Assessments are presented here, based on the synthesis of the literature in the main body of the report. The report is supported by annexes, which include an outline structure for Terms of Reference for an Evaluability Assessment.
An Evaluability Assessment should examine evaluability: (a) in principle, given the nature of the project design, and (b) in practice, given data availability to carry out an evaluation and the systems able to provide it. In addition it should examine the likely usefulness of an evaluation. Results of an Evaluability Assessment should have consequences: for the design of an evaluation, the design of an M&E Framework, or the design of the project itself. An Evaluability Assessment should not be confused with an evaluation (which should deliver the evaluative judgements about project achievements).
Many problems of evaluability have their origins in weak project design. Some of these can be addressed by engagement of evaluators at the design stage, through evaluability checks or otherwise. However project design problems are also likely to emerge during implementation, for multiple reasons. An Evaluability Assessment during implementation should include attention to project design and it should be recognised that this may lead to a necessary re-working of the intervention logic." (Executive summary, page 1)
"Evaluating Communication for Development presents a comprehensive framework for evaluating communication for development (C4D). This framework combines the latest thinking from a number of fields in new ways. It critiques dominant instrumental, accountability-based approaches to development and evaluation and offers an alternative holistic, participatory, mixed methods approach based on systems and complexity thinking and other key concepts. It maintains a focus on power, gender and other differences and social norms. The authors have designed the framework as a way to focus on achieving sustainable social change and to continually improve and develop C4D initiatives. The benefits and rigour of this approach are supported by examples and case studies from a number of action research and evaluation capacity development projects undertaken by the authors over the past fifteen years. Building on current arguments within the fields of C4D and development, the authors reinforce the case for effective communication being a central and vital component of participatory forms of development, something that needs to be appreciated by decision makers. They also consider ways of increasing the effectiveness of evaluation capacity development from grassroots to management level in the development context, an issue of growing importance to improving the quality, effectiveness and utilisation of monitoring and evaluation studies in this field. The book includes a critical review of the key approaches, methodologies and methods that are considered effective for planning evaluation, assessing the outcomes of C4D, and engaging in continuous learning." (Publisher description)
"Utilization Focused Evaluation (UFE) facilitates a learning process in which people in the real world apply evaluation findings and experiences to their work. The focus is on intended users. UFE does not prescribe any specific content, method, or theory. It is a guiding framework, rather than a methodology. UFE can include a wide variety of evaluation methods within an overall participatory paradigm. Decision making, in consultation with those who can benefit from the evaluation, is an important part of the process. Intended users will more likely utilize an evaluation in which they have ownership. This Primer is for practitioner evaluators and project implementers who have heard of UFE and are keen to test-drive the approach." (Lead)
"This paper focuses attention on randomised field experiments in the developing world that explicitly address the use of media to achieve democracy and governance (DG) outcomes [...] The first section describes the growth of DG assistance and the increasing interest in DG field experiments. The second section describes the domain of media assistance targeted at DG outcomes. The third provides an overview of current experimental and quasi-experimental studies on the intersection between media, democracy and governance. The fourth and fifth sections describe some of the challenges to successfully employing field experiments to inform media assistance programmes. The final section concludes by arguing for the benefits of practitioner-academic collaborations that provide experimental evidence about the influences on as well as the effects of media content related to democracy and governance." (Page 4)
"Liliana Rodríguez-Campos and Rigoberto Rincones-Gómez present their Model for Collaborative Evaluations (MCE) with its six major components: identify the situation, clarify the expectations, establish a collective commitment, ensure open communication, encourage effective practices, and follow specific guidelines. Fully updated to reflect the state-of-the-art in the field, each core chapter addresses one component of the model, providing step-by-step guidance, as well as helpful tips for successful application. To further demonstrate the utility of the MCE, this new edition includes recurring vignettes about several evaluators and clients, illustrating frequent questions and specific challenges that arise when evaluators take a collaborative approach. Drawing on a wide range of collaborative evaluations conducted in the business, nonprofit, and education sectors, this precise and easy-to-understand guide is ideal for students and practitioners who want to use its tools immediately." (Publisher description)
"In this fully revised and expanded Fourth Edition, Ronald E. Rice and Charles K. Atkin provide readers with a comprehensive, up-to-date look into the field of public communication campaigns. Updated to reflect the latest theories and research, this text extends coverage to new areas, including sun protection, organ donation, human rights, social norms, corporate social responsibility, condom use, ocean sustainability, fear messages, and digital games. Classic chapters include updates on topics such as campaign history, theoretical foundations, formative evaluation, systems approaches, input-output persuasion matrix, design and evaluation, meta-analysis, and sense-making methodology." (Publisher description)
"The article aims to summarise the current state of the discussion on the theory and practice of group discussions as applied to the field of evaluation, to outline the strengths and limits of group discussions in evaluative projects, and to encourage a further grounding of the group discussion method in evaluation. After a definitional clarification of related terms such as group discussion, focus group and group interview, an overview of the development and reception of the method in the social research and evaluation literature is given. The article then works out what distinguishes evaluation as a discipline from social research in general, and what particular potential this implies for the use of group discussions in evaluation. Two ideal-typical forms of group discussion are contrasted, and central design parameters for their planning, conduct and analysis are presented and discussed." (Abstract)
"This tool is used to assess the SBCC knowledge and competence of individuals and how successful a specific training has been for them. Individuals about to take an SBCC training fill in the pre-assessment provided in this document, then fill in the post-assessment right after completing the training. The questions and items are the same for both assessments. Individuals completing a successful training and filling in the post-assessment will answer a higher number of questions correctly and rate their own SBCC competencies at increased levels. It is the comparison that matters. The difference between an individual’s scores and ratings in the two assessments allows the training to be monitored and measurably evaluated. The score indicates whether an individual increased his or her SBCC knowledge, skills, and competencies during the training, and the ratings can suggest whether further training or capacity strengthening is needed and in which specific SBCC competencies." (Introduction, page 4)