"Ziel der hier vorgelegten Studie ist es, analog der Vorgaben des PriME-Handbuchs 2008 von InWEnt, gemäß der Kriterien für Evaluation von Entwicklungszusammenarbeit nach OECD-DAC, unter Rückgriff auf angelsächsische Modelle für PM+E im Bereich der Medienentwicklungszusammenarbeit sowie unter R
...
ückgriff auf die kommunikationswissenschaftliche Literatur ein Konzept für künftige systematische Evaluierungen der mittelfristigen Wirkungen des IIJ-Programms zu entwickeln, das über die Evaluierung unmittelbar nach Ende der Weiterbildungsmaßnahme hinausgeht. Mit Blick auf die bereits vom IIJ formulierten Programmziele zu prüfen ist der Erfolg von Capacity-Building-Maßnahmen auf Ebene der individuellen Akteure (Journalisten), auf Ebene von Organisationen (Redaktionen) sowie auf Systemebene. Hierzu wird eine Methodenkombination (Triangulation) empfohlen, die aufeinander aufbauend sowohl mit quantitativen als auch mit qualitativen Methoden arbeitet und außer der Ebene der Akteure (Journalisten) insbesondere die Ebene der Organisationen (Redaktionen) einbezieht. Um einen ökonomischen Umgang mit Ressourcen sicherzustellen, wird grundsätzlich empfohlen, mit Online-Befragungen zu arbeiten und ergänzend Telefon-Befragungen sowie fallweise vor Ort Face-to-Face-Interviews durchzuführen. Im Rahmen der Studie werden folgende PM+E-Tools für das IIJ entwickelt: Fragebogen für die Online-Befragung der Alumni (quantitativ), Fragebogen für die ergänzende Online-Befragung von Referenzpersonen der Alumni (quantitativ), Leitfaden für telefonische Interviews mit Alumni/Experten in den Zielländern (qualitativ), Leitfaden für Face-to-Face-Interviews mit lokalen Partnern (qualitativ). Der Fragebogen für die Online-Befragung als zentrales PM+E-Tool wurde erfolgreich einem Pre-Test unterzogen. Die Studie schließt mit einem Analyseraster für die Auswertung von im Rahmen von PM+E-Maßnahmen gewonnenen Daten, um gemäß der PriME-Qualitätsschleife institutionelles Lernen zu ermöglichen." (Zusammenfassung, Seite 6)
"This publication allows community radio stations to assess their performance regarding: community participation and ownership; radio governance structures and procedures; radio programme structure; radio station management; financial management and resource structure; as well as networking. Using a
...
detailed scoring system, the manual provides a comprehensive list of indicators that categorises assessed stations into four groups: evolving, progressing, performing and model community radios. It considers the issues that are at the heart of community media: public accountability, community representation, locally relevant programming, diverse funding, and due acknowledgement of staff, including volunteers. The manual is clear and concise providing a sound basis for the task it describes. Tailored to the needs of community radios in Nepal, not every single indicator may apply to stations in other countries. Nevertheless, the scoring methodology can easily be adapted to other contexts." (CAMECO Update 4-2009)
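The four-tier classification described above can be sketched in a few lines. This is purely illustrative: the score scale, the thresholds, and the idea of summing indicator scores into a ratio are assumptions, not the manual's actual cut-off points.

```python
# Illustrative sketch of a tiered scoring scheme as described in the
# manual: per-indicator scores are totalled and the resulting ratio
# mapped to one of four categories. Thresholds are assumed values.

TIERS = [
    (0.25, "evolving"),
    (0.50, "progressing"),
    (0.75, "performing"),
    (1.00, "model community radio"),
]

def categorise(scores, max_per_indicator=4):
    """Map a list of per-indicator scores to one of the four tiers."""
    ratio = sum(scores) / (max_per_indicator * len(scores))
    for ceiling, label in TIERS:
        if ratio <= ceiling:
            return label
    return TIERS[-1][1]

# A station scoring 3 of 4 on every indicator:
print(categorise([3, 3, 3, 3]))  # prints "performing"
```

Adapting the methodology to another country, as the review suggests, would then mean replacing the indicator list and thresholds while keeping the categorisation logic.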
"This is a practical and well structured manual aiming to use self-evaluation for organisational learning. The book consists of four parts. "The evaluation context" introduces the role of monitoring, evaluation and impact assessment as part of the project cycle logic. "The evaluation process" descri
...
bes steps to be taken in designing and implementing an evaluation. The third part, "evaluation tools", gives a practical insight to major evaluation methods like SWOT analysis, questionnaires (and their design), focus groups or case studies. The fourth and main part provides evaluation guidelines for training courses, newsletters, websites, small libraries and resource centres, online communities, rural radios, databases and selective dissemination of information services." (CAMECO Update 1-2011)
"[...] Fortunately, in the last decade in particular, much progress has been made on incorporating social science theory into both campaign design and evaluation, primarily in the health field. Indeed, evaluators are being encouraged to engage in theory testing and/or logic model development. Findin
...
gs from recent meta-analyses suggest that newer communication campaigns are increasingly utilizing theory. In addition, there has been great diversity in the theories being applied in this area, and many of the theories being used most often, including the Theory of Reasoned Action, Social Cognitive Theory, and the transtheoretical “Stages of Change” model, also are widely studied in the health behavior change literature.
An evaluation research team typically consists of program staff in charge of program planning and a program evaluator. Often, the program evaluator is one of the few behavioral or social scientists on the project. Without a theorist on the team, the theory behind the project is likely to remain implicit from the start. The failure to acknowledge or discuss theory from the beginning risks wasting resources on message strategies that are not adequately linked to psychosocial predictors of behavior, and on performance measures that are off the mark. Thus, all program personnel should be involved in theory/logic model development so that the theoretical underpinnings of the project are grounded in more than evaluator assumptions." (Introduction)
"This report investigates the impact media and ICTs can have on the lives of the poor, based on the experiences of nine donors and NGOs forming part of the "Building Communication Opportunities (BCO)" alliance. It suggests that radio will have the most influence on social and political change where
...
it is widely accessible, trusted by listeners, and open to inclusive participation. ICTs can help make markets work for the poor, but the surrounding circumstances are highly influential in determining in how far they make a difference. Communication networks appear to be particularly effective in building communities of activists where they enable the pooling of resources and expertise and leverage wider influence on decision-makers. However, the report concludes that evidence of the impact of ICTs is still weak. More debate is needed about how ICTs are best deployed. This requires learning how people really use the tools, as well as a more effective assessment of past and current experiences." (CAMECO Update 1-2009)
"Billions of US dollars are invested each year by the public, NGO and private sectors in information and communication technologies for development (ICT4D) projects such as telecentres, village phone schemes, e-health and e-education projects, e-government kiosks, etc. Yet we have very little sense
...
of the effect of that investment. Put simply, there is far too little impact assessment of ICT4D projects. In part that reflects a lack of political will and motivation. But in part it also reflects a lack of knowledge about how to undertake impact assessment of ICT4D. This Compendium aims to address that lack of knowledge. It presents a set of frameworks that can be used by ICT4D practitioners, policy-makers and consultants to understand the impact of informatics initiatives in developing countries. The Compendium is arranged into three parts: overview – explains the basis for understanding impact assessment of ICT4D projects, and the different assessment frameworks that can be used; frameworks – summarises a series of impact assessment frameworks, each one drawing from a different perspective; bibliography – a tabular summary of real-world examples of ICT4D impact assessment." (Introduction, page iii)
"The findings from this research reveal the complexity of delivering journalism training and the challenges involved in capturing evidence of impact. Content analysis is a useful tool for measuring change in media output. It can both inform training delivery and provide evidence of improvements to o
...
utput after training has taken place. The detailed and systematic collection and analysis of data can detect subtle changes in content, presentation of output and production elements that might not be captured by other research techniques. Although content analysis provides evidence that the output has changed it may not necessarily be a direct result of the intervention. Content analysis records media output - it does not measure the situation under which the news is produced. For example, during the training period managers might have introduced editorial guidelines or style guides to the organisation independently of the training intervention. Changes in output might be attributable to the actions of management rather than the training experience. Content analysis is also limited to measuring changes to output only – not to the skills acquired by particular trainees. For example, a trainer worked with the news room team to produce a radio package - at the last minute the management refused to broadcast the piece due to editorial policy. Although the improved content was not broadcast, and therefore not included in the content analysis, journalists acquired skills in the production process." (Research learnings, page 8)
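The core of the before/after content analysis described here can be sketched simply: code a feature of the output in a pre-training and a post-training sample, then compare the means. The coding variable ("sources cited per story") and the figures are hypothetical examples, not data from the report, and as the passage notes, a measured shift shows that output changed, not that the training caused it.

```python
# Illustrative sketch (not from the report): comparing a coded content
# feature before and after a training intervention. "Sources cited per
# story" is a hypothetical coding variable with made-up values.
from statistics import mean

before = [1, 1, 2, 1, 2, 1]   # pre-training sample
after = [2, 3, 2, 3, 2, 3]    # post-training sample

def shift(pre, post):
    """Mean change in a coded feature between the two samples."""
    return mean(post) - mean(pre)

print(f"mean change in sources cited: {shift(before, after):+.2f}")
```

A real coding scheme would track several such variables (sourcing, presentation, production elements) and test whether shifts are statistically meaningful, but the attribution caveat above applies regardless of the statistics used.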
"This guide helps you gather input at the beginning of your activities to shape your communication strategy. It also gives you the tools to monitor progress and make corrections during implementation. It was not designed as a means for looking back on past work to determine if it was successful. Ins
...
tead, the idea is to prepare up front and evaluate as you go along, so that you may adjust your tactics to ensure success. This guide is an evaluation strategy tool – not a communication planning tool. It will be most useful for those who already have a communication plan in place with clear objectives, messages, strategies and tactics. However, even if you are still in the beginning stages of designing a communication plan, it is never too early to start thinking about evaluation." (About this guide, page 1)
"Essentials for Excellence is not comprehensive but it does equip you with sufficient know-how to be able to plan and manage useful RM&E for your AI/PI strategic communication. For those seeking further information, web-links and key references are included. The guide is aimed at users who want stra
...
ightforward answers to often quite complex questions (including sampling, research design and pre-testing) who need handy tips, and who are looking for practical rather than academic advice. To be of any use, the guide is highly dependent upon your ability and willingness to adapt the suggestions offered to suit your circumstances and needs. This is also a useful basic reference for researching, monitoring and evaluating other Strategic Communication initiatives whether you are conducting an initial assessment for a child protection communication programme or a final evaluation of a hygiene promotion project." (Foreword)
"The symposium Measuring Change. Planning, Monitoring, Evaluation in Media Development focused on the utilisation aspect of evaluation1: The adding of “Planning” to “Monitoring and Evaluation” in the subtitle indicates that emphasis was laid on learning from monitoring and evaluation experie
...
nces, to facilitate the improvement of existing projects and programmes at all levels, from planning to implementation and follow-up." (Executive Summary)
"What is dialog, and how can it be measured in a meaningful way? In this article, Jacobson presents an approach to assessing participatory communication based on communication in the form of dialog as conceptualized by Jurgen Habermas." (Abstract)
"In this paper we will be looking at the evidence of impact from interventions in the so-called ‘traditional’ media – TV, radio and, to a much lesser extent, print – and factors contributing to that impact. By ‘media interventions’ we mean using mass-media in support of development objec
...
tives and as part of development projects to help bring about behaviour change – for instance, the use of TV/radio spots (also known as PSAs – public service announcements) to promote condom use, or radio programmes to promote better agricultural practices. This should not be confused with so-called ‘media development’, which aims to create independent and professional media, recognising the potential of the media as an important independent agent of social and political change, for example in governance." (Introduction)
"This note offers a brief and non-technical introduction to indicators and monitoring tools relevant to communication for development in Danida’s countries of cooperation. It is primarily aimed at supporting staff at Danish representations or at HQ responsible for preparing and managing Danish bil
...
ateral development assistance. The note may also be of assistance to those in the partner organisations of Danish aid who are responsible for monitoring, their Danida advisers, and consultants who assist in preparing and managing programmes and projects. [...] This note contains a background chapter on the strategic framework and types of Danish support for communication and development (Chapter 2), followed by a presentation of internationally defined goals, indicators and targets (Chapter 3). Subsequently, Chapter 4 addresses the issue of objectives and indicators at the national level, i.e. in PRSPs. Finally, Chapters 5, 6 and 7 are concerned with the level of the sector itself and the Danish support for it (SPS - sector programme support), discussing relevant indicators and related monitoring tools and methods at this level." (Introduction)
This document presents a methodology for evaluating the contribution that communication interventions can make to accountable governance. CommGAP engages in complementary programme areas in an effort to amplify citizen voice; promote free, independent, and plural media systems; and help government institutions communicate better with their citizens. The three programme areas are: research and advocacy; capacity building and training; and support for development projects. This document describes the evaluation framework - that is, the outcome and impact indicators, and the methodology behind the assessment - that CommGAP has developed.
"This paper discussed the possibility to improve public communication campaign theory, by making use of data obtained through mass media health communication campaign evaluations. The idea of an ‘engineering’ approach to campaign design, where theory and scientific findings are systematically us
...
ed and adopted for practical problems, plays an important role in the discussion [...] In the third part, a sample of 33 evaluation reports for mass media health communication campaigns was analyzed. 32 of these reports have not been published in an scientific journal. The evaluations were conducted in 22 different countries. The analysis of the reports focused on the campaign goals, evaluation outcome measures, research design and methods, and on questions of validity. The findings suggest that theory is not widely and consequently used to inform health mass communication campaigns or their evaluations – with notable exceptions. While there is a large number of outcomes measured, they seem to be taken out of theoretical context. Neither the campaign goals nor the evaluation measures reflect the large number of possible communication strategies that the various communication or behavior-change models and theories imply. Unintended campaign effects were mostly ignored. In very few cases the campaign designers or evaluators make use of an effects model or program logic model. This is one of the areas where I see the possibility of an important improvement. The methodology of campaign evaluation is relatively homogenous across the 33 cases in regards to data collection method. Standardized questionnaires are the dominating data collection instrument. Non-reactive observation or tracking methods are very rare. A surprising two thirds of the evaluations did not use multivariate analysis, and the reliance on self-reports raises questions of reliability." (Summary, page 120-121)
"The 29 indicators in the Guide measure the reach, usefulness, and use, as well as the collaboration, and capacity building engendered through information products and services. The innovative “Conceptual Framework for Monitoring and Evaluating Health Information Products and Services” shows how
...
they contribute to the initial, intermediate, and long-term outcomes of health development efforts—something which perhaps we all instinctively recognize but have failed to track in the past. Such a track record will go a long way to making information products and services an integral part of future global health development efforts. What makes this guide special is that it brings together knowledge about monitoring and evaluating information products and services from dozens of health organizations—all members of the HIPNET community of practice." (Foreword)
"Contains 11 studies, mainly on information projects, each with between four and eight stories told by people involved in those studies. Each set of stories is prefaced by a summary of the study." (commbox)