"Hate is widespread online, hits everyone, and carries negative consequences. Crowd moderation—user-assisted moderation through, e. g., reporting or counter-speech—is heralded as a potential remedy. We explore this potential by linking insights on online bystander interventions to the analogy of
...
crowd moderation as a (lost) public good. We argue that the distribution of costs and benefits of engaging in crowd moderation forecasts a collective action problem. If the individual crowd member has limited incentive to react when witnessing hate, crowd moderation is unlikely to manifest. We explore this argument empirically, investigatingseveral preregistered hypotheses about the distribution of individual-level costs and benefits of response options to online hate using a large, nationally representative survey of Danish social mediausers (N = 24,996). In line with expectations, we find that bystander reactions, especially costly reactions, are rare. Furthermore, we find a positive correlation between exposure to online hate and withdrawal motivations, and a negative (n-shaped) correlation with bystander reactions." (Abstract)
"Content moderation algorithms influence how users understand and engage with social media platforms. However, when identifying hate speech, these automated systems often contain biases that can silence or further harm marginalized users. Recently, scholars have offered both restorative and transfor
...
mative justice frameworks as alternative approaches to platform governance to mitigate harms caused to marginalized users. As a complement to these recent calls, in this essay, I take up the concept of reparation as one substantive approach social media platforms can use alongside and within these justice frameworks to take actionable steps toward addressing, undoing and proactively preventing the harm caused by algorithmic content moderation. Specifically, I draw on established legal and legislative reparations frameworks to suggest how social media platforms can reconceptualize algorithmic content moderation in ways that decrease harm to marginalized users when identifying hate speech. I argue that the concept of reparations can reorient how researchers and corporate social media platforms approach content moderation, away from capitalist impulses and efficiency and toward a framework that prioritizes creating an environment where individuals from marginalized communities feel safe, protected and empowered." (Abstract)
"Antisemitism Surges Online: Antisemitic content on platforms like X spiked by 919 percent following the October 7 Hamas attacks, spreading hate that deeply affects Jewish users, especially the young. Enhanced moderation systems combining AI and human oversight, along with stricter regulations, are
...
needed to curb this surge.
Gaming as a Breeding Ground for Hate: Unmoderated gaming spaces foster antisemitic slurs, memes, and symbols, normalizing hate speech in digital culture. Gaming platforms must implement stronger moderation, promote education, and create inclusive communities.
Anonymity Fuels Toxicity: Anonymous accounts enable users to engage in hate speech without any fear of repercussions, fostering a hostile online environment. Balancing user privacy with accountability through improved tracking and penalties can address this issue.
Algorithms Amplify Hate Speech: Social media algorithms prioritize divisive content, creating echo chambers that spread antisemitism and extremist ideas. Platforms must redesign algorithms to limit harmful content, with oversight and updated laws holding them accountable." (Executive summary)
"This report offers a high-level, evidence-informed guide to some of the major proposals for how democratic governments, platforms, and others can counter disinformation. It distills core insights from empirical research and real-world data on ten diverse kinds of policy interventions, including fac
...
t-checking, foreign sanctions, algorithmic adjustments, and counter-messaging campaigns. For each case study, we aim to give policymakers an informed sense of the prospects for success—bridging the gap between the mostly meager scientific understanding and the perceived need to act. This means answering three core questions: How much is known about an intervention? How effective does the intervention seem, given current knowledge? And how easy is it to implement at scale?" (Summary, page 1)
"This paper examines the counter-violent extremism and anti-terrorism measures in Australia, China, France, the United Kingdom and the United States by investigating how governments leveraged internet intermediaries as their surrogate censors. Particular attention is paid to how political rhetoric l
...
ed to legislation passed or proposed in each of the countries studied, and their respective restrictive measures are compared against the recommendations specified by the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. A typology for international comparison is proposed, which provides further insights into a country’s policy focus." (Abstract)
"Rather than selling authoritarianism as such, authoritarian narratives focus on themes that have popular appeal—while attributing a wide range of visceral grievances to the shortcomings of democracy. Authoritarian narratives fall into four broad categories: 1. Noninterference, Choice, and Threats
...
to Sovereignty: Narrative attempts to invoke universal themes such as sovereignty, noninterference, and choice which are presented as under threat from the spread of democracy. 2. Exploiting Grievances in the Global South: Tactics designed to attribute the numerous grievances in the Global South to exploitation by the West. 3. Democracies Failing to Deliver: A narrative that takes aim at the efficacy of democracy and, by implication, amplifies the ill-informed narratives about effectiveness of authoritarian governance. 4. Need for a New World Order: Collectively, the claims of Western interference, exploitation, and governance failures are intended to generate disillusionment with democracy and receptiveness to nondemocratic rule. Autocrats use a variety of channels to disseminate these preferred narratives at scale. The four following methods are particularly noteworthy: 1. Social Media: Authoritarians have taken advantage of the enormous—and still growing—social media sphere to promote narratives legitimizing autocracy. They exploit many users’ limited digital literacy skills through information influence campaigns and the employment of bots and online “troll farms” to peddle their preferred worldview. 2. State Broadcasters: Authoritarian actors also disseminate narratives through state media like RT, Sputnik, Xinhua, and China Global Television Network (CGTN). These outlets have the tone and imprimatur of an official news service, giving them a veneer of credibility that expands their reach. 3. Partnerships with Local Media: Authoritarian state-backed outlets aim to embed their content within national information environments. By disseminating preferred narratives through local media outlets and training foreign journalists, authoritarian actors are able to propagate norms of state control over the public information sphere. 4. Foreign Media Cooptation: Finally, authoritarian states are forging partnerships with other state broadcasters. 
These relationships have the indirect effect of incentivizing self-censorship and enable the intimidation of journalists and activists who criticize authoritarian leadership." (Executive summary, page 1-.2)
"Through an online field experiment, we test traditional and novel counter-misinformation strategies among fringe communities. Though generally effective, traditional strategies have not been tested in fringe communities, and do not address the online infrastructure of misinformation sources support
...
ing such consumption. Instead, we propose to activate source criticism by exposing sources’ unreliability. Based on a snowball sampling of German fringe communities on Facebook, we test if debunking and source exposure reduce groups’ consumption levels of two popular misinformation sources. Results support a proactively engaging counter-misinformation approach to reduce consumption of misinformation sources." (Abstract)
"How has the Philippine disinformation landscape evolved since 2016? How different was the 2022 presidential election from previous electoral cycles? And what lessons can we learn from electoral triumphs and defeats often associated with disinformation? This report goes to the heart of these questio
...
ns. Our aim is to understand the evolving character of disinformation—the tactics used, actors involved, the wider context in which disinformation unfolds, and the responses of the government, tech platforms, and civil society to these trends. In doing so, we hope to generate actionable insights on impactful responses to disinformation, with a view of preparing for the 2025 midterm election." (Introduction)
"The Media Manipulation Casebook is a research platform [launched in 2019] that advances knowledge of misinformation and disinformation and their threats to democracy, public health, and security. The Casebook is a resource for building the field of Critical Internet Studies by equipping researchers
...
with case studies, theory, methods, and frameworks to analyze the interplay of media ecosystems, technology, politics, and society. Though the Technology and Social Change project (TaSC) project has ended as of September 2023, the Casebook site will remain live as a research resource." (About us)
"The purpose of this report is to gain a better understanding of the pertinent dynamics and to bolster the design of programming to support the information ecosystem around elections. In aid of this, UNDP sought information through a number of channels, in a review of the relevant literature, a seri
...
es of regional consultations, expert meetings and a survey [...] The various sources all conclude there remains no single panacea to the ills that information pollution brings upon elections. Rather, there is a variety of information pollution programming around elections, each with its own benefits and deficiencies. In order to support the design of a holistic information integrity strategy, this report suggests that programmes seek to address one or more of the following three concerns (1) prevention—to address the supply side of information pollution by preventing or deterring the creation of information pollution, (2) resilience—building public resilience to information pollution limiting the ability of users to be influenced or co-opted by information pollution and (3) countering—identifying and attempting to counter information pollution." (Summary, page 8-9)
"Les objectifs du projet cadrent bien avec un certain nombre de politiques ou stratégies comme décrit dans le présent rapport. A titre d’exemple, on citer la politique nationale de lutte contre le terrorisme et l’extrémisme violent dans son Pilier 1, ainsi que la stratégie nationale de la r
...
éconciliation et la cohésion sociale dans son axe 2. En termes d’efficacité, on peut noter un niveau de réalisation des activités et résultats globalement très satisfaisant. Cependant, le dépassement des cibles pour la plupart des activités et indicateurs montrent que le contexte du Pays a conduit le projet à une certaine prudence dans sa planification. Ce qui soulève également la question d’efficience du projet (même si ce critère n’a pas été abordé dans la présente évaluation), notamment en ce qui concerne l’utilisation optimale des ressources (humaines, matérielles et financières) qui pourraient être mieux planifiées pour d’autres activités afin d’obtenir des effets sur une échelle plus large. Le projet a réussi également à mobiliser les acteurs autour de la lutte contre la désinformation, en mettant l’accent sur l’implication des femmes et des jeunes. En matière de collaboration, le projet a renforcé les capacités des acteurs (ambassadeurs de paix, professionnels des médias) afin de mener des activités ensemble pour la prévention et la lutte contre la désinformation et les discours de haine. Les interventions du projet ont également impulsé un changement progressif des attitudes et des perceptions des populations et des acteurs traitant ou consommant l’information. Cela se manifeste par l’adoption d’attitudes positives face aux fausses informations et aux rumeurs, par des membres des communautés. Bien que des mécanismes soient en place pour maintenir ces acquis, à travers des comités de suivi/veille créés dans plusieurs localités, il conviendrait de renforcer ces mécanismes pour la durabilité des acquis par l’appui à l’élaboration des plans de désengagement permettant une meilleure responsabilisation des acteurs locaux." (Conclusions, page 53)
"The purpose of this study is twofold: to understand how the Lebanese public consumes news published on traditional and alternative media, with a focus on how they perceive and deal with disinformation campaigns and fake news, and to build a comprehensive view of the organizations and initiatives th
...
at are working on mis/disinformation in Lebanon since 2019. Understanding the media landscape and media consumption in Lebanon will inform future interventions on disinformation. The first part of the report examines the media landscape vis-à-vis the legal framework that governs broadcast and print media. It also offers a glimpse of news consumption behaviors in Lebanese society and discusses disinformation narratives that emerged around major events that have occurred in the last three years. The report will showcase how disinformation thrives in critical moments and provide analysis on the different factors that contribute to the surge in disinformation. The second part of the report presents the findings from the mapping that the Samir Kassir Foundation (SKF) conducted on initiatives and organizations that target mis/disinformation in Lebanon. This section also provides an assessment of some projects that were implemented in the last three years." (Executive summary)
"The proliferation of misinformation on social media platforms (SMPs) poses a significant danger to public health, social cohesion and ultimately democracy. Previous research has shown how social correction can be an effective way to curb misinformation, by engaging directly in a constructive dialog
...
ue with users who spread – often in good faith – misleading messages. Although professional fact-checkers are crucial to debunking viral claims, they usually do not engage in conversations on social media. Thereby, significant effort has been made to automate the use of fact-checker material in social correction; however, no previous work has tried to integrate it with the style and pragmatics that are commonly employed in social media communication. To fill this gap, we present VerMouth, the first large-scale dataset comprising roughly 12 thousand claim-response pairs (linked to debunking articles), accounting for both SMP-style and basic emotions, two factors which have a significant role in misinformation credibility and spreading. To collect this dataset we used a technique based on an author-reviewer pipeline, which efficiently combines LLMs and human annotators to obtain high-quality data. We also provide comprehensive experiments showing how models trained on our proposed dataset have significant improvements in terms of output quality and generalization capabilities." (Abstract)