"Since its Joint Communication on Hybrid Threats, the EU has publicly recognized the risks to its security posed by non-traditional means aimed at undermining its legitimacy. The propagation of disinformation including misleading political advertising serves as a key example of how the Commission’
...
s perception of the EU’s vulnerability to hybrid threats in times of geopolitical instability is shaping its regulatory policies. This article uses the framework of regulatory mercantilism, which argues that in conditions of perceived vulnerability, a state-like actor will reassert regulatory control based on a security logic in areas previously characterized by self-regulatory regimes. This article considers the Commission’s 2019–2024 priorities, and how the spheres of technology, security, and democracy policies are intersecting as a response to hybrid threats. As a result, online platform governance in the EU is being substantially restructured with a move from systems of selfregulation to co-regulation backed by sanction as a means of combating hybrid threats online. The Commission’s “taking back control” from platforms in the context of a digital sovereignty agenda serves as an example of regulatory mercantilism in digital policy, which sees the Commission seek to promote regulatory strength in response to perceived vulnerability." (Abstract)
"Trust lies at the heart of the disinformation crisis, as citizens must decide which narratives to follow and whether to accept “alternative truths.” Therefore, trust in institutions that publish reliable information can act as a shield against disinformation. This comparative study investigates
...
the role of trust in news media and political actors and general attitudes toward democracy in the willingness to spread disinformation (i.e., likes, shares, or comments). Findings of this study show that news media trust plays a subordinate role but that trust in social media news has a strong relationship with willingness to spread disinformation. In 2020, citizens in the United States and United Kingdom who had high trust in their governments were more willing to spread disinformation, whereas in France and Belgium, citizens who trusted opposition leaders were more likely to do so. Moreover, citizens who were satisfied with democracy appeared to be less vulnerable to disinformation, with the exception of those in the United States. Therefore, political actors bear great responsibility for the current (dis)information crisis because they can exploit citizens’ trust to their advantage." (Abstract)
"Although disinformation has become a popular concept, we lack an integrative conceptualization that connects the actors, intentions and techniques underlying deceptive information. In this article, we conceptualize disinformation as a context-bound deliberate act for which actors covertly deceive r
...
ecipients by de-contextualizing, manipulating or fabricating information to maximize utility with the (targeted) outcome of misleading recipients. This conceptualization embeds fragmented accounts of disinformation in a networked and participatory information logic, and offers a comprehensive account of the conditions under which different actors may decide to deceive, how they deceive, and what they aim to achieve by deceiving recipients. Our conceptualization may inform (machine-learning) techniques to detect disinformation and interventions that aim to trigger suspicion by breaking through the truth-default state." (Abstract)
"Zwischen dem 19.09.2022 und 18.09.2023 wurde das Projekt „Narrative über den Krieg Russlands gegen die Ukraine“ (NUK, im Folgenden NUK) vom Institute for Strategic Dialogue (ISD Germany) im Auftrag der Bundeszentrale für politische Bildung/bpb umgesetzt. Ziel des Projekts war es, Desinformati
...
on gegenüber ukrainischen Geflüchteten innerhalb russisch- und arabischsprachiger Communities in Deutschland mithilfe von vier online verbreiteten Erklärvideos zu beleuchten. Dieser Abschlussbericht stellt die zentralen Ergebnisse der Analysen zu Desinformationsnarrativen dar, skizziert den Entstehungsprozess sowie die Inhalte der Videos und führt eine Evaluation der Verbreitung der Videos durch." (Seite 3)
"Visuals are expected to amplify the impact of disinformation, but it is rarely specified how, and what exactly distinguishes them from text. This article is one of the first to treat visual disinformation as its own type of falsehood, arguing that it differs from textual disinformation in its produ
...
ction, processing and effects. We suggest that visual disinformation is determined by varying levels of modal richness and manipulative sophistication. Because manipulated visuals are processed differently on a psychological level, they have unique effects on citizens’ behaviours and attitudes." (Abstract)
"As Russia launched its full-scale invasion of Ukraine in February 2022, social media was rife with pro-Kremlin disinformation. To effectively tackle the issue of state-sponsored disinformation campaigns, this study examines the underlying reasons why some individuals are susceptible to false claims
...
and explores ways to reduce their susceptibility. It uses linear regression analysis on data from a national survey of 1,500 adults (18+) to examine the factors that predict belief in pro-Kremlin disinformation narratives regarding the Russia–Ukraine war. Our research finds that belief in Pro-Kremlin disinformation is politically motivated and linked to users who: (1) hold conservative views, (2) trust partisan media, and (3) frequently share political opinions on social media. Our findings also show that exposure to disinformation is positively associated with belief in disinformation. Conversely, trust in mainstream media is negatively associated with belief in disinformation, offering a potential way to mitigate its impact." (Abstract)
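As a purely illustrative sketch of the kind of model this abstract describes (the study's actual variables, coding, and software are not given here; the file name and all column names below are hypothetical), a linear regression of belief in pro-Kremlin narratives on the named predictors could look like this:

```python
# Hypothetical sketch of the regression described above; "survey.csv" and the
# variable names are placeholders, not the study's actual data or code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # national survey of adults (18+), N = 1,500

# Outcome: index of belief in pro-Kremlin narratives about the Russia-Ukraine war.
# Predictors mirror the factors named in the abstract, plus basic demographics.
model = smf.ols(
    "belief_prokremlin ~ conservatism + trust_partisan_media + trust_mainstream_media"
    " + share_politics_freq + exposure_disinfo + age + education",
    data=df,
).fit()

print(model.summary())  # coefficient signs show which factors predict belief
```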
"The popular encrypted messaging and chat app WhatsApp played a key role in the election of Brazilian President Jair Bolsonaro in 2018. The present study builds on this knowledge and showcases how the app continued to be used in a governmental operation spreading false and misleading information pop
...
ularly known in Brazil as the Office of Hatred (OOH). By harnessing in-depth expert interviews with documentarians of the office’s daily operations—researchers, journalists, and fact-checkers (N = 10)—this study draws up a chronology of the OOH. Via this methodological approach, we trace and chronologize events, actions, and actors associated with the OOH. Specifically, findings (a) document the rise of antipetismo and disinformation campaigns associated with attacks on the Brazilian Worker’s party from 2012 until the election of Bolsonaro in 2018, (b) describe the emergence of the OOH at the heels of the election and subsequent radicalization in WhatsApp groups, (c) provide an overview of the types of disinformation that are spread on the app by the OOH, and (d) illustrate how the OOH operates by mapping key actors and places, communicative strategies, and audiences. These findings are discussed in light of ramifications that government-sponsored forms of disinformation might have in other antidemocratic polities marked by strongman populist leadership." (Abstract)
"This report summarizes and groups the most common recommendations to address gendered disinformation and analyzes gaps between the evidence base and the recommendations made to date. This review is aimed toward those who seek to develop solutions to gendered disinformation in its different forms, a
...
lthough stakeholders across sectors should consider how they can operationalize these recommendations. While reviewers found many actionable recommendations, they also noticed significant gaps and missed opportunities for impact and collaboration." (Executive summary, page 4)
"This research aims to understand the patterns, impact and modus operandi of gendered disinformation campaigns against women in politics in Brazil, Hungary, India, Italy and Tunisia. The case studies explore how gendered disinformation has been used by political movements, and at times the governmen
...
t itself, to undermine women's political participation, and to weaken democratic institutions and human rights. Crucially, the research also looks at the responsibilities and responses that both state actors and digital platforms have taken - or most often, failed to take - to address this issue. Despite the regional and cultural diversity of the countries analyzed, several trends and patterns emerge related to gendered disinformation." (Executive summary)
"This article examines gender-based disinformation, highlighting findings and research gaps. A PRISMA review method was used, reviewing 143 articles from 2013 to early 2023. Only 14 studies directly assessed gender dimensions in disinformation. Outcomes show that Twitter is the most studied platform
...
, with a focus on content analysis, and Global South perspectives are neglected. Using an intersectional feminist critical approach, our findings suggest three recurring themes: women are the frequent target of disinformation, narratives connect to dark politics, and proposed solutions rely on media literacy programs and regulatory measures." (Abstract)
"This paper examines the Bad News series of games, created by the Cambridge University Social Decision Making Lab and DROG Group, as an educational tool. More specifically, it considers Bad News as a persuasive game, within the umbrella of gamification. After considering the history and context of t
...
he game, the educational, motivational, and informational theoretical frameworks of the games, research, and criticisms, this paper concludes that the games are not, in themselves, transformative." (Abstract)
"This report examines why the precarious middle class in the Philippines has been particularly susceptible to digital disinformation. It focuses on two key imaginaries that disinformation producers weaponized in the year leading up to the 2022 national elections. The first was a long-simmering anti-
...
Chinese resentment, which racist social media campaigns about Philippines-China relations targeted. The other was a yearning for a “strong leader”, which history-distorting campaigns about the country’s Martial Law era amplified. Ironically, some practices adopted by members of the public to protect themselves from the toxicity and vitriol of online spaces increased their vulnerability to digital disinformation. The cumulative impact of these was for people to dig deeper into their existing imaginaries, something that disinformation producers targeted and exploited. We offer two suggestions for future counter-disinformation initiatives. The first has to do with addressing people’s vulnerability to the weaponization of their shared imaginaries. Counter-disinformation initiatives can move past divisive imaginaries by infusing creativity in imparting information. Collaborating with well-intentioned professionals in the media and creative industries would be key to these kinds of initiatives. The second has to do with addressing people’s media consumption practices. These practices tend to open them up to sustained and long-term digital disinformation campaigns, which provide them with problematic imaginaries to dig into. To establish a similarly robust common ground of reality, counter-disinformation initiatives should themselves be programmatic, not ad hoc." (Executive summary)
"In particular we recommend to strengthen collaboration (platforms should adopt a collaborative approach involving various stakeholders, including governments, civil society organisations, and fact-checkers, to counter the spread and impact of disinformation. This can include sharing information, be
...
st practices, and resources to develop effective strategies); enhance transparency (platforms should prioritise transparency by providing clear and comprehensive information on their policies, algorithms, and content moderation processes. Users should have a better understanding of how their data is used, and how algorithms work to prevent the amplification of false and misleading narratives); implement effective content moderation (platforms need to allocate sufficient resources to effectively monitor and moderate harmful content. This includes investing in advanced AI systems and human moderation teams to detect and remove disinformation in a timely manner. Transparent and consistent guidelines should be in place to ensure fairness and accountability in content moderation decisions); promote fact-based information (platforms should prioritise the promotion of fact-based information from reliable sources. This can be done by partnering with credible news organizations and fact-checkers to provide accurate information and combat false narratives. Advertising promoting climate change denial or other forms of misinformation should be prevented); improve the access to data for researchers (platforms should make efforts to provide access to data for independent researchers to evaluate the effectiveness of their policies and initiatives in countering disinformation. This will enable better analysis and understanding of the impact of disinformation and the effectiveness of countermeasures); comply with regulatory frameworks (platforms should fully comply with regulatory frameworks, such as the Digital Services Act (DSA) or other relevant International, EU and National laws and regulations, that provide for obligations on addressing disinformation and mitigating associated risks, the Code of Practice on Disinformation that aims to commit signatories to a range of actions to counter disinformation. These actions include providing transparency reports on political advertising, restricting advertising placements on disinformation websites, disrupting advertising revenue for purveyors of disinformation, and enabling user feedback and fact-checking mechanisms. In this framework, compliance should not be limited to large platforms but extended, with adjustments, to smaller platforms to ensure a comprehensive approach)." (Recommendations, page 6)
"This paper provides researchers and policymakers with an overview of which individual-level interventions are likely to influence the spread of, susceptibility to, or impact of misinformation. We review the evidence for the effectiveness of four categories of interventions: boosting (psychological
...
inoculation, critical thinking, and media and information literacy); nudging (accuracy primes and social norms nudges); debunking (fact-checking); and automated content labeling. In each area, we assess the empirical evidence, key gaps in knowledge, and practical considerations. We conclude with a series of recommendations for policymakers and tech companies to ensure a comprehensive approach to tackling misinformation." (Abstract)
"This Synthesis Report provides a formal systematic review of scientific literature on countermeasures for mitigating digital misinformation. 588 peer-reviewed global publications from many disciplines were the focus of this study in order to highlight the most effective countermeasures for mitigati
...
ng potential effects of misinformation, disinformation, and a range of related phenomena. According to the report’s selected publications, the four most often endorsed countermeasures are corrective information materials, information and media literacy content, content moderation, and content labeling. More than 10% of the analyzed publications validated these countermeasures. Research reveals several patterns in the investigation of countermeasures for combating misinformation across disciplines. Social sciences emerge as a leading area of scholarship in exploring various strategies, with one exception: content moderation, which is more actively tested in publications from the physical sciences. Simultaneously, experiment-based methodologies highlight content labeling and content reporting as the most effective countermeasures. There is no substantial geographic variation in what researchers are finding. Five important limitations in current research were identified: 1. Few publications test specific countermeasures with real-world data; 2. Some of the solutions offered in the literature are too broad to be tested; 3. Methods that are more likely to bring critical perspectives, such as interviews, focus groups, and discourse analysis, are used less often than quantitative methods; 4. Some countermeasures are understudied in particular disciplines. For example, redirection, or information and media literacy are understudied in the health and physical sciences; 5. The literature in English that is analyzed pays insufficient attention to the problem beyond a few Western countries." (Synopsis)
"This Synthesis Report examines the effectiveness of countermeasures against misinformation on social media platforms, focusing on the two most examined remedies: content labeling and corrective information interventions. A meta-analysis is a research process for synthesizing and aggregating the fin
...
dings of many, independent studies, using statistical methods to calculate overall effects from multiple data sources. A meta-analysis of 43 studies from 18 peer-reviewed manuscripts was selected from a comprehensive database of 4,798 publications. First, there is an emerging scientific consensus that content labels and corrective information help people evaluate misinformation on social media platforms. Other mitigation strategies may be viable, but there is less consensus about their effectiveness. Second, understanding the global information environment requires more research: (i) from countries around the world, (ii) about user experiences in languages other than English, (iii) with genuine access to social media data from firms, (iv) that allows scientists to standardize measures and definitions for robustly reporting the results of independent research." (Synopsis)
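To make the aggregation step concrete, a minimal fixed-effect, inverse-variance pooling of per-study effect sizes can be sketched as follows; this is a generic illustration of meta-analytic pooling, not the report's own procedure, and the numbers are invented placeholders:

```python
# Generic fixed-effect meta-analysis via inverse-variance weighting.
# Effect sizes and standard errors are invented placeholders, not values
# from the 43 studies analyzed in the report.
import numpy as np

effects = np.array([0.21, 0.35, 0.10, 0.28])  # per-study effect sizes (e.g., Cohen's d)
se = np.array([0.08, 0.12, 0.05, 0.10])       # per-study standard errors

weights = 1.0 / se**2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```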
"Alarmist narratives about online misinformation continue to gain traction despite evidence that its prevalence and impact are overstated. Drawing on research examining the use of big data in social science and reception studies, we identify six misconceptions about misinformation and highlight the
...
conceptual and methodological challenges they raise. The first set of misconceptions concerns the prevalence and circulation of misinformation. First, scientists focus on social media because it is methodologically convenient, but misinformation is not just a social media problem. Second, the internet is not rife with misinformation or news, but with memes and entertaining content. Third, falsehoods do not spread faster than the truth; how we define (mis)information influences our results and their practical implications. The second set of misconceptions concerns the impact and the reception of misinformation. Fourth, people do not believe everything they see on the internet: the sheer volume of engagement should not be conflated with belief. Fifth, people are more likely to be uninformed than misinformed; surveys overestimate misperceptions and say little about the causal influence of misinformation. Sixth, the influence of misinformation on people’s behavior is overblown as misinformation often “preaches to the choir.” To appropriately understand and fight misinformation, future research needs to address these challenges." (Abstract)
"On average in the 16 countries surveyed, 56% of internet users frequently use social media to stay informed about current events, far ahead of television (44%). However, it is worth noting that differences exist among population groups: television is the primary source in the most developed countri
...
es (55% compared to 37% for social media), while it lags significantly in countries with high (42% vs 63%) or medium/low levels of Human Development Index (HDI) (37% vs 68%) [...] The significance of social media as a source of information, especially during election campaigns, is even more crucial given that citizens believe disinformation is highly prevalent there. Across all 16 countries, 68% of internet users told us that social media is the place where disinformation is most widespread, far ahead of groups on online messaging apps (38%) and media websites/apps (20%). This sentiment is overwhelmingly prevalent in all countries, age groups, social backgrounds, and political preferences. This is even more important and citizens feel that the issue of disinformation is a real threat: 85% express concern about the impact and influence of disinformation on their fellow citizens." (Analysis of key results)
"Covid Conspiracy Theories in Global Perspective examines how conspiracy theories and related forms of misinformation and disinformation about the Covid-19 pandemic have circulated widely around the world. Covid conspiracy theories have attracted considerable attention from researchers, journalists,
...
and politicians, not least because conspiracy beliefs have the potential to negatively affect adherence to public health measures. While most of this focus has been on the United States and Western Europe, this collection provides a unique global perspective on the emergence and development of conspiracy theories through a series of case studies. The chapters have been commissioned by recognized experts on area studies and conspiracy theories. The chapters present case studies on how Covid conspiracism has played out (some focused on a single country, others on regions), using a range of methods from a variety of disciplinary perspectives, including history, politics, sociology, anthropology, and psychology. Collectively, the authors reveal that, although there are many narratives that have spread virally, they have been adapted for different uses and take on different meanings in local contexts." (Publisher description)