"In this article we examine PolitiFact’s fact-checking process. We collect a random sample of 858 fact-checks and evaluate them in the light of criteria based on or inspired by fact-checking literature and the International Fact-checking Network’s code of principles. Our analysis reveals the fol
...
lowing: in general, PolitiFact fared well. However, from the point of view of the criteria, its practices leave room for improvement. The biggest issue is complex propositions. These are statements containing multiple claims, i.e., more than one proposition. In 279 cases (33% of our sample), PolitiFact checks a complex proposition and assigns one truth rating to it. This is problematic as the reader might misinterpret the truthfulness of an individual claim. PolitiFact also checks claims that we considered uncheckable. These are statements whose truthfulness cannot be defined in practice, e.g., claims about the future and vague claims. In 92 cases (11% of our sample), PolitiFact checked a claim like this. The article ends with a discussion about the limitations of the criteria used here." (Abstract)
"False political information spreads far and fast across social media, with negative consequences for society. Individual users play a key role in sharing such material, extending its range through the phenomenon of organic reach. An online experiment tested the hypotheses that higher trust in the s
...
ource of false information, and lower agreeableness of the person encountering it, would predict their likelihood of extending its reach. One hundred and seventy-two participants saw real examples of disinformation stories that had been posted to social media and rated their likelihood of sharing and interacting with it in other ways. Neither trust in the source nor agreeableness influenced organic reach. However, people lower in conscientiousness rated themselves as more likely to extend its reach, as did people who believed the stories more likely to be true." (Abstract)
"In most African countries, “fake news,” politically motivated disinformation, and misinformation in the media were common occurrences before these became a preoccupation in the Global North. However, with a fast-growing population of mobile users, and the popularization of apps such as WhatsApp
...
, misinformation has become much more pervasive across the continent. Researchers have shown that perceived exposure to false information is high in some African countries, and yet citizens often share made-up news intentionally. This article explores the motivations and contributing factors for sharing misinformation in six sub-Saharan African countries. Our analysis of 12 focus groups with university students reveals two common motivations: civic duty and fun. The sharing of political (dis)information was uneven, but common among students with high levels of self-reported political engagement. We also present an array of cues used to determine credibility, which often determines the shareability of information. Crossnational differences are also discussed." (Abstract)
"This study explores the various methods of combating fake news on social media such as Natural Language Processing, Hybrid model. We surmised that detecting fake news is a challenging and complex issue, however, it remains a workable task. Revelation in this study holds that the application of hybr
...
id-machine learning techniques and the collective effort of humans could stand a higher chance of fighting misinformation on social media." (Abstract)
"This research scrutinizes the content, spread, and implications of disinformation in Brazil’s 2018 pre-election period. It focuses specifically on the most widely shared fake news about Lula da Silva and links these with the preexisting polarization and political radicalization, ascertaining the
...
role of context. The research relied on a case study and mixed-methods approach that combined an online data collection of content, spread, propagators, and interactions’ analyses, with in-depth analysis of the meaning of such fake news. The results show that the most successful fake news about Lula capitalized on prior hostility toward him, several originated or were spread by conservative right-wing politicians and mainstream journalists, and that the pro-Lula fake news circulated in smaller networks and had overall less global reach. Facebook and WhatsApp were the main dissemination platforms of these contents." (Abstract)
"Auf der Grundlage der identifizierten Schutzlücken erarbeitet das Gutachten mögliche Gegenmaßnahmen und beschreibt die nötigen Wirkungsvoraussetzungen. Die zentrale Frage lautet: Welche Risikopotenziale für individuelle und gesellschaftliche Interessen weist Desinformation auf und welche Gover
...
nance-Maßnahmen können darauf adäquat reagieren? Die Beantwortung dieser Leitfrage erfolgt dabei in drei Schritten: Vorangestellt (Kap. 2) werden die in wissenschaftlichen und medienpolitischen Diskussionen differenzierten Erscheinungsformen von Desinformation sowie ihre jeweiligen Begriffsverständnisse zusammengefasst und auf ihre Risikopotenziale hin untersucht. Ziel ist es, die Spannweite betroffener Phänomene aufzuzeigen und sie von anderen Erscheinungsformen und Begrifflichkeiten zu differenzieren. Dabei erfolgt auch eine Bewertung der Abgrenzungsindikatoren im Hinblick auf die Nutzbarkeit für rechtliche bzw. regulatorische Anknüpfungspunkte. Zudem wird hier kurz der Stand der Forschung hinsichtlich der abträglichen Effekte von Desinformation für individuelle und gesellschaftsbezogene Schutzziele einbezogen; Kenntnisse über Wirkungen von Desinformation auf einzelne Rezipientinnen und Rezipienten liegen hier bislang nur lückenhaft vor. Dies steht in gewissem Kontrast zu den eher impliziten Unterstellungen, die den aktuellen Regulierungsforderungen zugrunde liegen. Dort, wo empirische Evidenzen vorliegen, zeigt das Gutachten jedenfalls vermutete Effekte und ihre Risikopotenziale auf. Im zweiten Schritt (Kap. 3) wird der geltende Rechtsrahmen daraufhin untersucht, welche gesetzlichen Vorkehrungen gegen eine Risikorealisierung bereits bestehen und welche untergesetzlichen Initiativen sich auf Ebene von Ko- und Selbstregulierung entwickelt haben, die als Gegenkraft wirken können. An dieser Stelle setzt die Untersuchung die Arbeit des GVK-Gutachtens von Möller, Hameleers und Ferreau fort,5 indem bestehende risikospezifische Schutzlücken mit Blick auf die identifizierten Risikopotenziale herausgearbeitet werden. Dort, wo Schutzlücken erkennbar werden, zeigt das Gutachten staatliche Handlungsmöglichkeiten und -grenzen auf. Im dritten Schritt (Kap. 4) werden regulatorische Ansatzpunkte und -instrumente, die in der Lage sind, die identifizierten Schutzlücken zu schließen, beleuchtet. Klassische Ansätze der Medienregulierung eignen sich hier meist begrenzt, da für den Bereich der öffentlichen Kommunikation der Grundsatz gilt, dass es nicht staatliche Aufgabe sein kann und darf, über die Einstufungen wahr/unwahr oder erwünschte Meinung/unerwünschte Meinung zu befinden. Hier müssen – soweit überhaupt Handeln angezeigt ist – Wege staatsferner, prozeduraler Steuerung betreten6 oder alternative Formen von inhalts- und technikbezogener Governance entwickelt werden. Alternativ oder ergänzend kommen neben Maßnahmen, die diskursermöglichend oder -unterstützend wirken, auch Gegenmaßnahmen in Betracht, die informationsintegritätssteigernde oder -integrierende Wirkungen haben können." (Seite 4-5)
"In the present report, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression examines the threats posed by disinformation to human rights, democratic institutions and development processes. While acknowledging the complexities and challenges posed
...
by disinformation in the digital age, the Special Rapporteur finds that the responses by States and companies have been problematic, inadequate and detrimental to human rights. She calls for multidimensional and multi-stakeholder responses that are well grounded in the international human rights framework and urges companies to review their business model and States to recalibrate their responses to disinformation, enhancing the role of free, independent and diverse media, investing in media and digital literacy, empowering individuals and rebuilding public trust." (Summary)
"In this article, we analyze the spread of political disinformation in events of discursive struggles on Twitter, during the 2018 presidential election in Brazil. These were disputes for the hegemonic narrative between two stories based on opposed hashtags: one based on news from mainstream media an
...
d the other, based on disinformation, mostly from hyperpartisan outlets. Our goal was to understand how hyperpartisan outlets created and shaped these discursive struggles and the strategies used to spread disinformation to create an “alternative narrative” to the facts. Our case study is focused on two discursive struggles, for which we will use critical discourse analysis and social network analysis. Our findings suggest that (1) the structure of the hashtag wars was very polarized and right-wing groups had higher exposure to hyperpartisan content and disinformation, while traditional media discourse circulates more among other different ideological clusters; (2) rightwing hyperpartisan media mostly used biased framing and polarized ideological discourse structure as manipulative strategies to reframe the events and create a counter-narrative (and thus, to create the dispute); and (3) opinion leaders were major spreaders of disinformation among far-right users, as they reinforced hyperpartisan content and became key actors in the discursive struggles (and thus, reinforced the dispute)." (Abstract)
"First, disinformation is enabled by the Philippines' history of colonialism and martial law, high social media usage and low digital literacy, compounding crises, strongman governance, and sexist and misogynist rhetoric from elected leaders. While the first three factors create conditions that gene
...
rally enable disinformation, the final two factors directly contribute to the prevalence of gendered disinformation. Second, disinformation is used as a tool to confuse, distract, revise, and discredit, with the aim of suppressing dissent. More critically, analysis of Twitter data indicates that disinformation used to discredit relies on the policing of gender to undermine political opposition. Third, thsi policing of gender results in the weaponization of gendered relationships, which encourages narratives that reinforce gender inequalities. Finally, illiberal actors benefit from an environment marked by gender inequality, as such conditions support hegemonic masculine norms, which in turn consolidate authoritarian power. As a result, President Duterte and his supporters benefit from disinformation that encourages gender inequality and pursue disinformation as a tactic for weakening democratic governance in the Philippines." (Executive summary)
"Drawing on a qualitative analysis of 7,506 tweets by state-sponsored accounts from Russia’s GRU and the Internet Research Agency (IRA), Iran, and Venezuela, this article examines the gender dimensions of foreign influence operations. By examining the political communication of feminism and women
...
s rights, we find, first, that foreign state actors co-opted intersectional critiques and countermovement narratives about feminism and female empowerment to demobilize civil society activists, spread progovernment propaganda, and generate virality around divisive political topics. Second, 10 amplifier accounts—particularly from the Russian IRA and GRU—drove more than one-third of the Twitter conversations about feminism and women’s rights. Third, high-profile feminist politicians, activists, celebrities, and journalists were targeted with character attacks by the Russian GRU. These attacks happened indirectly, reinforcing a culture of hate rather than attempting to stifle or suppress the expression of rights through threats or harassment. This comparative look at the online political communication of women’s rights by foreign state actors highlights distinct blueprints for foreign influence operations while enriching the literature about the unique challenges women face online." (Abstract)
"This report strives to build awareness of the direct and indirect impacts of gendered and sexualized disinformation on women in public life, as well as its corresponding impacts on national security and democratic participation. In an analysis of online conversations about 13 female politicians acr
...
oss six social media platforms [Prime Minister Jacinda Ardern, New Zealand, Secretary of State for the Home Department Priti Patel, UK, Deputy Prime Minister Chrystia Freeland, Canada, and 10 US politicians], totaling over 336,000 pieces of abusive content shared by over 190,000 users over a two-month period, the report defines, quantifies, and evaluates the use of online gendered and sexualized disinformation campaigns against women in politics and beyond. It also uses three in-depth interviews and two focus groups to emphasize the impacts gendered abuse and disinformation have on women’s daily lives." (Executive summary, page 1)
"This article examines the ongoing dynamics in the regulation of disinformation in Europe, focusing on the intersection between the right to freedom of expression and the right to privacy. Importantly, there has been a recent wave of regulatory measures and other forms of pressure on online platform
...
s to tackle disinformation in Europe. These measures play out in different ways at the intersection of the right to freedom of expression and the right to privacy. Crucially, as governments, journalists, and researchers seek greater transparency and access to information from online platforms to evaluate their impact on the health of their democracies, these measures raise acute issues related to user privacy. Indeed, platforms that once refused to cooperate with governments in identifying users allegedly responsible for disseminating illegal or harmful content are now expanding cooperation. However, while platforms are increasingly facilitating government access to user data, platforms are also invoking data protection law concerns as a shield in response to recent efforts at increased platform transparency. At the same time, data protection law provides for one of the main systemic regulatory safeguards in Europe. It protects user autonomy concerning data-driven campaigns, requiring transparency for internet audiences about targeting and data subject rights in relation to audience platforms, such as social media companies." (Abstract)
"Based on key term searches and forward and backward citation mapping, we constructed a review of 223 studies published since 1972 related to countermeasures designed to combat influence operations. Each identified study included: (1) a source of variation in exposure to countermeasures; (2) a clear
...
ly defined outcome of interest for some specified population; (3) relevance to thinking about the potential of an intervention to impact real-world behavior; and (4) enough detail to evaluate the credibility of the findings. This approach amounts to sampling the foundational research surrounding countermeasures and thus incorporates the collective judgement of this emerging field. All of the studies we identified examined user-focused countermeasures, i.e., those aimed at the consumers of disinformation. None looked at countermeasures aimed at impacting the influence operations directly. There exists a mismatch between the major interventions taken by platforms - algorithmic downranking, content moderation, redirection, and deplatforming accounts - and those studied by the research community. Most papers we reviewed focus on one particular method for countering information operations: fact-checking and its many offshoots. The types of interventions employed by social media companies on actual users are understudied. We recommend further research on four key areas: (1) measuring the impact of the most common interventions by social media platforms, (2) assessing the impact of countermeasures on real-world behaviors (both online and offline), (3) evaluating the efficacy of countermeasures in non-Western contexts, and (4) studying countermeasures that target the creators of disinformation content in addition to studying consumer-facing policies." (Essay summary)
"Disinformation is proliferating on the internet, and platforms are responding by attaching warnings to content. There is little evidence, however, that these warnings help users identify or avoid disinformation. In this work, we adapt methods and results from the information security warning litera
...
ture in order to design and evaluate effective disinformation warnings. In an initial laboratory study, we used a simulated search task to examine contextual and interstitial disinformation warning designs. We found that users routinely ignore contextual warnings, but users notice interstitial warnings—and respond by seeking information from alternative sources. We then conducted a follow-on crowdworker study with eight interstitial warning designs. We confirmed a significant impact on user information-seeking behavior, and we found that a warning’s design could effectively inform users or convey a risk of harm. We also found, however, that neither user comprehension nor fear of harm moderated behavioral effects. Our work provides evidence that disinformation warnings can—when designed well—help users identify and avoid disinformation. We show a path forward for designing effective warnings, and we contribute repeatable methods for evaluating behavioral effects. We also surface a possible dilemma: disinformation warnings might be able to inform users and guide behavior, but the behavioral effects might result from user experience friction, not informed decision making." (Abstract)
"[...] this Research Report has selected four country case studies: Sweden, Canada, the United Kingdom, and France. Obviously, other cases would have been interesting, particularly the United States. But the United States is already at the centre of other works, including by Hybrid CoE. Being divers
...
e in terms of power, geopolitical situation, and systems of government, the four selected countries offer a good sample of what liberal democracies, different in colour, shape and size, can propose to counter disinformation. Finally, this Research Report will attempt to draw some general lessons from these four cases, on what an effective state response to disinformation should involve." (Page 9)
"There is no commonly agreed typology framework, specific categorization criteria, and explicit definitions as a basis to assist the further investigation of the area. Our work is focused on filling this need. Our contribution is twofold. First, we collect the various implicit and explicit disinform
...
ation typologies proposed by scholars. We consolidate the findings following certain design principles to articulate an all-inclusive disinformation typology. Second, we propose three independent dimensions with controlled values per dimension as categorization criteria for all types of disinformation. The taxonomy can promote and support further multidisciplinary research to analyze the special characteristics of the identified disinformation types." (Abstract)
"This case study examines two websites and two blogging accounts which appear to be linked to News Front, a Crimean-based news organisation previously accused of being a source of pro-Kremlin disinformation and influence operations. The sites and accounts shared stories and images in many cases iden
...
tical to those published by News Front without attribution. While the impact of the site’s activity is low, the case study provides a glimpse into the inner workings of the broader disinformation ecosystem, highlighting how particular pieces of content and narratives linked to pro-Kremlin influence can spread across the internet using multiple domains. More broadly, it sheds light on the long tail of state-linked online assets. The report shows how disinformation purveyors can use tactics like domain cloaking to avoid platform removals, suggesting a need for renewed thinking about effective forms of policy response for complex disinformation networks." (Publisher description)
"As of July 2021, Telegram had 550 million active users worldwide – more than the individual user bases of Twitter, Snapchat or Discord. It is the fifth most-popular messaging app after Facebook-owned Whatsapp and Messenger, and WeChat and QQ which dominate the Chinese market [...] For this paper,
...
I looked at Telegram’s policies and functionalities to help understand what made it so attractive to misinformation actors both in the Ukraine, which has a long history of Telegram engagement, and Brazil, Spain and Germany where it has had more of an impact in recent years. According to the journalists and digital researchers I interviewed about investigating misinformation and disinformation on Telegram, there are ways to address the issue, both on and off the platform: by investigating movements and their political or financial interest, by producing more responsible journalism, through clearer communication from governments, and through the continued moderation efforts on other social media platforms." (Pages 7-8)