"This joint report from OpenMinds and the Digital Forensic Research Lab (DFRLab) analyzes the activity of a network of 3,634 automated accounts that posted pro-Russian comments on Telegram channels between January 2024 and April 2025 targeting Ukrainian populations inside Russian-occupied territorie
...
s. These accounts crafted tailored messages to target the occupied territories, differentiating their content from that aimed at other audiences across Russia and Ukraine. Our investigation found that automated Telegram comments targeting local audiences in Ukraine fell into three overarching categories: pro-Russian propaganda, anti-Ukrainian propaganda, and abstract anti-war messaging and calls for peaceful coexistence. Individual narratives were often tailored to respond to current events and changes in local conditions, such as power or water outages, but there was also evidence of proactive narratives initiated by the network unrelated to external events.
The bot network used similar messaging when targeting channels based in Russia; however, a significantly larger share of comments targeting the occupied territory channels emphasized positive portrayals of Russian culture and government. Across the sixty-nine narrative themes identified (see appendix), the bot network pushed essentially the same menu of talking points in both Russia-wide and occupied territory channels. What differentiates the content aimed at the occupied territories from that aimed at a wider Russian audience is the proportion of talking points: themes that praised Russian culture, social services, and governance dominated in occupied territory-based channels, accounting for a markedly higher share of content than in Russia-based channels. The pattern points to an effort to cultivate the appearance of local consensus in favor of occupation and Russian administrative control, but not necessarily to create genuine agreement." (Executive summary)
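Editorial note: the report's core quantitative claim is a difference in theme proportions between the two channel groups. As a minimal illustration of that kind of comparison (not the report's actual method; all counts and labels below are invented placeholders), one could test whether a theme's share differs significantly between groups:

# Hypothetical sketch: chi-square test of a narrative theme's share across
# channel groups. Counts are invented placeholders, not the report's data.
from scipy.stats import chi2_contingency

# rows: [comments matching the theme, all other comments]
table = [
    [4200, 5800],   # occupied-territory channels (hypothetical)
    [2100, 7900],   # Russia-based channels (hypothetical)
]
chi2, p, dof, _ = chi2_contingency(table)
for label, (hits, rest) in zip(["occupied territories", "Russia-wide"], table):
    print(f"{label}: theme share = {hits / (hits + rest):.1%}")
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")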
"This report compares two large random samples of Twitter accounts that tweet in English: one taken just before Elon Musk acquired Twitter in October 2022, and one taken three months later, in January 2023. It also examines several related datasets collected during the period following the acquisiti
...
on, a period in which, the study found, new accounts were created at a record-breaking pace. Some extremist and conspiracy networks created accounts faster than the baseline rate, probably because changes to Twitter’s trust and safety policies had been announced. In the context of these policy announcements, the study examines some reinstated accounts, with mixed results. Despite the loosening of several content policies, accounts that automated the sending of tweets (‘bots’) saw activity drop sharply during the period of the study, with many bot accounts being suspended or deactivated, while others voluntarily curtailed their activity in light of the API changes announced. Deactivated accounts were dominated by sex-related content and apparent financial spam or scams, often coupled with automated tweeting." (Abstract)
"In this paper, we identified seven most widely spread conspiracy discourses about earthquakes. These conspiracy discourses link earthquakes to military activities like secret nuclear bomb testing, God’s Providence like the punishment of humans for their sins, space activities like aliens visiting
...
our planet, the US secret weather control program HAARP, tests of the Large Hadron Collider, fracking projects, and freemasonic plots. Following the major earthquake in Indonesia at the end of November 2022, we extracted data from Twitter by keywords using the Hoaxy tool for tracking the spread of information on Twitter. Applying the Bot Sentinel tool, we also got data on the sentiment of the users. The divine and military discourses dominated the conspiracy discussion, followed by the discussions about extraction and HAARP. Though there were more human-like accounts than bot-like accounts, we found a positive correlation between the frequency of tweets on the conspiracy discourses and the bot scores of the accounts, which suggests that bot-like accounts were tweeting more than human-like accounts. It was also found that normal accounts tweeted more than toxic accounts, and there was a positive relationship between the bot score and the toxicity level of an account. It suggests that bot-like accounts were involved more in disruptive activities than human-like accounts." (Abstract)
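Editorial note: the abstract's central statistical claims are two correlations, bot score vs. tweet frequency and bot score vs. toxicity. A minimal sketch of how such correlations could be computed (scipy on invented placeholder values, not the paper's Hoaxy or Bot Sentinel data):

# Hypothetical sketch: Pearson correlations on per-account metrics.
# All values are invented; real inputs would come from Hoaxy / Bot Sentinel.
from scipy.stats import pearsonr

bot_score   = [0.05, 0.12, 0.35, 0.64, 0.80, 0.91]
tweet_count = [2, 3, 14, 33, 52, 76]
toxicity    = [0.08, 0.10, 0.22, 0.40, 0.55, 0.71]

r1, p1 = pearsonr(bot_score, tweet_count)
r2, p2 = pearsonr(bot_score, toxicity)
print(f"bot score vs tweet frequency: r = {r1:.2f} (p = {p1:.3f})")
print(f"bot score vs toxicity:        r = {r2:.2f} (p = {p2:.3f})")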
"In diesem Buch werden in 18 Kapiteln Phänomene der Kommunikation im digitalen Zeitalter vorgestellt und die psychologischen Hintergründe anhand von wissenschaftlichen Theorien und Analysen erörtert. Viele Menschen sind verunsichert durch eine Medienwelt, die unausgewogen erscheint (Fake News, L
...
genpresse), in der man durch Clickbait manipuliert wird und in der Social Bots bald nicht mehr einfach von echten Chatpartnern zu unterscheiden sind. Abseits aller Hysterie erhalten die Leserinnen und Leser Einblick in faszinierende Erkenntnisse der Psychologie, mit deren Hilfe sich die schillernden Phänomene der Kommunikation im digitalen Zeitalter besser verstehen lassen." (Verlagsbeschreibung)
"Artificially intelligent “bot” accounts attack politicians and public figures on social media. Conspiracy theorists publish junk news sites to promote their outlandish beliefs. Campaigners create fake dating profiles to attract young voters. We live in a world of technologies that misdirect our
...
attention, poison our political conversations, and jeopardize our democracies. With massive amounts of social media and public polling data, and in-depth interviews with political consultants, bot writers, and journalists, Philip N. Howard offers ways to take these “lie machines” apart. 'Lie Machines' is full of riveting behind-the-scenes stories from the world’s biggest and most damagingly successful misinformation initiatives—including those used in Brexit and U.S. elections. Howard not only shows how these campaigns evolved from older propaganda operations but also exposes their new powers, gives us insight into their effectiveness, and explains how to shut them down." (Publisher description)
"Computational propaganda is an emergent form of political manipulation that occurs over the Internet. The term describes the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with the manipulation of public opinion. Our research shows that this new mode of int
...
errupting and influencing communication is on the rise around the globe. Advances in computing technology, especially around social automation, machine learning, and artificial intelligence mean that computational propaganda is becoming more sophisticated and harder to track at an alarming rate. This introduction explores the foundations of computational propaganda. It describes the key role that automated manipulation of algorithms plays in recent efforts to control political communication worldwide. We discuss the social data science of political communication and build upon the argument that algorithms and other computational tools now play an important political role in areas like news consumption, issue awareness, and cultural understanding. We unpack the key findings of the nine country case studies that follow—exploring the role of computational propaganda during events from local and national elections in Brazil to the ongoing security crisis between Ukraine and Russia. Our methodology in this work has been purposefully mixed, we make use of quantitative analysis of data from several social media platforms and qualitative work that includes interviews with the people who design and deploy political bots and disinformation campaigns. Finally, we highlight original evidence about how this manipulation and amplification of disinformation is produced, managed, and circulated by political operatives and governments and describe paths for both democratic intervention and future research in this space." (Abstract)
"Our analysis of millions of Russian tweets over 2014-2018 reveals that bots make up a large proportion of the Russian political Twittersphere. However, an important lesson from our region is that one cannot assume that simply because there are bots present in the Russian political Twittersphere tha
...
t they are pro-Kremlin. Instead, as it turns out, pro-opposition, pro-Kiev, and neutral bots proliferate as well. We therefore also developed machine learning models that allow us to distinguish between three major groups of political bots in Russia at scale, including pro-Kremlin, pro-opposition, and pro-Kyiv bots. It is worth noting, though, that the fourth residual category of bots that we call neutral actually make up a plurality of these bot-orientation types. Our preliminary analysis of bot activity shows that across the entire data set, bots mainly seem to be being used to amplify political messages. In the case of neutral bots, amplification is conducted via tweeting repetitive texts, whereas non-neutral bots achieve this via retweeting. It appears that the sources of retweets from Russian political bots are either mass media with strong political orientation or prominent political figures. Exciting topics for future research would include more deeply diving into the topics of the messages shared by bots, better understanding whether the target audience for these shared messages are humans or other computer algorithms (e.g., to influence search rankings), and testing hypotheses related to over-time variation in the use of political bots, both in Russia and beyond." (Conclusion)
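Editorial note: the conclusion mentions machine-learning models that separate bot orientations at scale but does not specify the architecture. As a hypothetical baseline only (toy texts and labels; scikit-learn pipeline assumed, not the authors' model), a TF-IDF plus logistic-regression setup shows the general shape such a multiclass classifier could take:

# Hypothetical sketch: multiclass bot-orientation classifier (not the
# authors' model). Training texts and labels are invented toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "glory to the army and the president",      # invented example
    "corrupt officials must answer in court",   # invented example
    "stand with kyiv against the invasion",     # invented example
    "nice weather in the capital today",        # invented example
]
labels = ["pro-kremlin", "pro-opposition", "pro-kyiv", "neutral"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["the president spoke to the army today"]))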