"Die audiovisuelle Plattform TikTok bildet mittlerweile einen relevanten Aushandlungsort für soziale Gruppen aller Art. User*innen bietet sich die Möglichkeit, eigene Räume für Vernetzung zu schaffen und Identitätsmanagement zu betreiben. Aufgrund der Plattformaffordanzen – vor allem Multimod
...
alität –bieten sich in der Theorie die Möglichkeit, auch dissente Diskurse über die eigenen Netzwerke hinaus zu teilen. So können auch geschichtsbezogene Inhalte, die beispielsweise den Holocaust delegitimieren oder Adolf Hitler heroisieren, ein Publikum jenseits der eigenen Ingroup erreichen. Der interdisziplinär gestaltete Beitrag möchte aus geschichtswissenschaftlicher Perspektive die Praktik der vergangenheitsbezogenen Hate Speech im Spannungsfeld von (Gegen-)Öffentlichkeiten in den Blick nehmen. Für die Untersuchung des Beobachtungsgegenstands auf der Plattform selbst wurde ein explorativer Zugang mit dem thematischen Fokus auf #Holocaust gewählt. Die daraus resultierenden Feldnotizen wurden mittels einer induktiven Inhaltsanalyse bearbeitet. Im Kontext dieser Betrachtung zeigt sich, dass Nutzer*innen auf multimodale Formate der Plattform zurückgreifen, um eigene Themen zu setzen und Netzwerke aufzubauen. Es wurden drei wesentliche Ebenen ausgemacht: (1) die Ebene der Kommentare, (2) die Ebene der Videos und (3) die Ebene der vernetzenden Medienhandlungen. Es zeigt sich, dass die Grenzen zwischen Öffentlichkeit(en) und Gegenöffentlichkeit(en) auf der Plattform flexibel sind und mäandern." (Abstract)
"An increasing number of media users report frequent encounters with hate speech on the internet. Content moderation is only effective when the applied criteria align with users’ perceptions of hate speech. To explore what media users perceive as hate speech and which factors influence this percep
...
tion, we used a multi-methods approach. First, we conducted a survey with a representative sample of the Swiss population (N = 2000). Second, participants who reported frequent exposure to hate speech took part in a two-week mobile longitudinal linkage study, uploading screenshots and answering questions each time they encountered hate speech. We analysed N = 564 screenshots to see if they met common academic definitions of hate speech. Our findings show that impoliteness and insults are more likely to be considered hate speech when they affect one’s social identity, and that selfreports indicate higher exposure to hate speech than what was documented via screenshots." (Abstract)
"Seit dem Terrorangriff der Hamas auf Israel am 7. Oktober 2023 spielen soziale Netzwerke eine bedeutende und vielfach unterschätzte Rolle bei der Verbreitung von Terrorpropaganda, Falschinformationen, Israelhass, Antisemitismus und Verschwörungsnarrativen. Die Bildungsstätte Anne Frank fasst in
...
diesem Report, der im Februar 2024 publiziert wurde, die Beobachtungen relevanter Plattformen aus den ersten drei Monaten nach dem Terroranschlag in einer ad-hoc-Analyse zusammen. Der Report legt den Schwerpunkt auf TikTok – das unter Jugendlichen und jungen Erwachsenen beliebteste und reichweitenstärkste Netzwerk– und schildert die drastischen Auswirkungen des TikTok-Konsums auf die politische Meinungsbildung der jungen Zielgruppe." (https://www.bs-anne-frank.de)
"This paper examines the counter-violent extremism and anti-terrorism measures in Australia, China, France, the United Kingdom and the United States by investigating how governments leveraged internet intermediaries as their surrogate censors. Particular attention is paid to how political rhetoric l
...
ed to legislation passed or proposed in each of the countries studied, and their respective restrictive measures are compared against the recommendations specified by the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. A typology for international comparison is proposed, which provides further insights into a country’s policy focus." (Abstract)
"Hate is widespread online, hits everyone, and carries negative consequences. Crowd moderation—user-assisted moderation through, e. g., reporting or counter-speech—is heralded as a potential remedy. We explore this potential by linking insights on online bystander interventions to the analogy of
...
crowd moderation as a (lost) public good. We argue that the distribution of costs and benefits of engaging in crowd moderation forecasts a collective action problem. If the individual crowd member has limited incentive to react when witnessing hate, crowd moderation is unlikely to manifest. We explore this argument empirically, investigatingseveral preregistered hypotheses about the distribution of individual-level costs and benefits of response options to online hate using a large, nationally representative survey of Danish social mediausers (N = 24,996). In line with expectations, we find that bystander reactions, especially costly reactions, are rare. Furthermore, we find a positive correlation between exposure to online hate and withdrawal motivations, and a negative (n-shaped) correlation with bystander reactions." (Abstract)
"This investigation employs the analytical framework established by Braddock and Horgan to conduct a comprehensive content analysis of 79 official English-language propaganda videos disseminated by ISIS, with the objective of quantifying the thematic composition and the evolutionary trajectory of IS
...
IS’s international media operations and propaganda machinery from 2014 to 2017. The findings reveal that a predominant portion of the videos articulate narratives extensively centred around themes of the adversary and religious discourse, with the Sharia (Islamic law) emerging as the most prevalent theme. This research concludes that at a global scale, the propaganda apparatus of ISIS has orchestrated an intricate narrative, incorporating adversarial, theological, and emotional elements, thereby delineating the advanced sophistication of ISIS’s global propaganda endeavours." (Abstract)
"The article builds on current research into the effects and harms of hate speech in the lives of its victims. It introduces the anthropological concept of everyday violence to focus on hate speech as an everyday experience as opposed to a sequence of separate hate speech acts. Methodologically, the
...
study is based on a qualitative approach and analyses data collected via semi-structured interviews (N=33) with people who have experienced hate speech in four EU member states (Italy, Germany, the Czech Republic and Portugal). The analysis documents four overlapping themes of how hate speech manifests as the everyday experience of “living hated”—hate speech as a flow; its spatial dimension of moving across online and offline contexts; its long-term effects, leading to what we call “cumulative desensitization” (aggravated during the COVID-19 pandemic); and the role of support systems and their (in)effectiveness. The article concludes by suggesting possible applications as well as avenues for future research that could provide a deeper understanding of hate speech as the daily life experience of its targets." (Abstract)
"Content moderation algorithms influence how users understand and engage with social media platforms. However, when identifying hate speech, these automated systems often contain biases that can silence or further harm marginalized users. Recently, scholars have offered both restorative and transfor
...
mative justice frameworks as alternative approaches to platform governance to mitigate harms caused to marginalized users. As a complement to these recent calls, in this essay, I take up the concept of reparation as one substantive approach social media platforms can use alongside and within these justice frameworks to take actionable steps toward addressing, undoing and proactively preventing the harm caused by algorithmic content moderation. Specifically, I draw on established legal and legislative reparations frameworks to suggest how social media platforms can reconceptualize algorithmic content moderation in ways that decrease harm to marginalized users when identifying hate speech. I argue that the concept of reparations can reorient how researchers and corporate social media platforms approach content moderation, away from capitalist impulses and efficiency and toward a framework that prioritizes creating an environment where individuals from marginalized communities feel safe, protected and empowered." (Abstract)