Document details

Self-regulation and ‘hate speech’ on social media platforms

London: Article 19 (2018), 31 pp.
"A number of recent legislative initiatives on ‘hate speech’, including most prominently the 2017 German NetzDG law on social media, make reference to some forms of self-regulation. Voluntary mechanisms between digital companies and various public bodies addressing ‘hate speech’ and other issues, such as the EU Code of Conduct on hate speech, also make reference to self-regulatory models. However, our analysis of these mechanisms demonstrates that they fail to comply with the requirements of international human rights law. They rely on vague and overbroad terms to identify unlawful content, they delegate censorship responsibilities to social media companies with no real consideration of the lawfulness of content, and they fail to provide due process guarantees. ARTICLE 19 therefore suggests exploring a new model of effective self-regulation for social media. This model could take the form of a dedicated “social media council” – inspired by the effective self-regulation models created to support and promote journalistic ethics and high standards in print media. We believe that effective self-regulation could offer an appropriate framework through which to address the current problems with content moderation by social media companies, including ‘hate speech’ on their platforms, providing it also meets certain conditions of independence, openness to civil society participation, accountability and effectiveness." (Executive summary)