Countering hate speech online, disinformation and content moderation
A 2018 UN report to the Human Rights Council by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression examined the regulation of user-generated online content in response to “fake news”, disinformation and online extremism. The Special Rapporteur made five recommendations:
- repeal laws that criminalise or unduly restrict expression online and offline
- adopt smart regulation as the norm, protected by an independent judicial authority, and refrain from sanctions on internet intermediaries that have a chilling effect
- protect privacy and prevent censorship by refraining from proactive filtering and monitoring
- judicial authorities rather than government agencies or companies should become arbiters of lawful expression
- involve public input in regulatory considerations and provide detailed transparency on all content-related requests issued to intermediaries
The African Commission on Human and Peoples’ Rights’ 2019 declaration on freedom of expression and access to information in Africa, mentioned above, ‘calls for multi-stakeholder engagement to settle key governance questions and for both states and private companies to adhere to a strict human rights–based approach in designing content moderation policies.’
To prevent and counter the spread of illegal hate speech online, the European Commission agreed a “Code of conduct on countering illegal hate speech online” with several large online platforms in 2016. According to the EU, the code shows that ‘on average the companies are now assessing 90% of flagged content within 24 hours and 71% of the content deemed illegal hate speech is removed’.
A critical analysis of the Code of Conduct developed by Article19 outlines how it is reflected at the national level in light of proposals for new regulatory systems for content moderation. Among its findings are:
- a broad definition of “illegal hate speech”
- no commitment to freedom of expression
- lack of due process guarantees
- propensity to promote censorship
Along with this Code of Conduct, a separate Code of Practice on Disinformation was adopted. The signatories agree, on a voluntary basis, to self-regulatory standards to fight disinformation by providing transparency in political advertising, closing fake accounts and demonetising purveyors of disinformation.
The Council of Europe has adopted policy guidelines on the roles and responsibilities of internet intermediaries, which, among other things, underline the transparency of all content moderation processes.
In April 2020, the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression presented a report on disease pandemics and the freedom of opinion and expression to the Human Rights Council. The Special Rapporteur raised concerns that some national and international measures to combat the COVID-19 pandemic may be failing to meet international human rights standards related to freedom of expression, and identified five areas of concern:
- limitations to the right to access, impart and receive information
- restrictions to Internet access
- threats to journalism
- public health disinformation
- an increasing use of surveillance tools
A discussion on disinformation can be found in the Accountability Resource Guide on this website. Current regulatory discussions on disinformation have been given an increased sense of urgency due to the dis- and misinformation being spread about COVID-19. See also the Resource Guide on Media and COVID-19.
Regulating recommendation and curating systems
Digital platforms employ algorithmic methods to recommend and curate news in order to optimise engagement with their users. As these algorithms continuously evolve, news organisations struggle, or, in the case of small media outlets, fail to keep pace with these changes. This seriously influences the distribution and consumption of news and undermines the ability of news outlets to monetise their content.
Read more on recommendation and curating algorithms:
– Section ‘The impact of technology’ (pp. 46-71), in The Impact of Digital Platforms on News and Journalistic Content (2018)
– Use of AI in Online Content Moderation (2019)
– Exposure diversity as a design principle for recommender systems (2018)
The lack of transparency in the way that online platforms select and rank news, the responsibility bestowed upon them as primary gateways for news dissemination, and their routing of disinformation and other false and extreme content to users have drawn increasing attention to the regulation of their operations.
In 2019, the European Parliament Panel for the Future of Science and Technology published the study A governance framework for algorithmic accountability and transparency. It discusses and makes recommendations for policy options, which include:
‘4.3 Regulatory oversight and Legal liability
- The creation of a regulatory body for algorithmic decision-making tasked with:
  - Establishing a risk assessment matrix for classifying algorithm types and application domains according to potential for significant negative impact on citizens
  - Investigating the use of algorithmic systems where there is a suspicion (e.g. evidence provided by a whistleblower) of infringement of human rights
  - Advising other regulatory bodies regarding algorithmic systems as they apply to the remit of those agencies
- That systems classified as causing potentially severe non-reversible impact be required to produce an Algorithmic Impact Assessment, similar to public sector applications
- That systems with medium severity non-reversible impacts require the service provider to accept strict tort liability, with a possibility of reducing the liability by having the system certified as compliant with (as yet to be determined) best-practice standards
4.4 Global coordination for algorithmic governance
- The establishment of a permanent global Algorithm Governance Forum (AGF) for multi-stakeholder dialog and policy expertise related to algorithmic systems, and associated technologies. Based on the principles of Responsible Research and Innovation, the AGF would provide a forum for coordination and exchanging of governance best-practices related to algorithmic decision-making.
- The adoption of a strong position in trade negotiations to protect regulatory ability to investigate algorithmic systems and hold parties accountable for violations of European laws and human rights’
A discussion on diversity in news recommenders and governance of algorithmic recommendation systems clearly highlights the issue and the differences in perspective between the USA and Europe when it comes to the primacy of either the First Amendment or the need for governments to step in and create regulatory oversight.
Digital Shelves Initiative
The Digital Shelves initiative is a project of the Netherlands Network for Human Rights Research’s Working Group on Human Rights in the Digital Age. It comprises a set of curated reading lists for people interested in freedom of expression, media and internet freedoms, privacy and data protection, artificial intelligence, and intellectual property, and is currently being expanded.