Self-regulation and hate speech on social media platforms

Self-regulation and ‘hate speech’ on social media platforms is a briefing paper by ARTICLE 19 setting out the organisation's preferred model for the regulation of social media platforms. It seeks to contribute to the debate on "greater regulation of social media platforms, including calls for such platforms to be considered publishers", and does so "by exploring a possible model for the independent and effective self-regulation of social media platforms."[1]

Main proposal: social media council

ARTICLE 19 therefore suggests exploring a new model of effective self-regulation for social media. This model could take the form of a dedicated “social media council” – inspired by the effective self-regulation models created to support and promote journalistic ethics and high standards in print media. We believe that effective self-regulation could offer an appropriate framework through which to address the current problems with content moderation by social media companies, including ‘hate speech’ on their platforms, providing it also meets certain conditions of independence, openness to civil society participation, accountability and effectiveness. Such a model could also allow for adoption of tailored remedies without the threat of heavy legal sanctions.[2]

Rationale

The social media council proposal is examined in the context of ‘hate speech’. ARTICLE 19 say that it could help elaborate

ethical standards for social media platforms in general, provide remedies, and identify ways to enable exercise of the right to freedom of expression on social media platforms.[3]

Models of regulation

ARTICLE 19 explain that regulation, co-regulation and self-regulation have been applied to different kinds of media:

Statutory or co-regulation models have traditionally been deemed necessary for the broadcast media, wherein the allocation of a scarce natural resource (spectrum) requires the intervention of public authorities in order to create a diverse and pluralistic broadcasting landscape. Self-regulation has been considered the preferred approach for print media; press councils are the typical example of such mechanisms. Self-regulation is considered to be the least restrictive means available through which the press can be effectively regulated and the best system through which high standards in the media can be promoted.[4]

They acknowledge that there are substantial problems with self-regulation:

References to forms of self-regulation as the appropriate approach to deal with content moderation on social media are abundant in recent initiatives in this area, both in the EU and in many States outside of it. However, as discussed in this brief, the proposed mechanisms do not necessarily offer sufficient guarantees for either the independence or effectiveness of self-regulation, or for the protection of freedom of expression.[5]

Press councils

ARTICLE 19 set out the main features of press councils as a parallel example of self-regulation:

Self-regulation of the press typically means some form of national or regional press council, complaints commission or ombudsperson (either acting alone or in conjunction with a press council). Press councils may be funded by the publishing industry alone, by journalists alone or by a combination of both, and sometimes with government assistance (for example, financial assistance).

Press councils publish their codes of conduct with the approval of journalistic and media organisations. Crucially, the press outlets of the country that are members of the press council must commit themselves to these codes of conduct. Sometimes broadcasting organisations do so as well.

Press councils accept complaints from any member of the public who believes that a published article infringes the respective code of conduct. The members of the press council (or, in some cases a complaints committee of the press council) will then adjudicate on complaints received, publish their conclusions and, in some cases, order the publication of their decision or impose a right of reply on the offending outlet. In very few cases, for particularly serious breaches of the code of conduct, press councils can impose financial penalties.

Some press councils comprise only representatives of their member media organisations, while others give representation to the wider community and include a balanced representation of publishers, journalists and the public.

Many press councils see the task of hearing complaints as part of a wider responsibility to defend media freedom. These bodies often publish an annual review discussing media concerns and sometimes advocate for legislative changes related to the media. Others see their role solely as a complaints body.[6]

Criteria for self-regulatory bodies

They identify a number of features necessary for self-regulatory bodies to function properly and with confidence:

ARTICLE 19 has previously identified several requirements for effective self-regulation of the media. We submit that sector-wide effective self-regulatory bodies should:

  • Be independent from government, commercial and special interests;
  • Be established via a fully consultative and inclusive process – the major constitutive elements of their work should be elaborated in an open, transparent, and participatory manner that allows for broad public consultation;
  • Be democratic and transparent in their selection of members and decision-making;
  • Ensure broad representation. It is important that the independence of self-regulatory bodies is ensured, with a composition that includes representatives of civil society;
  • Adopt a code of ethics for the profession or sector they seek to regulate;
  • Have a robust complaints mechanism and clear procedural rules to determine if ethical standards were breached in individual cases, and have the power to impose only moral sanctions; and
  • Work in the service of the public interest, be transparent and accountable to the public.[7]

Limits on government action

A limited degree of state support can be useful in supporting the creation of effective self-regulatory mechanisms, provided that state intervention is limited to creating a legal underpinning for self-regulation and does not threaten the independence of the self-regulatory bodies. By contrast, situations whereby public authorities pressure private companies to define and regulate speech, under the guise of self-regulation or co-regulation, are seriously at odds with international standards on freedom of expression.[8]

Discussion points for a self-regulatory body

To initiate discussions on this new self-regulatory model for social media platforms, ARTICLE 19 suggests that the following issues should be considered when exploring this mechanism:

  • Remit: A Social Media Council (Council) could either be tasked with dealing with a specific issue (such as ‘hate speech’) or be given general jurisdiction over content issues on the social media platforms that are members of the Council;
  • Scope: It could be created on a national level to ensure a sufficient level of proximity and understanding of the relevant community and context, or on an international level or a combination of both;
  • Independence: The Council would have to be independent from any particular social media company and should include representatives from all relevant stakeholders, such as media associations, media regulatory bodies, freedom of expression experts, academia and civil society. In order to avoid an excessive number of representatives, its composition could vary according to areas of intervention;
  • Commitments: Social media platforms would have to commit to providing an appropriate level of information on their internal content moderation practices to the Council of which they are a member. They would also have to commit to accepting the decisions of their Council as binding;
  • Charter of ethics/Code of conduct: As a fundamental part of their remit, Councils would have to adopt a Charter of Ethics for social media. This document would have to be adopted through a transparent and open process, with broad consultations involving all relevant stakeholders, including civil society organisations. At a minimum, a Charter of Ethics would include a commitment to comply with international human rights standards, including on freedom of expression and due process;
  • Decision-making: The Council could adopt recommendations – either on its own initiative or at the request of its members – to further clarify the interpretation and application of ethical standards in given areas. Such recommendations would have to be adopted through a transparent process, open to participation from all relevant stakeholders and civil society. For instance, Councils could adopt a recommendation on how to include robust notice and counter-notice procedures in social media platforms’ terms and conditions;
  • Complaints procedures: The Council could be empowered to receive complaints from individual users, provided that all possibilities of remedying the issue with the social media company (either through ombudspersons or other flagging procedures) have already been exhausted. The Council would hold a hearing and reach a decision, including the possibility of a sanction that seeks to promote rather than restrict speech (such as a right of reply, an apology or the publication of its decision);
  • Other functions: The Council could also be tasked with providing advice on ethical standards to social media platforms’ own ombudspersons, staff, and departments in charge of content regulation;
  • Funding: The Council would have to benefit from a stable and appropriate level of funding to ensure its independence and capacity to operate. Social media platforms would have to commit to providing at least part of its income on a multi-annual basis, while additional resources could be provided by other stakeholders or philanthropic organisations; and
  • Accountability: The Council would have to ensure its accountability to the public. In particular, it would have to make its work and decisions readily available to the public – including, of course, through social media.[9]

Current self-regulation

ARTICLE 19 finds that in general, content moderation and removal policies by social media companies are problematic for several reasons, including:

  • Lack of respect for human rights standards: Although social media companies, as private businesses, are not directly bound by international human rights law, they are increasingly encouraged to implement international standards on freedom of expression in all their practices related to content moderation. Available information shows that the internal content moderation policies of some companies address complex and varied factors and may include a certain degree of consideration for freedom of expression and other fundamental rights. It is, however, also clear that certain decisions to suppress content are in violation of freedom of expression standards;
  • Lack of legal certainty: The removal of content on social media platforms is entirely unilateral and the process does not respect the requirements of due process of law. ARTICLE 19 has previously recommended that, as a matter of principle, social media companies and all hosting service providers should only be required to remove content following an order issued by an independent and impartial court or other adjudicatory body that has determined that the material at issue is unlawful. This is because the courts apply laws that have been democratically adopted, under all the guarantees of due process of law. Through modalities that vary from country to country, they are also bound to apply human rights standards, on the basis of national, regional, and international law, to the cases they preside over. This provides a much greater degree of legal certainty. We recognise, however, that it may be too burdensome and costly for the courts to examine all applications for content removal, given the high volume of such requests. However, at a minimum, procedures set up by social media companies should fully comply with certain basic due process requirements;
  • Lack of accountability or transparency over decision-making: The majority of the decision-making processes and practices of social media platforms underpinning their content policies – including the use of automated decision-making processes – remain opaque. Existing procedures do not ensure sufficient accountability for those decisions. Although some progress has been made with regards to transparency reporting over the years, there is still too little information available about the way in which social media platforms apply their Terms of Service in various circumstances. It has been acknowledged that this lack of transparency with regards to their decision-making processes can obscure discriminatory practices or political pressure affecting the companies’ practices;
  • Lack of consistency in stakeholder engagement: Many of the initiatives undertaken by social media platforms in this area need more meaningful participation of civil society organisations and other stakeholders. It has been repeatedly suggested that practices and decision-making regarding content on dominant social media platforms are in the public interest, and it has been highlighted that “the shaping of those policies might be more effective if done through collective knowledge and debate.” Experience has shown that dominant social media platforms can be sensitive to public outrage and, when faced with large-scale protests, are willing to reconsider their decisions to remove specific content. This has generally occurred in cases where legitimate content has been removed on the basis of community guidelines or Terms of Service, in particular content depicting nudity or violence, and the removal of the content has generated a significant public backlash. However, there are many instances whereby lawful content, removed by these platforms, is not defended by a massive public mobilisation and is not supported by government representatives or celebrities. The lack of consistency by social media platforms is also increasingly driven by pressures exerted by advertisers, who wish to avoid their brand’s image being tarnished through association with certain types of content.[10]

Current government policies for self-regulation

Regulated self-regulation

There have been some initiatives to establish regulated self-regulatory agencies for social media.

This mechanism has been adopted in Germany under the 2017 Network Enforcement Act (NetzDG). The NetzDG threatens social networks with fines of up to EUR 50 million if they do not remove “clearly illegal” content within 24 hours of a complaint (or within seven days where the content’s illegality is not obvious).

The NetzDG also provides for the recognition, by the Ministry of Justice, of “regulated self-regulatory agencies.” The role of such agencies, which would be financed by social media companies, will be to determine whether a given piece of content is in violation of the law and should be removed from the platform. Recognition by the Ministry of Justice is contingent on conditions such as the independence of the self-regulatory agency, the expertise of the agency staff who would act as decision-makers, and the agency’s capacity to reach a decision within seven days. This might seem like a step towards establishing a self-regulatory mechanism for social media platforms, underpinned by legal statute. ARTICLE 19, however, finds that the guarantees provided for in the NetzDG are insufficient to ensure the independence and effectiveness required for an effective model of self-regulation and for the protection of freedom of expression.[11]

Codes of conduct

References

  1. ARTICLE 19, Self-regulation and ‘hate speech’ on social media platforms, article19.org, March 2018
  2. Ibid., pp. 4-5
  3. Ibid., p. 7
  4. Ibid., pp. 9-10
  5. Ibid., p. 6
  6. Ibid., p. 10
  7. Ibid., pp. 11-12
  8. Ibid., pp. 11-12
  9. Ibid., pp. 21-22
  10. Ibid., pp. 15-16
  11. Ibid., p. 18