Government Response to the Internet Safety Strategy Green Paper/Draft code of practice for providers of online social media platforms

See also: Draft transparency reporting template

Annex B – Draft code of practice for providers of online social media platforms[1]

This code provides guidance as required by the Digital Economy Act 2017 section 103. It also provides additional guidance, not required by the Act.

[Code of Practice under section 103]

This code of practice applies to conduct which -

a. Is engaged in by a person online;

b. Is directed at an individual; and

c. Involves bullying or insulting the individual, or other behaviour likely to intimidate or humiliate the individual (referred to as ‘abuse’ for the remainder of this document).

This code does not affect how unlawful conduct is dealt with.

The code of practice may be revised from time to time.

1. “Social Media Providers” should maintain a clear and transparent reporting process to enable individuals to notify providers of the use of their platforms for the conduct set out above. This should include:

  • Capacity for users to report content or conduct which breaches the service’s terms and conditions;
  • Capacity for users to report abuse targeting gender, transgender identity, disability, race, sexual orientation, religion and political views;
  • Guidance for users to report content or conduct which may potentially breach UK law to the relevant authorities;
  • Capacity for reports by users who self-identify as under 18 to be handled appropriately;
  • Processes for reviewing user reports about images, videos, text and other content or conduct;
  • Information on report and review procedures which is made available, and visible, to users;
  • Tools to block or mute the user who has uploaded abusive content, so that they can no longer see posts or have a conversation with the victim;
  • Moderating processes which are resourced to match the platform’s user base;
  • Links to reporting options in appropriate places on the platform so they are regularly seen by the user;
  • Clear age guidelines, including, as standard, a mechanism for users to report underage use where they suspect it is taking place;
  • Scope for the testing and improving of reporting mechanisms based on user feedback and as new products/features are developed;
  • Links for users to access appropriate off-platform support for a range of issues: crime, bullying, mental and physical health and wellbeing, suicide and self-harm;
  • Appropriate mental health and wellbeing training and support in place for all moderators.

Best practice examples of the principle:

  • A triage system to deal with content reports;
  • Capacity to report multiple incidents of abuse;
  • Capacity for non-users to report abusive content/conduct and potentially harmful content, for example parents and teachers to report on behalf of young people;
  • The option of in-line reporting: reporting buttons on the actual content/conduct that the user might want to report;
  • Reporting streams for complaints where abuse targets one or more of the protected characteristics set out in the Equality Act 2010;
  • Prioritisation of reports concerning: suicide or self-harm content or behaviour, credible threats and child safety (under 18 users);
  • Signposting users to websites and public referral tools to report content or conduct which may potentially breach UK law, so that law enforcement and industry can take appropriate action. These include the Counter Terrorism Internet Referral Unit, the True Vision hate crime reporting website and the Internet Watch Foundation (IWF);
  • Using a mix of human and machine moderation with minimum training standards and guidance for moderators of online content;
  • Establishment of effective single points of contact within the company for law enforcement agencies;
  • Tools to unsubscribe or "un-follow" accounts that produce or share offensive material;
  • Inclusion of relevant professionals and users when designing new safety policies;
  • Use of technological tools which prevent users who have been blocked from the platform from attempting to return.

2. “Social Media Providers” should maintain processes for dealing with notifications from users. This should include:

  • Providing information about how reports are dealt with and how the outcome will be communicated, including expected timeline;
  • The removal of content reported on a ‘comply or explain’ basis - processes should notify users about the outcome of a report, when action has been taken, if further information is needed and when no further communication will be provided;
  • Sending users an acknowledgement that their report has been successfully received within 24 hours. The acknowledgement should include further information on the reporting process, including a commitment to act on reports relating to abuse within a certain number of hours and the expected resolution timescales;
  • Support information which is accessible and in one place for users, e.g. in a ‘safety centre’ (or equivalent).

Best practice examples of the principle:

  • Providers consider the most effective way of communicating with users about their reports - this may include using different communication channels or adapting the language used for younger users;
  • Providers work effectively with trusted flaggers, including charities and other user support organisations such as the Revenge Porn Helpline;
  • Additional support including signposting to other sources of guidance is used to help users deal with complex issues, including mental health concerns;
  • Flagging privacy options to users as part of the reporting process;
  • Offering an appeals process to users who disagree with the platforms’ decisions on content removal;
  • Ability for non-users to report abusive content/conduct, for example parents and teachers to report on behalf of young people;
  • Providers anticipate when increased reporting may occur, such as during election periods, and ensure that appropriate resources are available to deal with reports in a timely manner.

3. “Social Media Providers” should include provisions about the above matters in their terms and conditions. Terms and conditions should:

  • Be underpinned by the principle that what is unacceptable offline is also unacceptable online, with recourse to UK law;
  • Be clearly written and easy to understand;
  • Include consequences for users in relation to the violation of terms and conditions;
  • Include or link to information on:
    • Examples of acceptable user content and conduct;
    • Provision to tackle abusive behaviour;
    • Respecting the rights of others;
    • Actions taken to tackle anonymous abuse;
    • Action against content that has been removed and reuploaded;
    • Provision to terminate accounts which are used to abuse others;
    • Where behaviour may amount to a criminal offence;
  • Make reference/link to an explanation of how community guidelines are developed, enforced and reviewed, including information on performance metrics on take-down;
  • Include provision for user privacy settings, including the ability to make a profile not visible to the public.

Best practice examples of the principle:

  • Policies, including the terms and conditions, are expressed in plain English and can be understood by users of all ages;
  • Policies about acceptable user conduct and content are accessible and may be reproduced separately from the main terms and conditions of a platform, for example as part of community standards;
  • Regular reminders of the policies should be presented in engaging formats across different typical user journeys.

4. “Social Media Providers” should give clear explanations to the public about the action taken against the above specified conduct:

  • Users should be made aware of the prevention, identification and consequences of behaviour which is contrary to the policies of the platform. This should include strategies for dealing with users who persistently engage in abusive behaviour, behaviour which may promote risky and dangerous activity or intentional self-harm, or behaviour which may damage other users’ mental health and wellbeing;
  • Platforms should give users explanations on a ‘comply or explain’ basis if content remains after they have reported it;
  • Platforms should consistently enforce the consequences of misconduct as detailed in their policies.

Best practice examples of the principle:

  • Platforms provide education on appropriate online conduct to all users, especially those who breach the platform policies;
  • Information about the consequences of misuse is incorporated in the regular reminders of the platform’s policies.

[Additional Guidance not required under section 103]

The above code of practice also applies to conduct which is directed at groups and businesses. Examples of ‘groups’ include supporters of a football team, pupils of a school, people from a particular town, and supporters of a particular political party.

In addition to the types of abuse detailed above, the code also applies to conduct which negatively impacts mental health and wellbeing. The code of practice will encourage the use of technology to identify potentially harmful online content and behaviours.

In addition to this code, other guidance relevant to “social media providers” includes:

  • the Information Commissioner’s existing and future guidance including the upcoming Age Appropriate Design Code;
  • the UK Council for Child Internet Safety’s Child Safety Online – A practical guide for providers of social media and interactive services.

References

  1. Full text of Annex B reproduced from Government Response to the Internet Safety Strategy Green Paper, gov.uk, May 2018.