Online Harms white paper

See also: Digital Charter; Internet Safety Strategy

An Online Harms white paper was published in April 2019 by the Department for Digital, Culture, Media and Sport (DCMS) and the Home Office. As expected, the core of the proposal is a 'Duty of Care' for social media companies, covering illegal and 'harmful' content, backed by a regulator and sanctions.

The scope of the proposal is very wide, encompassing nearly any online communication tool.

Content

Scope: what is regulated

The scope of what is to be regulated is very wide.

  • The regulatory framework will apply to companies that provide services or tools that allow, enable or facilitate users to share or discover user-generated content, or interact with each other online.
  • These services are offered by a wide range of companies, including start-ups and SMEs, and other organisations such as charities[1] ...

30. These services are offered by a very wide range of companies of all sizes, including social media platforms, file hosting sites, public discussion forums, messaging services and search engines[2] ....

4.3 ... companies of all sizes will be in scope of the regulatory framework. The scope will include companies from a range of sectors, including social media companies, public discussion forums, retailers that allow users to review products online, along with non-profit organisations, file sharing sites and cloud hosting providers. [3] ...

4.1 Harmful content and behaviour originates from and migrates across a wide range of online platforms or services, and these cannot readily be categorised by reference to a single business model or sector. Focusing on the services provided by companies, rather than their business model or sector, limits the risk that online harms simply move and proliferate outside of the ambit of the new regulatory framework.[4]

4.2 There are two main types of online activity that can give rise to the online harms in scope or compound their effects:

  • Hosting, sharing and discovery of user-generated content (e.g. a post on a public forum or the sharing of a video).
  • Facilitation of public and private online interaction between service users (e.g. instant messaging or comments on posts).[5]

The proposal seems to have stepped away from attempting to regulate the content of encrypted communications, although such channels are not out of scope:

4.7 Reflecting the importance of privacy, the framework will also ensure a differentiated approach for private communication, meaning any requirements to scan or monitor content for tightly defined categories of illegal content will not apply to private channels.[6]

Scope: harms definitions

Table 1: Online harms in scope [7]

Harms with a clear definition:
  • Child sexual exploitation and abuse.
  • Terrorist content and activity.
  • Organised immigration crime.
  • Modern slavery.
  • Extreme pornography.
  • Revenge pornography.
  • Harassment and cyberstalking.
  • Hate crime.
  • Encouraging or assisting suicide.
  • Incitement of violence.
  • Sale of illegal goods/services, such as drugs and weapons (on the open internet).
  • Content illegally uploaded from prisons.
  • Sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18).

Harms with a less clear definition:
  • Cyberbullying and trolling.
  • Extremist content and activity.
  • Coercive behaviour.
  • Intimidation.
  • Disinformation.
  • Violent content.
  • Advocacy of self-harm.
  • Promotion of Female Genital Mutilation (FGM).

Underage exposure to legal content:
  • Children accessing pornography.
  • Children accessing inappropriate material (including under 13s using social media and under 18s using dating apps; excessive screen time).


2.4 The following harms will be excluded from scope:

  • All harms to organisations, such as companies, as opposed to harms suffered by individuals. This excludes harms relating to most aspects of competition law, most cases of intellectual property violation, and the organisational response to many cases of fraudulent activity. The government is leading separate initiatives to tackle these issues. For example, the Joint Fraud Taskforce is leading an ambitious programme of work to tackle fraud, including online fraud, through partnership between banks, law enforcement and government.
  • All harms suffered by individuals that result directly from a breach of the data protection legislation, including distress arising from intrusion, harm from unfair processing, and any financial losses. Box 16 explains how the UK’s legal framework provides protection against online harms linked to data breaches.
  • All harms suffered by individuals resulting directly from a breach of cyber security or hacking. These harms are addressed through the government’s National Cyber Security Strategy.
  • All harms suffered by individuals on the dark web rather than the open internet. These harms are addressed in the government’s Serious and Organised Crime Strategy. A law enforcement response to criminality on the dark web is considered the most effective response to the threat. As set out in the strategy, the government continues to invest in specialist law enforcement skills and capability.

Vision for the Internet

12. Our vision is for:

  • A free, open and secure internet.
  • Freedom of expression online.
  • An online environment where companies take effective steps to keep their users safe, and where criminal, terrorist and hostile foreign state activity is not left to contaminate the online space.
  • Rules and norms for the internet that discourage harmful behaviour.
  • The UK as a thriving digital economy, with a prosperous ecosystem of companies developing innovation in online safety.
  • Citizens who understand the risks of online activity, challenge unacceptable behaviours and know how to access help if they experience harm online, with children receiving extra protection.
  • A global coalition of countries all taking coordinated steps to keep their citizens safe online.
  • Renewed public confidence and trust in online companies and services.[8]

Duty of care

3.1 The government will establish a new statutory duty of care on relevant companies to take reasonable steps to keep their users safe and tackle illegal and harmful activity on their services.


3.3 This statutory duty of care will require companies to take reasonable steps to keep users safe, and prevent other persons coming to harm as a direct consequence of activity on their services. This broader application of the duty, beyond simply users of a particular service, recognises that in some cases the victims of harmful activity – victims of the sharing of non-consensual images, for example – may not themselves be users of the service where the harmful activity took place. This duty will apply to all of the harms included in the scope of the White Paper, as set out below.[9]

18. All companies in scope of the regulatory framework will need to be able to show that they are fulfilling their duty of care. Relevant terms and conditions will be required to be sufficiently clear and accessible, including to children and other vulnerable users. The regulator will assess how effectively these terms are enforced as part of any regulatory action.[10]


4.5 ... the duty of care model will reflect the diversity of organisations in scope, their capacities, and what is technically possible in terms of proactive measures, including for those providing ancillary services such as caching (the process of temporarily storing data in either a software or hardware ‘cache’) ...[11]

7.4 As an indication of their compliance with their overarching duty of care to keep users safe, we envisage that, where relevant, companies in scope will:

  • Ensure their relevant terms and conditions meet standards set by the regulator and reflect the codes of practice as appropriate.
  • Enforce their own relevant terms and conditions effectively and consistently.
  • Prevent known terrorist or CSEA content being made available to users.
  • Take prompt, transparent and effective action following user reporting.
  • Support law enforcement investigations to bring criminals who break the law online to justice.
  • Direct users who have suffered harm to support.
  • Regularly review their efforts in tackling harm and adapt their internal processes to drive continuous improvement.[12]

Freedom of Expression

5.12 The regulator will also have an obligation to protect users’ rights online, particularly rights to privacy and freedom of expression. It will ensure that the new regulatory requirements do not lead to a disproportionately risk averse response from companies that unduly limits freedom of expression, including by limiting participation in public debate. Its regulatory action will be required to be fair, reasonable and transparent.[13]

General Monitoring

3.12 The regulator will not compel companies to undertake general monitoring of all communications on their online services, as this would be a disproportionate burden on companies and would raise concerns about user privacy. The government believes that there is, however, a strong case for mandating specific monitoring that targets where there is a threat to national security or the physical safety of children, such as CSEA and terrorism.

Terms and Conditions

18. Relevant terms and conditions will be required to be sufficiently clear and accessible, including to children and other vulnerable users. The regulator will assess how effectively these terms are enforced as part of any regulatory action.[14]


2.13 At present many online companies rely on using their terms and conditions as the basis by which to judge complaints. In practice however, companies’ terms and conditions are often difficult for users to understand, and safety policies are not consistent across different platforms, with take-down times, description of harms and reporting processes varying. A series of investigations have highlighted the risk of serious shortcomings in the training, working conditions and support provided for content moderators.


2.14 There is no mechanism to hold companies to account when they fail to tackle breaches. There is no formal, wide-reaching industry forum to improve coordination on terms and conditions. The absence of clear standards for what companies should do to tackle harms on their services makes it difficult for users to understand or uphold their rights.[15]

The Regulator

  • An independent regulator will implement, oversee and enforce the new regulatory framework. It will have sufficient resources and the right expertise and capability to perform its role effectively.
  • The regulator will also have broader responsibilities to promote education and awareness-raising about online safety, and to promote the development and adoption of safety technologies to tackle online harms.
  • The regulator will take a risk-based approach, prioritising action to tackle activity or content where there is the greatest evidence or threat of harm, or where children or other vulnerable users are at risk.
  • To support this, the regulator will undertake and commission research to improve the evidence base, working closely with UK Research and Innovation (UKRI) and other partners.
  • The regulator will take a proportionate approach, expecting companies to do what is reasonable, depending on the nature of the harm and the resources and technology available to them.
  • The regulator will have a legal duty to pay due regard to innovation, and to protect users’ rights online, being particularly mindful to not infringe privacy and freedom of expression.
  • The government is consulting on whether the regulator should be a new or existing body. The regulator will be funded by the industry in the medium term, and the government is exploring options such as fees, charges or an industry levy to put it on a sustainable footing.[16]

Capacity

5.20 The new regulator will require the capacity to understand how online technology and platforms operate, and collect, analyse and act upon the relevant data submitted by companies whose services are in scope. It will also require sufficient capacity to undertake research and horizon scanning to ensure the regulatory requirements keep pace with innovation and the emergence of new harms.[17]

Functions

5.2 The regulator’s functions will include:

  • Setting out what companies need to do to fulfil the duty of care, including through codes of practice.
  • Establishing a transparency, trust and accountability framework, backed by information-gathering powers, to assess companies’ compliance with the duty of care and their own relevant terms and conditions.
  • Providing support to start-ups and SMEs to help them fulfil their legal obligations in a proportionate and effective manner.
  • Overseeing the implementation of user redress mechanisms.
  • Taking prompt and effective enforcement action in the event of non-compliance.
  • Promoting education and awareness-raising about online safety to empower users to stay safe online.
  • Promoting the development and adoption of safety technologies to tackle online harms.
  • Undertaking and commissioning research to improve our understanding of online harms and their impacts on individuals and society.[18]

Risk-based approach

5.3 The government will require the regulator to adopt a risk-based approach, prioritising regulatory action to tackle harms that have the greatest impact on individuals or wider society. This will shape the development of codes of practice, monitoring and review of online harms, the regulator’s work with industry to develop technological solutions, and enforcement action.[19]

Proportionality

5.4 The regulator will focus on companies where there is the greatest risk of harm, based on factors such as the type of service – for example, services that enable adult users to contact children, services that have large user bases, and services that target or are popular with vulnerable groups of users. It will also use evidence of the actual incidence of harms on different services and the safety track record of different companies to prioritise its resources. The regulator will use its powers to conduct thematic reviews, undertake targeted horizon scanning and investigate specific issues to develop its understanding of the risk landscape.


5.6 ... When assessing compliance, the regulator will need to consider whether the harm was foreseeable, and therefore what is reasonable to expect a company to have done.[20]


5.7 The regulator will take account of the capacity of companies to meet regulatory requirements, including the reach of their platforms in terms of user-base and the severity of the harms. This proportionate approach will also be enshrined in the legislation by making clear that companies must do what is ‘reasonably practicable’...[21]

Box 26: Regulation can impose a disproportionate burden on smaller companies. Badly designed regulation can stifle innovation by giving an advantage to large companies that can handle compliance more easily. We are determined that this regulatory framework should provide strong protection for our citizens while avoiding placing an impossible burden on smaller companies. We will take five key steps to achieve this:

  1. A proportionate approach.
  2. A duty of innovation.
  3. Making compliance straightforward.
  4. Using technology.
  5. Minimising compliance costs.[22]

3.4 A key element of the regulator’s approach will be the principle of proportionality. Companies will be required to take action proportionate to the severity and scale of the harm in question. The regulator will be required to assess the action of companies according to their size and resources, and the age of their users.


3.5 The regulatory approach will impose more specific and stringent requirements for those harms which are clearly illegal, than for those harms which may be legal but harmful, depending on the context.


3.6 Companies must fulfil their new legal duties. The regulator will set out how to do this in codes of practice. The codes will outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.[23]

Powers to obtain information and require reports

5.13 The new regulator will take an evidence-based approach to regulatory activity. It will need to understand the potential impact of technological developments on the companies it regulates, as well as users’ experiences of harm. To support this, we expect that it will run a regular programme of user consultation, in-depth research projects, and horizon scanning activity. It will work with companies to ensure that academics have access to company data to undertake research, subject to suitable safeguards. This dynamic approach to evidence gathering will help the regulator to assess the changing nature of harms and the risks associated with them, and of the places and manner in which they manifest online.


5.14 The regulator will work closely with UKRI to ensure support for targeted research into online harms, and to develop the collective understanding of online harms and the evidence base, building on the work of the UKCIS Evidence Group. This will include working with relevant aspects of UKRI’s Digital Economy Theme – a partnership between the Engineering and Physical Sciences Research Council (EPSRC), the Arts and Humanities Research Council (AHRC), the Economic and Social Research Council (ESRC) and Innovate UK.[24]

3.17 To inform its reports and to guide its regulatory action, the regulator will have the power to require annual reports from companies covering the following areas:

  • Evidence of effective enforcement of the company’s own relevant terms and conditions, which should reflect guidance issued by the regulator in its codes of practice.
  • Processes that the company has in place for reporting illegal and harmful content and behaviour, the number of reports received and how many of those reports led to action.
  • Proactive use of technological tools, where appropriate, to identify, flag, block or remove illegal or harmful content.
  • Measures and safeguards in place to uphold and protect fundamental rights, ensuring decisions to remove content, block and/or delete accounts are well-founded, especially when automated tools are used and that users have an effective route of appeal.
  • Where relevant, evidence of cooperation with UK law enforcement and other relevant government agencies, regulatory bodies and public agencies.
  • Details of investment to support user education and awareness of online harms, including through collaboration with civil society, small and medium sized enterprises (SMEs) and other companies.[25]

3.20 As well as the power to require annual reports from companies, the regulator will have the power to require additional information from them to inform its oversight or enforcement activity, and to establish requirements to disclose information. It may also undertake thematic reviews of areas of concern, for example a review into the treatment of self-harm or suicide related content. The regulator will have the power to require companies to share research that they hold or have commissioned that shows that their activities may cause harm.[26]

Enforcement powers

  • The regulator will have a range of enforcement powers to take action against companies that fail to fulfil their duty of care. These will include the power to issue substantial fines.
  • We are consulting on which enforcement powers the regulator should have at its disposal, particularly to ensure a level playing field between companies that have a legal presence in the UK, and those who operate entirely from overseas.
  • In particular, we are consulting on powers that would enable the regulator to disrupt the business activities of a non-compliant company, measures to impose liability on individual members of senior management, and measures to block non-compliant services.
  • Companies will continue to be liable for the presence of illegal content or activity on their services, subject to existing protections.[27]

6.4 There are a number of enforcement powers that will be an essential part of the new regulator’s toolkit. These powers have been well tested in numerous other regulatory regimes. These core powers will include:

  • Issuing civil fines for proven failures in clearly defined circumstances. Civil fines can be tied into metrics such as annual turnover, volume of illegal material, volume of views of illegal material, and time taken to respond to the regulator.
  • Serving a notice to a company that is alleged to have breached standards, and setting a timeframe to respond with an action plan to rectify the issue.
  • Requiring additional information from the company regarding the alleged breach.
  • Publishing public notices about the proven failure of the company to comply with standards.


6.5 However, because of the particularly serious nature of some of the harms in scope, the global nature of many online services and the weak economic incentives for companies to change their behaviour, we think it is likely the regulator will need additional powers at its disposal. These measures will be more contentious because of either challenges around their technical feasibility or the potential impact on companies and the wider economy. We are therefore consulting on these options alongside this White Paper:

  • Disruption of business activities. In the event of extremely serious breaches, such as a company failing to take action to stop terrorist use of their services, it may be appropriate to force third party companies to withdraw any service they provide that directly or indirectly facilitates access to the services of the first company, such as search results, app stores, or links on social media posts. These measures would need to be compatible with the European Convention on Human Rights.
  • ISP blocking. Internet Service Provider (ISP) blocking of non-compliant websites or apps – essentially blocking companies’ platforms from being accessible in the UK – could be an enforcement option of last resort. This option would only be considered where a company has committed serious, repeated and egregious violations of the outcome requirements for illegal harms, failing to maintain basic standards after repeated warnings and notices of improvement. Deploying such an option would be a decision for the independent regulator alone. While we recognise that this would have technical limitations, it could have sufficient impact to act as a powerful deterrent. The British Board of Film Classification (BBFC) will have this power to address non-compliance when the requirements for age verification on online pornography sites come into force. We are exploring a range of options in this space, from a requirement on ISPs to block websites or apps following notification by the regulator, through to the regulator issuing a list of companies that have committed serious, repeated and egregious violations, which ISPs could choose to block on a voluntary basis.
  • Senior management liability. We are exploring possible options to create new liability for individual senior managers. This would mean certain individuals would be held personally accountable in the event of a major breach of the statutory duty of care. This could involve personal liability for civil fines, or could even extend to criminal liability. In financial services, the introduction of the Senior Managers & Certification Regime has driven a culture change in risk management in the sector. Another recent example of government action is establishing corporate offences of failure to prevent the criminal facilitation of tax evasion. Recent changes to the Privacy and Electronic Communications Regulations (PECR) provide powers to assign liability to a specific person or position within an organisation. However, this is as yet largely untested. There are a range of options for how this could be applied to companies in scope of the online harms framework, and a number of challenges, such as identifying which roles should be prescribed and whether this can be proportionate for small companies.[28]


6.9 The new regulatory regime will need to handle the global nature of both the digital economy and many of the companies in scope. The law will apply to companies that provide services to UK users. We will design the regulator’s powers to ensure that it can take action against companies without a legal presence in the UK, including blocking platforms from being accessible in the UK as a last resort. Where companies do not have a legal presence in the UK, close collaboration between government bodies, regulators and law enforcement overseas, in the EU and further afield, will be required.


6.10 We are also considering options for the regulator, in certain circumstances, to require companies which are based outside the UK to appoint a UK or EEA-based nominated representative...


6.11 It is vital that the regulator takes an international approach. Where similar regulators and legal systems are in place in other countries, the regulator will lead engagement with its international counterparts. Having these relationships will support the UK’s ability to put pressure on companies whose primary base is overseas.[29]

Role of Parliament

3.31 It will be important to ensure that Parliament is able to scrutinise the regulator’s work. Mechanisms for achieving this will depend in part on whether the regulator is a new or existing body but are likely to include, for example, a duty on the regulator to lay its annual report and audited accounts before Parliament. The regulator will also have a general responsibility to provide Parliament with information about its work, as requested.


3.32 In addition, we will consider what role Parliament should have in relation to the regulator’s codes of practice. Parliament’s role in relation to codes of practice and guidance issued by other regulators varies across different regulatory regimes, ranging from formal approval to no specific role. We will consider options for the role of Parliament as we develop these proposals in more detail.[30]

Challenging decisions of the regulator

6.13 Companies and others must have confidence that the regulator is acting fairly and within its powers. They will have the ability to seek judicial review of the regulator’s actions and decisions through the High Court. We will also seek views through the consultation about whether there should be another statutory mechanism of review, which would allow the use of a tribunal other than the High Court, and what bar should be set for appeals through this route.[31]

User Redress

3.26 ... To fulfil the new duty of care, we will expect companies, where appropriate, to have an effective and easy-to-access complaints function, allowing users to raise either concerns about specific pieces of harmful content or activity, or wider concerns that the company has breached its duty of care. Users should receive timely, clear and transparent responses to their complaints, and there must be an internal appeals function. The regulator will have oversight of these processes, including through transparency information about the volume and outcome of complaints, and the power to require improvements where necessary.


3.27 In addition to the internal appeals processes, we recognise that independent review or resolution mechanisms may be appropriate in some circumstances. This would increase the accountability of companies and help rebuild users’ trust. We are consulting on the following option:

  • Whether a provision should be made in legislation for designated bodies to bring ‘super complaints’ to the regulator for consideration, in specific and clearly evidenced circumstances. This could be an important safeguard in the user redress process and we are also consulting on when such complaints would be appropriate and most effective, and on the bodies or groups that may be empowered to bring them.


3.30 The regulator’s primary role in the user redress process will be to oversee the requirement on relevant companies to have appropriate and effective internal complaints processes, including consideration of whether there should be an appeals function in certain circumstances. The regulator would also determine any ‘super complaints’ process and designate bodies. We do not envisage a role for the regulator itself in determining disputes between individuals and companies, but where users raise concerns with the regulator, it will be able to use this information as part of its consideration of whether there may be systemic failings which justify enforcement action. We will also require the regulator to take the interests of users into consideration.[32]

Commentary

The rights of Internet users are not discussed in this paper.

  • Absence of discussion of free expression, except the assertion that it will be considered.
  • Absence of discussion of due process and the right to be heard, except a passing mention of "appeals".
  • Absence of discussion of the tension created by laws that restrict content and activity that is otherwise legal.
  • Absence of discussion of the need to resolve complaints that have multiple sides, e.g. poster, complainant and audience.
  • Problems with scope.

There is little recognition that the measure seeks to regulate Internet users' speech via platforms. Some of the consequences of such an approach are mentioned, such as centralising the power of platforms and negatively impacting innovation.

Regulation of Political Campaigning Online

Overall, the white paper seems to endorse widening or strengthening the Electoral Commission's remit to regulate the online political landscape. This is problematic. For example:

  • The Disinformation Duty of Care (III; 7.28) suggests that the transparency of political advertising would be increased to meet 'any requirements in electoral law'. Electoral law covers electoral spending, so any transparency requirement could only be approached by looking at campaign spend.
  • Ultimately this is not sufficient: one of the key benefits of spending money on online campaigning is the ability to target resources more efficiently. A different metric than campaign spend alone is needed, and therefore a different instrument than electoral law alone.

Trails for content

The white paper was expected to cover:

  • age verification for social media companies[33]
  • intermediary liability for online hosting companies[34]
  • social media transparency reports[35]
  • children's data[36]
  • incitement to crime on social media[37]
  • anonymity[38]
  • technology overuse/addiction[39]
  • online advertising[40]
  • safety-by-design guidelines[41]

References


  1. Page 49
  2. Page 8
  3. Page 49
  4. Page 49
  5. Page 49
  6. Page 50
  7. Page 31
  8. Page 6
  9. Page 42
  10. Page 7
  11. Pages 49-50
  12. Page 64
  13. Page 56
  14. Page 7
  15. Page 38
  16. Page 53
  17. Page 58
  18. Page 54
  19. Page 54
  20. Page 54
  21. Page 55
  22. Pages 55-56
  23. Page 42
  24. Page 56
  25. Page 44
  26. Page 45
  27. Page 59
  28. Page 59-60
  29. Page 61
  30. Page 47
  31. Page 62
  32. Page 46
  33. Hansard, 2018-11-05
  34. Hansard, 2018-10-29
  35. Hansard, 2018-10-29
  36. Hansard, 2018-10-26
  37. Hansard, 2018-09-11
  38. Hansard, 2018-10-23
  39. Hansard, 2018-10-23
  40. Hansard, 2018-10-29
  41. Hansard, 2018-12-04