Will Perrin and Lorna Woods have written a series of papers advocating a social media 'duty of care' as the key policy mechanism for securing more effective removal of content deemed harmful. The papers respond to the government's Internet Safety Strategy and its Green Paper response.
The proposal has, however, met with criticism, for instance from the Internet law authority Graham Smith.
- 1 Core proposal: Duty of care and regulator for social media providers
- 2 Cycle of harm reduction
- 3 Legal framework
- 4 Justification
- 5 Criticism
- 6 Commentary
- 7 References
Core proposal: Duty of care and regulator for social media providers
- Social media service providers should each be seen as responsible for a public space, much as property owners or operators are in the physical world. In the physical world, Parliament has long imposed statutory duties of care upon property owners or occupiers in respect of people using their places, as well as on employers in respect of their employees. A duty of care is simple, broadly based and largely future-proof. It focusses on the objective and leaves the detail of the means to those best placed to come up with context-appropriate solutions – those who are subject to the duty of care. We suggest this model for the largest social media service providers – a duty of care in respect of their users, enforced in a risk-based manner by a regulator. The duty of care would not apply to online services with their own detailed rules such as the traditional media.
- A statutory duty of care to mitigate certain harms should be imposed on social media service providers with over 1,000,000 users/members/viewers in the UK, in respect of their users/members. These categories of harm are to be specified in statute at a high level of generality. Those under a duty of care would be expected to identify the level of specified harms occurring through the set-up and/or use of their respective platforms and take steps to reduce that level, as set out below. This process would be monitored by an independent regulator. The regulator would be appointed and funded by a share of the revenue from the tax on internet company revenues that the government seems about to introduce.
- Central to the duty of care is the idea of risk. If a service provider targets or is used by a vulnerable group of users (e.g. children), its duty of care is greater and it should have more safeguard mechanisms in place than a service which is, for example, aimed at adults and has community rules agreed by the users themselves (not imposed as part of ToS by the provider) to allow robust or even aggressive communications.
Cycle of harm reduction
- We envisage the harm reduction cycle looking something like this:
- Each service provider works with the regulator, consulting civil society, to survey the extent and occurrence of harms, as set out by Parliament, in respect of the services provided by that provider;
- Each service provider then produces and implements a plan to reduce the harms, having consulted the regulator and civil society;
- Periodically, the harms are re-measured, the effectiveness of the plan assessed and, if necessary, further changes to company practices and to tools available to users introduced;
- After a period the harms are measured again as above, new plans are produced and the cycle repeats;
- Progress towards harm reduction is monitored by the regulator, which may take regulatory action if progress is, in the regulator's view, insufficient.
- Action that a provider could take is not limited to takedown notices; it could also include:
- measures to empower users, for example pre-emptive blocking tools in the hands of the user; setting up sub-groups that have different toleration of certain types of language
- effective complaints mechanisms, in respect both of other users and of the company itself
- transparency measures so that it is possible to see the number of complaints, the response, the mechanism by which the complaint was processed (human or automated) and the reasoning
- review systems of company processes that assess them for nudging users to certain sorts of behaviours.
The system would be backed by sanctions. "The regulator would have a range of sanctions from adverse behaviour notices through to administrative fines on the scale of those found in the GDPR. Individuals may be able to bring court action but we emphasise that this should only be in respect of systemic failures and not as a substitute for a civil action in relation to specific items of content."
Legal framework
- In our opinion this is compatible with EU law, in particular the e-Commerce directive. The immunity provisions relate to liability for the content of others and do not absolve providers from any duties of care.
Justification
- The preventive element of the duty of care will reduce the suffering of victims. It may also prevent behaviours reaching a criminal threshold.
- A risk-managed approach only targeting the largest providers preserves freedom of speech. We envisage that platforms may take different approaches, and that a market could arise in which platforms develop aimed at particular groups. Content or speech patterns that are not acceptable on one platform may find a home elsewhere.
- Harms represent external costs generated by the production of the social media service providers' products. The duty of care, by requiring action to prevent harms, internalises these costs to the provider. This makes the market function more efficiently for society on the polluter-pays principle and ultimately drives a more effective market which also benefits providers.
Comparison of legal regimes
- … We surveyed regulatory regimes for communications, the digital economy, health and safety and the environment.
- There are many similarities between the regimes we surveyed. One key element of many of the regulators' approaches is that changes in policy take place in a transparent manner and after consultation with a range of stakeholders. Further, all have some form of oversight and enforcement – including criminal penalties – and the regulators responsible are independent from both Parliament and industry. Breach of statutory duty may also lead to civil action. These matters of standards and of redress are not left purely to the industry.
- While the telecommunications model may seem appropriate given the telecommunications sector's closeness to social media, it may not be the most appropriate model, for four reasons:
- the telecommunications regime gives the regulator the power of stopping the operator from providing the service itself, and not just problematic elements in relation to the service - we question whether this is appropriate in the light of freedom of speech concerns;
- the telecommunications regime specifies the conditions with which operators must comply, albeit at a level of some generality – we feel that this is too ‘top-down’ for a fast moving sector and that allowing operators to make their own assessment of how to tackle risks means that solutions may more easily keep up with change, as well as be appropriate to the service;
- a risk-based approach could also allow the platforms to differentiate between different types of audience – and perhaps to compete on that basis; and
- the telecommunications regime is specific to the telecommunications context, whereas the data and workplace regimes are designed to cover the risks entailed by broader swathes of general activity.
- Although the models have points of commonality, particularly in the approach of setting high-level goals and then relying on the operators to make their own decisions about how best to achieve them, there are perhaps aspects of individual regimes that are worth highlighting:
- the data protection and HSE regimes highlight that there may be differing risks, with two consequences:
- that measures should be proportionate to those risks; and
- that in areas of greater risk there may be greater oversight.
- the telecoms regime emphasises the importance of transparent complaints mechanisms, including complaints against the operator (and not just other users);
- the environmental regime introduces the ideas of prevention and prior mitigation, as well as the possibility for those under a duty to be liable for the activities of others (eg in the case of fly-tipping by a contractor); and
- the Digital Economy Act has mechanisms in relation to effective sanctions when the operator may lie outside the UK’s jurisdiction.
Duty of care
- The idea of a “duty of care” is straightforward in principle. A person (including companies) under a duty of care must take care in relation to a particular activity as it affects particular people or things. If that person does not take care and someone comes to harm as a result then there are legal consequences. A duty of care does not require a perfect record – the question is whether sufficient care has been taken. A duty of care can arise in common law (in the courts) or, as our discussion of regulatory models above shows, in statute (set out in a law). It is this latter statutory duty of care we envisage. For statutory duties of care, as we set out above, while the basic mechanism may be the same, the details in each statutory scheme may differ – for example the level of care to be exhibited, the types of harm to be avoided and the defences available in case of breach of duty.
Comparison with public spaces
- Many commentators have sought an analogy for social media services as a guide for the best route to regulation. A common comparison is that social media services are “like a publisher”. In our view the main analogy for social networks lies outside the digital realm. When considering harm reduction, social media networks should be seen as a public place – like an office, bar, or theme park. Hundreds of millions of people go to social networks owned by companies to do a vast range of different things. In our view, they should be protected from harm when they do so.
- The law has proven very good at this type of protection in the physical realm. Workplaces, public spaces, even houses in the UK owned or supplied by companies have to be safe for the people who use them. The law imposes a "duty of care" on the owners of those spaces. The company must take reasonable measures to prevent harm. While the company has freedom to adopt its own approach, the issue of what is 'reasonable' is subject to the oversight of a regulator, with recourse to the courts in case of dispute. If harm does happen, the victim may have rights of redress in addition to any enforcement action that a regulator may take against the company. We emphasise that this should only be in respect of systemic failures and not as a substitute for a civil action in relation to specific items of content. By making companies invest in safety the market works better, as the company bears the full costs of its actions rather than getting an implicit subsidy when society bears the costs.
Key harms targeted in the approach
- We propose setting out the key harms that qualifying companies have to consider under the duty of care, based in part on the UK Government's Internet Safety Green Paper. We list here some areas that are already a criminal offence – the duty of care aims to prevent an offence happening and so requires social media service providers to take action before activity reaches the level at which it would become an offence.
- Harmful threats – statements of an intention to cause pain, injury, damage or other hostile action such as intimidation; psychological harassment; threats of a sexual nature; threats to kill; racial or religious threats, known as hate crime; and hostility or prejudice based on a person's race, religion, sexual orientation, disability or transgender identity. We would extend the understanding of "hate" to include misogyny.
- Economic harm – financial misconduct, intellectual property abuse
- Harms to national security – violent extremism, terrorism, state sponsored cyber warfare
- Emotional harm – preventing emotional harm suffered by users such that it does not build up to the criminal threshold of a recognised psychiatric injury. For instance through aggregated abuse of one person by many others in a way that would not happen in the physical world (see Stannard on emotional harm below a criminal threshold). This includes harm to vulnerable people – in respect of suicide, anorexia, mental illness etc.
- Harm to young people – bullying, aggression, hate, sexual harassment and communications, exposure to harmful or disturbing content, grooming, child abuse (See UKCCIS Literature Review)
- Harms to justice and democracy – preventing intimidation of people taking part in the political process beyond robust debate; protecting the criminal and trial process (see concerns expressed by the Attorney General and the Committee on Standards in Public Life)
Criticism
Graham Smith examined the proposal, setting out the limits of the analogy between offline and online harms and the applicability of a duty of care:
Words and images may cause distress. It may be said that they can cause psychiatric harm. But even in the two-way scenario of one person injuring another, there is argument over the proper boundaries of recoverable psychiatric damage by those affected, directly or indirectly. Only in the case of intentional infliction of severe distress can pure psychiatric damage be recovered.
The difficulties are compounded in the three-way scenario: a duty of care on a platform to prevent or reduce the risk of one visitor using words that cause psychiatric damage or emotional harm to another visitor. Such a duty involves predicting the potential psychological effect of words on unknown persons. The obligation would be of a quite different kind from the duty on the occupier of a football ground to take care to repair dilapidated terracing, with a known risk of personal injury by fans prising up lumps of concrete and using them as missiles.
It might be countered that the platform would have only to consider whether the risk of psychological or emotional harm exceeded a threshold. But the lower the threshold, the greater the likelihood of collateral damage by suppression of legitimate speech. A regime intended to internalise a negative externality then propagates a different negative externality created by the duty of care regime itself. This is an inevitable risk of extrapolating safety-related duties of care to speech-related harms.
Commentary
The proposal for a duty of care has some merits. It is an attempt to find an objective measure for action by which a regulator or law might seek to judge the activities of online providers. It moves the debate away from the arid territory of whether social media providers are publishers, and whether the limited protections from liability they have ought to be stripped or further limited. As such it can be considered an attempt to depoliticise the debate and move it into a more objective discussion about actions that ought to be taken. For all these reasons, it is at first sight an attractive idea.
However, the proposal may not be the silver bullet to deliver sane Internet regulation that many hope it to be. The duty of care model is claimed to be "simple in principle" but in fact the concept hides a great deal of complexity in the relationships between platforms and their users and in establishing what harm might mean in this context.
Its success hinges on the ability of operators to identify clear risks of clear harms that can be dealt with by proportionate mitigations which do not themselves deliver further harms, such as the unwanted removal of content that does not pose a risk.
The proposal includes some consideration of the risks to differing groups, and a commitment to proportionate responses. However, there remains a worrying presumption inherent within a 'duty of care' that action to limit risks should be the paramount policy driver. This is troubling given that the material largely deals with (the manner of) the expression of views and opinions.
The duty of care approach itself is not so simple to transfer across from other legal regimes. A number of distinctions can be discerned; for instance, duties of care typically assume that:
- There is a direct and clear relationship between the responsible party and the person placed at risk;
- The nature of the harm to be avoided is clear;
- The harm can be managed by the responsible party through their own prior actions;
- It is uncontroversial that the 'harm event' should be avoided, even if there is controversy about who should manage that risk;
- Related to this, nobody is likely to defend their right to act, to contest whether harm has taken place, or to assert that it is legitimate for them to engage in the action deemed to harm another;
- The harm can be clearly established when it takes place, for instance a physical injury or financial cost;
- Duties of care do not usually seek to manage hostile actors who are deliberately carrying out the activity sought to be targeted;
- Typically, the responsible party is not managing relations between two or more third parties.
- Duties of care do not typically impact free expression, except in certain very close relationships, such as employment welfare.
However, the range of risks identified by Perrin and Woods only sometimes comes close to these assumptions. In their list we can identify two kinds of activity:
- Hostile, criminal and clearly unwanted activity, such as spam, fraud, child abuse images
- Activities where the harms are harder to establish, measure and adjudicate, or may be better characterised as wrongdoing, especially in the case of a specific incident. Free expression questions will arise more frequently here, including in relation to copyright infringement, hate speech, racism, misogyny, and the bullying or harassment of individuals
In the first case, platforms in practice will already act as if they had a duty of care, because it is extremely disruptive to their businesses for criminal activity to target their users.
In the second case, platforms will usually ban many kinds of behaviour and content. However, there is a great deal of controversy about where lines are drawn, how effectively material is removed, and whether platforms have proactive duties to remove it. Decisions are likely to be controversial and may be contested.
In all cases, the multiple relationships are a particular challenge, as different actors may feel their rights are abrogated by action taken or by a failure to act. Arguably, there is a "duty of care" owed to each party to resolve their complaint fairly. This is a much more concrete and discernible duty, as the ability of a platform to deal with a complaint, process it fairly with regard to each party and offer further review is much more clearly its responsibility than the initial action taken by a user. Indeed, the policy drivers behind the government's Green Paper include the sense that the processes themselves are where the lack of confidence lies, whether that is confidence that complaints may lead to material being removed, or put back.
Free expression balance
The proposal claims to sidestep free expression concerns, but does not at any point explain why or how this may be the case.
It states that "A risk-managed approach only targeting the largest providers preserves freedom of speech"; and appears to envisage that displacement of speech is a reasonable outcome ("Content or speech patterns that are not acceptable on one platform may find a home elsewhere.").
However this is not reasonable: the audience on any given platform is different. The audience and the publisher have a right to receive and impart information. Thus any restriction imposed through a duty of care must be assessed against its own impacts, and not dismissed on the basis that the expression can go elsewhere.
It is notable that the models chosen for the regulatory comparison were mostly not ones raising particular speech concerns. A comparison between broadcast and press regulation could be instructive, for instance. Broadcast regulation is justified by the power of a very small number of media owners to control the opinions presented to others. The press, however, are regarded as sufficiently diverse for the views themselves not to be regulated, and for governments to keep clear of regulation so that the press is not perceived as controlled by government.
It is unclear why governments should generally be committed to keeping out of press regulation, opting instead for independent self-regulation because of the potential impact on free expression, yet should intervene very directly where the speech of millions of citizens is concerned.
As with other proposals for regulation of social media, the success or failure of a duty of care from a free expression perspective would at least rely on:
- A very tight relationship between the harm to be limited and positive evidence that the activity to be targeted genuinely and consistently creates a harm;
- Strong mitigations to ensure accuracy in the material targeted, and measures to ensure that mistakes are found and dealt with that go beyond the mere presence of an appeals process;
- Avoiding prior restraint;
- Maintaining public confidence that a state regulator was not in fact controlling and censoring opinion online; and
- Other procedural requirements to ensure that platforms deal with and between their customers fairly when engaged in disputes.
It remains unclear that harms to specific individuals, for instance people in vulnerable groups, can necessarily result in requirements to limit the speech of others, where the speech would not normally be harmful.
A duty of care approach could easily become extremely restrictive if action were required because of the mere potential to harm, or if the threshold for evidence of harm were set low.
Conceptually this risk is exacerbated by the fact that, in this proposal, risk and harm are judged by the regulator from an aggregate, collective view, while the actual risk and harm that may reside in a particular event is not necessarily in view as far as the regulator is concerned. In fact, individual events are specifically placed out of view, so that "systemic failures" to "reduce risk" are the only concern of the law.
As discussed above, the relationship between evidence of harm and action is hard to establish, and in practice proxies such as social offence or unacceptability are likely to be the real drivers. This could easily be cloaked by the policy process, which is likely to suffer the general biases of society at large. Recent policy processes have been only too ready to accept links between content and harms on flimsy grounds, or to exaggerate those harms, for instance in copyright and child protection debates.
It is hard to imagine a regulator taking a stance defending the availability of content that society deems unacceptable even when there is no substantial evidential base to link content to actual harms. If the regulator were to limit duties of care to those areas where high risks of harm in normal circumstances really could be objectively established, then the policy could be perceived as a failure.
Furthermore, the full range of issues that ought to be the subject of regulation (particularly self-regulation) are not captured by a notion of harm. Many kinds of content are restricted by convention (kinds of content a platform wishes to be posted) or by the bounds of civility. These relationships and rules need to be subjected to independent external oversight, although this should be independent of government as well as the platforms.
There is little discussion in the papers forming this proposal about the mitigations needed to protect free expression, in common with other papers on Internet regulation. We note that users are often very worried about making appeals and avoid doing so, because they do not understand or fear the consequences. Furthermore there is often no real incentive against over-reaction or inaccuracy in most systems of content classification. These points need considerable discussion, whatever model of regulation may be chosen.
References
- Can Society Rein In Social Media? Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 27 February 2018
- Harm Reduction In Social Media – A Proposal, Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 22 March 2018
- Harm Reduction In Social Media – What Can We Learn From Other Models Of Regulation? Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 4 May 2018
- Reducing harm in social media through a duty of care Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 8 May 2018
- Which social media services should be regulated for harm reduction? Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 8 May 2018
- How Would A Social Media Harm Regulator Work? Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 10 May 2018
- Who should regulate to reduce harm in social media services? Will Perrin and Professor Lorna Woods, carnegieuktrust.org.uk 10 May 2018
- Professor of Internet Law Lorna Woods, University of Essex and William Perrin – written evidence (IRN0047) The Internet: to regulate or not to regulate? Communications Committee, House of Lords 30 May 2018
- Take care with that social media duty of care cyberleagle.com 19 October 2018
- HoL evidence
- HoL evidence
- HoL evidence, para 10; see also para 32 "People would have rights to sue eligible social media service providers under the duty of care; for the avoidance of doubt, a successful claim would have to show a systemic failing rather than be deployed in case of an isolated instance of content. But, given the huge power of most social media service companies relative to an individual we would also appoint a regulator. The regulator would ensure that companies have measurable, transparent, effective processes in place to reduce harm, so as to help avoid the need for individuals to take action in the first place. The regulator would have powers of sanction if they did not."
- HoL evidence
- HoL evidence, paras 20-21
- HoL evidence, para 24
- HoL evidence, para 25-26
- HoL evidence, para 30
- Smith 2018