Keeping Consumers Safe Online

Keeping Consumers Safe Online is a report written by Communications Chambers and commissioned by Sky.[1]

Key recommendations

First, Government should include an accountability framework for online content intermediaries in the planned White Paper on online harms and safety. This should provide for:

  • a Code of Practice that describes desired standards across a range of content types, and procedural expectations of platforms, including Transparency Reports; and
  • an oversight body responsible for developing the Code and with the means to deploy incentives and sanctions to encourage the Code’s take-up.

Second, intermediaries should work together to assess the potential for a co-regulatory body to provide independent oversight of intermediaries’ content policies, with buy-in from most platforms with significant numbers of UK users. It would be sensible for industry to consider whether such a body could operate across multiple jurisdictions, and consequently help address regulatory concerns in other countries as well as the UK.

Finally, Government should consider options for a statutory oversight body, in case the industry option does not make sufficient progress within a reasonable time period.[2]

Purpose of regulation

The overarching policy goal is to establish new norms about acceptable behaviours online, the rights and responsibilities of different users, and the role of intermediaries in balancing those rights. Regulation is needed if markets face systemic problems and social costs that are not fully internalised.

The arguments presented here suggest that such problems are inherent in online content markets. Regulation should be considered with the specific purpose of:

  • clarifying what consumers should be able to expect from intermediaries, in their handling of harmful and illegal content;
  • ensuring intermediaries’ governance of online content is proportionate and accountable, and takes a fair and responsible approach to balancing rights;
  • in pursuing these objectives, recognising differences between intermediaries of varying size and different business models, and the need for regulatory certainty and an outcomes-based approach.[3]

Ruling out independent self-regulation

It seems unlikely that a self-regulatory approach will now be seen as a sufficient response to these challenges; any industry-led response risks lacking trust and legitimacy. As Ofcom has noted in another context:

“in the absence of alignment between the interests of the industry and the public interest, self-regulatory regimes are unlikely to prove effective when confronted by circumstances which present a tension between the public interest and the corporate interests of industry players.”

Even if such regimes can be established, it may be difficult to assess their effectiveness without, at minimum, independent scrutiny and reporting.[4]

Justification

The report outlines five reasons that platforms have a significant impact on our lives:

First, information goods play a special role in society, supporting democratic engagement, promoting the transmission of ideas, building community identities and enabling economic empowerment. Fundamental rights must be balanced: freedom of expression, respect for privacy, dignity and non-discrimination, protection of intellectual property and the right to conduct business. Both positive and negative externalities in content markets are policy concerns, and a range of legal and regulatory requirements address them in other areas. …

Second, the cost or disadvantage of exiting certain platforms may reduce choice and effectively lock some users in. In theory, if a user does not like Facebook’s content policy or trolling on Twitter, they can go elsewhere, but they may find few of their friends or intended recipients there.

Broadcasting regulation was historically premised in part on the limited number of channels available to audiences. Online, bottlenecks have not disappeared, but they have shifted to the discovery of content, making the rules by which intermediaries control discovery a matter of legitimate public interest.

Third, the commercially efficient response to harmful or illegal content is not removal, but personalisation. Personalisation tools and algorithms, in theory, allow intermediaries to ensure each individual gets the particular content that maximises their value from the service. But the greater the degree of personalisation, the more opportunities exist for providers and consumers of harmful content to find each other, often largely unseen by other platform users.

Fourth, notwithstanding these incentives, many intermediaries already actively regulate online content, including to meet policy goals. But the impact of this action is unclear, and accountability is limited. Transparency standards vary, making it hard to compare effectiveness across the industry. …

Finally, and linked to all the preceding points, intermediaries may need external oversight to secure legitimacy and consumer trust. Doteveryone’s research found that consumers are confused about what rules, if any, govern online services, and who to go to for advice or recourse. It also shows that many people doubt the Internet’s positive impact on society. Edelman’s 2018 Trust Barometer found increased doubt that technology companies are adequately transparent, and that while overall trust in technology remains strong, it is declining in many countries, including trust in platforms.[5]

Elsewhere the report says that the effectiveness of platforms' measures is not known:

More importantly, there is currently no way of assessing the impact and effectiveness of these activities, either with respect to takedown of illegal material or inadvertent blocking of legal content. Evaluations are generally conducted by intermediaries themselves, who choose what to measure and disclose. While the many transparency reports provided by the likes of Google, Facebook, Twitter and others are useful, they do not represent a comprehensive assessment of the impact of their content governance activities.[6]

Models of accountability: outcomes versus procedure

The report rules out regulation of specific content types, as applied to broadcast content. Instead, it considers two models of accountability, one based on outcomes and one on procedure:

outcomes-based accountability may not always be achievable, for example where desired outcomes are impossible to define or measure, or where there are irreconcilable disputes about the appropriate balance between rights.

In such cases procedural accountability provides another route to legitimacy, in which intermediaries are judged by whether they used appropriate processes to reach a decision.

Procedural accountability requires a definition of good governance standards and the means to assess whether intermediaries’ policies meet those standards.[7]

Such standards, whether outcomes-focused or procedural, would be defined by the Code.

Procedural standards

Procedural expectations would be based on familiar principles of good governance, and might include:

  • Proportionality – intermediaries should only block, remove or suppress content that is demonstrably harmful, illegal, or otherwise contrary to the Code or to their own terms and conditions;
  • Evaluation – intermediaries should specify objectives, measure and disclose the impact of their policies and decisions, and make commitments to improve performance over time;
  • Transparency – intermediaries’ content policies should be prominently available in a user-friendly form, and their impacts disclosed in transparency reports, where appropriate using industry-standard measures of success, which the oversight body could use to publish a consolidated assessment. Disclosure of impacts is important both to provide public confidence and remove information asymmetries between platforms and the oversight body;
  • Accessibility – intermediaries should make it easy for users to notify infringing content, give feedback on policies and processes, and access straightforward and quick complaint and appeals processes.

As with content standards, intermediaries could decide what specific procedures to adopt, recognising that these may differ widely between intermediaries.[8]

Proposed model: Code and oversight body

In this model, responsibility for actual content regulation – policy development, notification and appeals, use of automated detection tools, human moderation – continues to sit with intermediaries themselves, who are best placed to govern platforms in users’ interests.

The purpose of these proposals is to provide better oversight of that activity, and thereby replace ‘regulation by outrage’ with a more effective and proportionate approach.

All stakeholders could benefit from such a model, including intermediaries, who would have greater clarity about what is expected of them, the legitimacy that comes from external scrutiny and validation, and defence against unreasonable or unevidenced requests.[9]

Oversight body

The oversight body could be industry-led, if intermediaries can form an independent organisation with industry and Government support, able to make binding decisions, with a backstop regulator fulfilling a role in this co-regulatory model. The Advertising Standards Authority offers a precedent.

Or oversight could be provided by a statutory body (either an existing institution such as Ofcom, or a new body), in which case it should be funded by industry, as Ofcom is today.[10]

Code

The report considers the government's draft Code of Practice too detailed and specific to be sufficiently flexible. Instead it proposes:

An alternative approach would be for Government to empower a regulator to provide for a more comprehensive, but higher-level Code of Practice, defining both broad content standards and procedural principles, but with fewer specific requirements.[11]

A comprehensive Code would cover a wide range of content types:

  • Illegal material, including for example extremist content, hate speech, prohibited images of children, false advertising and intellectual property infringement;
  • Legal but harmful material, which should be defined, as precisely as possible, and included in the Code on the basis of an independent materiality assessment showing substantial evidence of harm on intermediary platforms. Possible examples include cyberbullying, misogynistic content, pornography and advertising placed in proximity to unsafe material; and
  • Content that meets positive policy goals (such as social inclusion, diversity of news, or provision of ‘trustworthy’ news).

Given risks of state interference in content, the Code should avoid highly prescriptive rules, especially about content that is not illegal. Intermediaries should have discretion to interpret broad principles as they apply to their particular platforms and users, whose expectations are likely to vary by platform.[12]

In summary:

The envisaged Code would be broad and flexible enough to adapt to new concerns and platforms. Its requirements would be proportionate to evidence of harm, with the priority on illegal and seriously harmful content, and lower expectations for legal-but-harmful material. It would also differentiate on size, with reduced or no requirements for smaller platforms. The baseline requirements of intermediaries above a de minimis size would be to notify the oversight body, contribute to its costs of operation, and provide information or carry out a harm assessment in response to a specific, evidence-based and reasonable request.[13]


Incentives and sanctions

The report outlines incentives and sanctions to encourage compliance:

The oversight body might offer incentives to encourage compliance: accreditation, kitemarks, beneficial rights (e.g. to access adjudication or arbitration mechanisms). These incentives should be developed with input from intermediaries.

Should the incentives be insufficient, the oversight body should also have sanctions available to it, potentially including the ability to: issue warnings; impose fines; provide notices to third parties who provide services to intermediaries (e.g. payment providers or advertisers); and, in extreme cases, involving repeated failures to comply, the power to request ISPs to block services.[14]

Commentary

Aspects of the report are thoughtful and welcome. The emphasis on moving away from "regulation by outrage" towards more balanced, evidence-based measures is particularly valuable. The paper sets out the goals of regulation in clear terms, and it is one of the few papers to place free expression up front in its analysis of the issues that need to be dealt with.

The proposal does not go into much detail about the kinds of harms that might be dealt with, why they need to be addressed, or what is currently inadequate about platforms' own efforts, which the report says are substantial. Rather, it makes the case that these efforts should be accountable, setting out a number of factors that make platforms' role in regulating speech very significant.

There is a paradox between the justifications for action, which centre on accountability and on the impact of platforms' decisions on society and free expression, and the likely effect of the proposal, which would be to increase the removal of legal material.

Like other proposals, the framework focuses accountability on content removal, especially of categories of "legal but harmful" content, without explaining what these categories are, or why such material should remain legal yet nevertheless be acted upon and removed.

Evidence of harm in these areas is notoriously hard to establish. It is usually very partial, and based on notions of social acceptability rather than clear demonstrations of harm. Many of the problems that people want combated are primarily about behavioural norms.

Success of this approach from a free expression perspective would rely on two factors:

  1. A strong relationship between standard setting and prior evidence of harms; and
  2. Very robust mitigations for free expression harms that go well beyond creating appeals processes.

We are doubtful that a strong relationship between evidence of harms and action is likely to be achieved, and in many cases evidence may not be the real motivation behind the desire for action. Many of the restrictions that platforms impose have no direct relationship with harm: rather, they aim at civility of discourse, or at achieving a friendly or comfortable environment. A dialogue about such measures ought to form part of the work of an independent regulator, but seems deeply inappropriate for a government body to involve itself with.

As with other papers, there remains the paradox that government regulatory models seek to limit the circulation of material that is legal, and provide a mechanism to extend this according to regulatory preferences that may not be well founded.

The paper rejects industry (or independent) self-regulation as likely to fail, without exploring this in detail. Elsewhere, it shows that there is alignment between industry and the goals of any potential regulator, including establishing trust and transparency. The paper explains that self-regulation is likely to fail when the interests of industry and society are not aligned, yet it is not clear that such alignment is lacking here; indeed, the capacity of industry to adjust in recent years suggests that it may well exist. Furthermore, the paper acknowledges the need to internationalise the UK's efforts, in which case the proposed end result would be a hybrid of independent self-regulation with legislative requirements in some jurisdictions, including the UK. Some analysis of whether independent self-regulation can be achieved needs to be made before moving to direct government regulation.

References

  1. Keeping Consumers Safe Online, Communications Chambers, July 2018
  2. Ibid, p33
  3. Ibid, pp13-14
  4. Ibid, p16
  5. Ibid, pp12-13
  6. Ibid, p16
  7. Ibid, p21
  8. Ibid, p23
  9. Ibid, p7
  10. Ibid, p7
  11. Ibid, p7
  12. Ibid, p7
  13. Ibid, p7
  14. Ibid, p26