Government Response to the Internet Safety Strategy Green Paper

The Government Response to the Internet Safety Strategy Green Paper is the most recent summary of government policy on the regulation of social media.[1] The response outlined a commitment to proceed with an Online Harms White Paper, originally scheduled for September 2018.

Key proposals

The response outlines:

plans for a social media code of practice and transparency reporting as part of our Digital Charter. The statutory code of practice provides guidance to social media providers on appropriate reporting mechanisms and moderation processes to tackle abusive content. By setting out clear standards for industry, we will make sure there is improved support for users online, and that more companies are taking consistent action to tackle abuse.

Transparency reports will provide data on the amount of harmful content being reported to platforms in the UK and information on how these reports are dealt with, including what mechanisms they have in place to protect users, for example, around their mental health and wellbeing. These reports will help us understand the extent of online harms and how effectively companies are tackling breaches in their terms and conditions.[2]

A draft social media Code of Practice is included at Annex B and a draft transparency reporting template at Annex C; both are reproduced on this Wiki.

Change in approach

The Green Paper did not include references to legislative requirements of the kind proposed in the response. The change in approach is referenced only obliquely:

Ensuring people's safety online is a fundamental element of this thriving ecosystem. We need to complement internet freedoms and innovation with safety and security to build trust in new technologies. Cyberbullying and intimidating behaviour online, which can have negative impacts on mental health and wellbeing, particularly among children, is now all too commonplace. Despite a range of voluntary initiatives, good work by a range of charities and technological innovations, online abuse remains an issue for millions of citizens. Therefore we are taking further steps to tackle this behaviour, and ensure that offline rules apply online too.

The Internet Safety Strategy Green Paper, which was published in October 2017, set out our proposals relating to tackling unacceptable behaviour and content online. Since then, the use of the Internet to spread disinformation or ‘fake news’, the dangers of using AI to manipulate public opinion at scale, the mass misuse of personal data and the potential for data to be used for unethical or harmful purposes, have all gained prominence as serious and real problems - demonstrating the importance of a comprehensive strategic approach to improve online safety and restore citizens’ confidence in technology.[3]

The consultation's terms of reference are restated in the introduction:

In the Green Paper we proposed a set of principles to underpin our approach:

  1. What is unacceptable offline should be unacceptable online;
  2. All users should be empowered to manage online risks and stay safe; and
  3. Technology companies have a responsibility to their users, and for the content they host.

In the accompanying consultation, we asked people to tell us what they thought about our proposed initiatives, and how Government can work collaboratively with a wide range of stakeholders to tackle online harms and promote safer online communities. We particularly asked for views on:

  1. Supporting parents and carers
  2. A social media code of practice
  3. Transparency reporting for social media companies
  4. A social media levy
  5. Possible technological solutions
  6. Developing children’s literacy
  7. Adult experience of online abuse
  8. Concerns around online dating.[4]

The social media code of practice was intended to be a voluntary measure in the Green Paper:

A key part of this will be issuing the voluntary code of practice, required by the Digital Economy Act 2017. We are consulting on what this will look like, with an aim of publishing the code of practice in 2018. The government will also consider the recommendations of the Committee on Standards in Public Life’s review on the intimidation of Parliamentary candidates, which in due course may make recommendations on how online abuse should be tackled.

The code of practice will seek to ensure that providers offer adequate online safety policies, as laid out in the Digital Economy Act 2017, introduce minimum standards and metrics and ensure regular review and monitoring. The Act requires that the code addresses conduct that involves bullying or insulting an individual online, or other behaviour likely to intimidate or humiliate the individual (section 103(3)). This will be an important way for us to tackle pervasive issues such as trolling. The code will not cover unlawful content which the legal framework already addresses. We intend that the code of practice will be developed following this consultation.[5]

Evidence summary

Survey results and consultation exercises

References to the survey conducted as part of the Green Paper consultation are made throughout the document.

528 individuals and 62 organisations responded to our survey. Where we have provided consultation result figures in this document, we have highlighted whether the results relate to responses from individuals or organisations.[6]

It is unclear from the document whether the full results are contained within the Green Paper response.

Additional consultation exercises were conducted with school children.[7]

Organisational responses

A number of responses, particularly those from charities, suggested that Government should make the proposed code of practice legally binding, underpinned by an independent regulator and backed up by a sanctions regime. The NSPCC’s response said that successive voluntary codes of conduct and guidance adopted by industry in the past have not delivered significant, long-lasting impact. The Children's Charities' Coalition on Internet Safety (CHIS) said that measures such as the code of practice must be linked to a regulator with clearly defined legal powers to describe minimum standards, and to enforce those standards using a range of tools including an ability to levy substantial fines.

Other charities, including 5Rights, expressed concern over the existing self-regulatory approach. The British Computer Society, in their consultation response, said that the potential for further formal legislation was a reasonable way of encouraging industry buy-in and collaboration between industry and Government. The Children’s Media Foundation pointed to the number of underage children on platforms as proof that voluntary self-regulation does not work.[8]

Code of practice and transparency reporting

Google, Twitter and Facebook stated that they would work with Government to establish the social media code of practice and transparency reporting. Google suggested that the code of practice must not reduce the incentive for platforms to make their own sites as safe as possible in the most effective ways possible, which can be very relevant and specific to their sites. Facebook referenced the priority they give to a number of existing self-regulatory initiatives, including the European Commission's Alliance to Better Protect Children Online. In August 2018, an independent evaluation will take place to review the output of all signatories to the Alliance. Twitter suggested that the code should cover the full spectrum of communications, content and information society services used by people in the UK.

ISPs including BT, TalkTalk and Sky were all supportive of transparency reporting as long as reports had clear and understandable metrics in place to ensure easy comparison. Sky felt that for this to work some basic information-gathering powers would be needed in much the same way that Ofcom has powers to request information of other communication providers, underpinned by a sanctions regime for non-compliance.[9]

It is important to recognise that the leading social media companies are already taking steps to improve their platforms. They have developed important technical tools and successful partnerships with charities to deliver online safety initiatives - with plans to do more in this area. The growth of AI and machine learning means that algorithms are used to remove harmful content more quickly. These measures are having a positive impact. For example, Google highlighted in their consultation response that 98% of the videos they removed for violent extremism were flagged by machine-learning algorithms, and they have begun to use this technology in other areas such as child safety and hate speech.

Throughout the consultation period, we worked closely with technology firms and we have seen progress in relation to online safety in the past year. In particular, responses from the technology industry, including trade groups, flagged their existing work relating to:

  • Partnerships with UK charities to deliver online safety education;
  • Work with the Royal Foundation’s Taskforce on the Prevention of Cyberbullying;
  • Participation in existing self-regulatory initiatives such as the ICT Coalition for Children Online and the European Commission’s Alliance to Better Protect Children Online;
  • Terms and conditions and community rules against bullying and other types of unacceptable behaviour;
  • The ability for users to report unacceptable behaviours and consequences for those who participate in this behaviour;
  • Tools and features relating to privacy, security and blocking which enable users to control their experience;
  • Safety centres online which provide guidance to users.

The consultation responses highlighted pockets of best practice including:

  • Facebook told us that anyone who reports content is told what action has been taken and may receive additional resources to help them resolve their concern. People can also feedback on how they think Facebook did, or can appeal decisions.
  • Google said that they have been working on technical solutions which can be shared across the industry. For example, in March 2017, Alphabet released a new tool called Perspective - an API that gives any developer harassment and comment moderation tools. These tools work to tackle abusive communications from both named and anonymous accounts.
  • Twitter described the tools it has specifically designed to detect and remove repeat offenders on its platform and those attempting to return following a permanent suspension.

Initiatives such as these should be evaluated and effective best practice shared so that other platforms can decide whether to introduce similar technology. We are currently considering how Government can facilitate this through our ‘think safety first’ work and UKCIS. Alongside this, we would also like to see the largest platforms taking this work forward and smaller platforms actively seeking out advice on best practice.
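The Perspective tool referred to in the responses above is a publicly available REST API. As a purely illustrative aside, the sketch below shows how a developer might request a toxicity score for a single comment; it assumes an API key obtained from Google and the Python requests library, and the endpoint and field names follow the API's public documentation rather than anything in the government response.

  # Illustrative sketch only: score one comment for toxicity via the public
  # Perspective API. Requires an API key (placeholder below) and `requests`.
  import requests

  API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
  URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
         f"comments:analyze?key={API_KEY}")

  def toxicity_score(text: str) -> float:
      """Return the summary TOXICITY score (0.0-1.0) for a piece of text."""
      body = {
          "comment": {"text": text},
          "languages": ["en"],
          "requestedAttributes": {"TOXICITY": {}},
      }
      response = requests.post(URL, json=body, timeout=10)
      response.raise_for_status()
      data = response.json()
      return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

  if __name__ == "__main__":
      print(toxicity_score("You are an idiot and nobody likes you."))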

Concerns

We welcome the efforts that these companies have taken to protect users and they have learnt from feedback about online harms taking place on their platforms.

However, the range of industry responses identified three main concerns:

  1. Only a small group of the largest companies are engaged with our work on online safety;
  2. Companies present a strong track record on online safety but this appears to be at odds with users’ feedback on their experiences;
  3. Government needs to create a level playing field so that all companies are meeting consistent standards.

The NSPCC’s NetAware programme identifies the main sites, apps and games that children use the most. Of the 39 platforms that were identified by the charity in 2017, only five companies responded to our consultation (Facebook, Google, Oath, Microsoft and Twitter), representing 12 of the platforms. This presents a major concern - we want all platforms, particularly those popular with children, to be engaged with our safety work.[10]

Evidence base

Our research has shown significant gaps in existing evidence, not least because online harms can change rapidly, and many key trends are too new to have been the subject of longitudinal studies. Our upcoming programme of work on internet safety will include undertaking new research, on which we will be working closely with UK Research and Innovation.[11]

In addition, a literature review was conducted by the UKCCIS Evidence Group.[12]

Government conclusions

The Government has made clear that we require all social media platforms to have:

  • Terms and conditions that provide a minimum level of protection and safety for users;
  • Clear rules on unacceptable content and conduct;
  • Robust enforcement of their standards.

The companies’ responses suggest they already meet these expectations. However, the disconnect between user and industry responses strongly suggests that companies need to do more to manage the content and behaviours on their platforms.

The consultation has reinforced the Government’s view that we are right to bring forward the social media code of practice and transparency reporting which the Prime Minister announced in February 2018. The code will set a higher bar in terms of the safety provisions and terms and conditions that we expect platforms to offer users, and transparency reports will enable us to establish which companies are meeting these standards.

We believe that companies must take a more proactive approach, pre-empting potential issues on their platform before they occur. Our ‘think safety first’ approach will therefore focus on companies embedding safety considerations into their product development. As technologies such as machine learning become more sophisticated, we expect companies to use these to identify and remove harmful content more quickly.

The Government is also clear that we need a new, more strategic and coordinated approach to online safety funding: current approaches to funding lead to duplication of effort and gaps in provision. Nevertheless, given the mixed responses to the issue of a social media levy, we believe there is value in taking more time to gather evidence and analysis from users, companies and charities.

We will continue to ensure that comprehensive online safety education is available to all children, as well as considering how we can best support parents in tackling internet harms. This work fits into our wider activities considering online video games, gambling and the work of civil society in supporting everyone being able to access the benefits of the Internet while also staying safe. We are reforming UKCCIS and will continue to promote a ‘think safety first’ approach for all companies. Further details on these activities will be set out in our forthcoming White Paper.

Online safety forms just part of the work which we’re taking forward under our Digital Charter. Our Charter will address wider issues such as tackling disinformation and considering innovations in digital advertising.

One of the other issues raised by the NSPCC in the consultation was the border between legal and illegal conduct online. The Green Paper focused on harmful but potentially legal content and conduct, but the initiatives which we are taking forward will also support the work being taken forward to tackle illegal harms. In addition, DCMS and Home Office will continue to work closely together to ensure that we are jointly addressing activities which could escalate to become illegal.

We are concerned that, especially for children and young people, being exposed to harmful content can have negative impacts on mental health and wellbeing. We are therefore clear that there needs to be greater focus on preventing such online content from being published in the first place. This requires companies to proactively deny access to those who abuse their services; to develop response mechanisms and apply advances in technology to automate these approaches; and for the larger companies to share these tools and techniques with other companies.

We have seen some success through the voluntary online safety approach. For example, we have seen real value from our partnerships with voluntary sector organisations such as the IWF, the UK’s global leadership in the WeProtect Global Alliance, and our strong cooperation with the tech sector through the industry-led Global Internet Forum to Counter Terrorism. But we have also made clear that we are prepared to legislate where necessary. As the Prime Minister announced in January 2018, we are looking at the legal liability that social media companies have for the illegal content shared on their sites. The status quo is increasingly unsustainable as it becomes clear many platforms are no longer just passive hosts.

Whilst the case for change is clear, we also recognise that applying publisher standards of liability to all online platforms could risk real damage to the digital economy, which would be to the detriment of the public who benefit from them. That is why we are working with our European and international partners, as well as the businesses themselves, to understand how we can make the existing frameworks and definitions work better, and what a liability regime of the future should look like. This will play an important role in helping to protect users from illegal content online and will supplement our Strategy.[14]

Government proposals

A global leader

The UK is not the only country affected by these issues. We know there are a wide range of countries, including Ireland, Australia, France and Germany, who are tackling the same challenges on some of the same platforms. We aim to develop a defined set of responsibilities for social media companies to provide clarity on the safety measures we expect within a well-functioning digital economy. In doing so, we will continue to work closely with allies, including in the OECD, EU and G7, on this important work. By taking a leading role globally, we will encourage others to align with our approach - we will demonstrate the advantages of promoting online safety within a framework that also protects human rights, in particular freedom of expression.

Moving forward

As we continue to develop the Digital Charter, we remain positive about the enormous benefits the Internet brings to our society and economy. But Government has an important role to play in helping to shape an online world that works for everyone, and one that reflects the values we live by and the behaviours we expect in the offline world. As problems such as cyberbullying, abuse, trolling and sexting continue to cause harm to citizens’ mental health and wellbeing and issues such as disinformation, the mass misuse of personal data, screen time and lack of age verification for social media platforms grow in prominence, it is clear that we need to continue to tackle these issues head-on and evolve our work on online safety.

That is why we are announcing our intention to publish a White Paper before the end of this year to set out more definitive steps on online harms and safety. DCMS and Home Office will jointly work on a White Paper which will set out our proposals for future legislation. It will give us the opportunity to draw together a number of different aspects of Government work, including: reporting on progress of our review of platform liability for illegal content; responding to the first stage of the Law Commission Review of abusive communications online; and working with the Information Commissioner’s Office on the age-appropriate design code which is part of the Data Protection Bill. It will also allow us to draw existing work on safety together with work on the new, emerging issues, including disinformation and mass misuse of personal data and work to tackle online harms.

The White Paper will also set out plans for upcoming legislation that will cover the full range of online harms, including both harmful and illegal content. Potential areas where the Government will legislate include the social media code of practice, transparency reporting and online advertising. We believe that these measures will bring about significant benefits for all users by setting clear rules on how harmful and illegal behaviour and content should be dealt with. The code and transparency reporting will also support platforms by creating a level playing field and ensuring that all companies are contributing to safety improvements, not just the largest providers.

We will be considering new policy areas on safety that have been identified during the consultation process that warrant further work, including: age verification to assist companies to enforce terms and conditions; policies aimed at improving children and young people’s mental health, including the impact of screen time; tackling issues related to live-streaming; and, further work to define harmful content.

Through this process, the Government remains firmly committed to collaborating with industry to improve online safety, in particular looking to them for answers to technological challenges, rather than Government dictating precise solutions. We are very grateful to all of the companies who have spoken to us so far in relation to online safety issues. We will continue to seek their views on our policies through a series of workshops, focus groups and consultation events ahead of the publication of our White Paper. We know that many companies are already spending time thinking about how best to tackle online harms and have partnered with experts to understand the problems in more detail. We want to continue to learn about ongoing work and will use the convening power of Government to ensure that best practice is widely shared.

We plan to work closely with industry, academia, civil society, charities and other interested stakeholders ahead of the publication of the White Paper. We will continue to work with interested parties to refine our policies where we already have a clearly defined direction of travel, including the code of practice and the transparency reports, and to progress areas of work which need further development. We will ensure that vulnerable users, particularly children, remain a central consideration in our policies.

As we take this work forward, we want to create a policy landscape where our proposals effectively tackle online harms in a changing digital landscape whilst ensuring the Internet is free, open and accessible. We are determined that the UK should lead the world in innovation-friendly regulation that protects users and enables the digital economy and new technologies to thrive.[15]

Scope

The government believes that smaller platforms used by vulnerable users are a particular problem:

We want to see greater consistency across platforms so that users understand what standards of behaviour are acceptable across the whole online ecosystem and what can be done to tackle content which falls short of this. The response to our consultation showed that individual companies have made progress and the majority of industry respondents have terms and conditions relating to online safety in place already. There are also a number of voluntary initiatives and pieces of guidance that have helped to set direction for online safety. But the consultation revealed that there’s a gap between users’ experiences online and the response from the companies.

Key to this discrepancy is whether the measures that have been taken are consistent across technology firms of all sizes. To date, it’s been the largest platforms who have taken the biggest steps forward in relation to online safety. This is understandable as they have the largest user bases and the greatest amount of resource to invest in safeguarding their users. It was also these larger, more established technology companies that provided responses to our consultation and highlighted their achievements in this area.

We know that the technology sector isn’t static and that children in particular, often seek to join the latest, innovative, smaller platforms. These tend to be the platforms which haven’t yet developed a full range of protections which leaves users vulnerable. In recent years, we have seen a number of platforms grow significantly in a very short space of time and safety features are often only added at a much later stage.[16]

Other priorities

The Charter also brings together a broad, ongoing programme, which will evolve as technology changes. In addition to online harms, our current priorities include:

  • Digital economy – building a thriving ecosystem where technology companies can start and grow;
  • Liability – looking at the legal liability that online platforms have for the illegal content shared on their sites, including considering how we could get more effective action through better use of the existing legal frameworks and definitions;
  • Data and AI ethics and innovation – ensuring data is used in safe and ethical way, and when decisions are made based on data, these are fair and appropriately transparent;
  • Digital markets – ensuring digital markets are working well, including through supporting data portability and the better use, control and sharing of data;
  • Disinformation – limiting the spread and impact of disinformation intended to mislead for political, personal and/or financial gain;
  • Cyber security – supporting businesses and other organisations to take the steps necessary to keep themselves and individuals safe from malicious cyber activity, including by reducing the burden of responsibility on end-users.

The Charter will not be developed by Government alone. We will look to the technology sector, businesses and civil society to own these challenges with us, using our convening power to bring them together with other interested parties to find solutions. This collaborative approach will ensure that the UK is both the safest place to be online and the best place to start and grow a digital business.[17]

UKCCIS

UKCCIS has been superseded by the UK Council for Internet Safety (UKCIS). This change was outlined in the response:

A number of the consultation responses from children’s charities and campaigners flagged concerns about expanding the remit of UKCCIS to cover all users. As set out in the Internet Safety Strategy Green Paper, we believe that expanding the remit of UKCCIS to cover adults, as well as children, will bring significant benefits and it’s important that the Council is aligned to the scope of the Strategy. We will maintain a focus on children’s needs by:

  • Ensuring children have the opportunity to share their views with the Executive Board;
  • Maintaining at least one working group which will focus on children’s online safety;
  • Ensuring that children’s charities continue to be represented on the Executive Board;
  • Setting priorities relating to children’s online safety and monitoring the Council’s progress on these.

Putting these provisions in place will take time as we are aware that establishing a forum through which children’s views can be gathered needs to be properly resourced and structured.[18]

Other work

The response details work carried out by the Home Office on illegal content.

Vulnerable groups and their carers

The needs of young people and schools (pp33-38), people in care (pp38-39), parents (pp39-40) and mental health provision (pp40-43) are discussed in some detail.

On mental health the response says:

To discuss and tackle the issues around social media and mental health, DCMS and DHSC convened a series of roundtables with a working group of the main social media and technology companies (including Google, Facebook, Twitter, Oath, Microsoft, Apple and Snap Inc). These meetings discussed children and young people’s online safety, with a particular focus on the impact that social media products have on children’s mental health. The work focused on the themes of age verification, screen time and cyberbullying/harmful content.

The Government welcomes the companies’ engagement within this forum, including the letters companies wrote to the Secretary of State for Health and Social Care following the working group meetings. However, while some companies are taking steps towards addressing some of these important issues, we are clear that there is further action they could take in this area.

DfE will therefore continue to work with the DHSC to explore a range of options to take this forward as part of our Internet Safety Strategy.

Linked to the issue of long periods of time spent online, and to better understand the relationship between social media and the mental health of children and young people up to 25 years old, the Chief Medical Officer will be leading a systematic review to examine all relevant international research in the area. The review will inform a report in this area, due for publication next year.[19]

Ratings and mental health

The British Board of Film Classification (BBFC) take into account mental health issues relating to the young and vulnerable in its classification decisions across websites and other audio-visual material. The BBFC draws on expert advice in order to do this, for example through maintaining a close relationship with the Samaritans and other suicide prevention experts in relation to classification policy on issues relating to suicide. In addition to this, BBFC have commissioned research to inform their classification policy, including into the potential effects of depictions of sexual, sexualised and sadistic violence in film and video.

The BBFC and the Dutch regulator, NICAM, have developed You Rate It (YouRi) in order to provide age ratings for user generated content (UGC), in recognition of its being an increasingly significant source of content online. YouRi is a tool that provides age ratings for UGC which is available on online video-sharing platform services, and takes the form of a simple questionnaire, designed to be completed by those uploading videos onto a platform, by the crowd, or by both. The tool was piloted by the Italian website 16mm.it with encouraging results: 81% of all videos available on the site received a classification during the pilot period. YouRateIt is available to video and social media platforms as part of their content compliance and age labelling mix and BBFC and NICAM are looking in 2018 to find a further partner, ideally in the UK or in continental Europe, to undertake a larger scale trial.[20]

Further research

The paper identifies the need for further research in several fundamental areas. For instance, on scope:

In advance of the White Paper, we will be commissioning research to establish more detail on UK user numbers to understand the pattern of growth and decline in popular platforms. We will use this information to inform our legislative proposals. We will also assess how successful the take-up of the code and report is on a voluntary basis over the next six months to identify the types of actions which may be required to secure compliance.[21]

Here the paper identifies the lack of evidence on 'trolling' and other 'adult harms':

We have commissioned a rapid evidence assessment which considers the prevalence and impact of online trolling. The report will be published in due course, but the initial findings highlight that more research is needed to gain a clearer understanding of what trolling is, how it is affecting UK society and what needs to be done to effectively counter its negative impacts.

Therefore, in this coming year, we will be prioritising the funding of new research into adult harms and drawing on available knowledge about effective strategies for tackling this from across both Home Office’s work on illegal harms and our wider work on the Digital Charter.  We will be working with partners such as the Behavioural Insights Team to explore ways in which Government, companies and charities can encourage users to improve their online behaviour. In addition, we will focus on new research to tackle the current evidence gaps highlighted by the UKCCIS Evidence Group’s literature review in relation to children’s online activities, risk and safety.[22]

Draft Code of Practice

Full text: Draft code of practice for providers of online social media platforms

The draft code of practice was intended to be a voluntary scheme, albeit one issued under a statutory requirement in the Digital Economy Act 2017.

Draft transparency reporting template

Full text: Draft transparency reporting template

Commentary

Free expression

While free expression is acknowledged as a value, the response gives no information about the potential conflicts between its policy goals and free expression. This is despite the fact that the majority of the response concentrates on the areas where such conflicts are at their most intense: where content is legal but may be regarded as harmful.

The difficulties of platforms or regulators adjudicating between parties with different views about their content are not discussed.

The lack of attention to free expression is especially stark given the transparency reporting template, which is an obvious opportunity to derive information about appeals and accuracy of decision making.
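As an illustration of the kind of accuracy signal the template could be used to derive, the sketch below turns report-handling figures into a simple overturn rate; the field names and numbers are invented for the example and do not come from the draft transparency reporting template.

  # Illustrative only: hypothetical figures showing how appeal outcomes could
  # be turned into a crude measure of moderation accuracy. Neither the field
  # names nor the numbers come from the draft transparency reporting template.
  quarterly_figures = {
      "reports_received": 120_000,
      "items_removed": 30_000,
      "appeals_lodged": 4_500,
      "appeals_upheld": 1_800,  # removals overturned on review
  }

  overturn_rate = quarterly_figures["appeals_upheld"] / quarterly_figures["items_removed"]
  appeal_success_rate = quarterly_figures["appeals_upheld"] / quarterly_figures["appeals_lodged"]

  print(f"Removals overturned on appeal: {overturn_rate:.1%}")  # 6.0%
  print(f"Appeals upheld: {appeal_success_rate:.1%}")           # 40.0%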

Much of the concern identified in the report is centred on the impacts on vulnerable groups, including children and people with mental health concerns. However, if standards of public expression are focused on creating environments that minimise the risks for these groups, the impact on free expression could be very considerable.

Technical solutions and free expression

The response mentions technical solutions to content problems a number of times. It cites a high level of support for measures among consultation respondents. However, the response does not reflect on the limitations of technical measures, nor on why and when they can be particularly successful. Rather, there is a simple assumption that they are useful and should be developed further.

Technical measures are well known to produce problems when they are not applied narrowly. Machine learning and 'artificial intelligence' still rely on proxy information and pattern matching, and thus do not judge content directly in context. The limitations of technical measures in moderation, for instance in identifying copyright material, are well documented.

When applied at scale, technical solutions are likely to have significant impacts on free expression, since a certain amount of content will inevitably be misidentified. The response makes no mention of this, and therefore does not discuss what kinds of mitigation may be needed.

Problems and limitations of moderation

Key to the success of this policy is the accurate application of moderation policies. This is discussed only in passing in the response, from the perspective of "enforcement" of terms and conditions. Similarly, the speed and volume of complaints, alongside the types of material identified, are listed as measures to be reported on in the transparency template.

However, there is no reference at any point to the kinds of challenge that moderation faces, whether by human or machine. These are central to the delivery of the policy, making this a striking omission from the response.

Issues that need to be addressed include the speed at which decisions are made; the need for review processes; and the underlying criteria by which content is judged. It is reasonably clear that moderation at scale tends to apply simple criteria, such as the presence of blood, certain kinds of nudity, or key offensive terms without mitigating commentary, rather than allowing for in-principle judgements to be made. This is understandable, given the desire for speed, but can lead to mistakes: either failure to remove content or over-removal of it.
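To make the over-removal risk concrete, the sketch below implements a deliberately naive, purely hypothetical term-based filter of the kind this commentary has in mind; it flags a user's report quoting abuse just as readily as the abuse itself, because it matches surface terms rather than judging context.

  # Illustrative sketch (not any platform's actual system): a naive blocklist
  # filter that matches surface terms and therefore cannot distinguish abuse
  # from a victim quoting that abuse when reporting it.
  OFFENSIVE_TERMS = {"idiot", "scum"}  # hypothetical blocklist for illustration

  def flag_for_removal(post: str) -> bool:
      """Flag a post if it contains any blocklisted term, regardless of context."""
      words = {w.strip(".,!?\"'").lower() for w in post.split()}
      return bool(words & OFFENSIVE_TERMS)

  abusive = "You are scum and everyone knows it."
  counter_speech = 'Someone messaged me "you are scum" last night. Is this allowed?'

  print(flag_for_removal(abusive))         # True: correctly flagged
  print(flag_for_removal(counter_speech))  # True: a report of abuse is also flagged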

Regulatory approaches

The government wishes to legislate and directly regulate social media. At the time the response was written, the focus was on a "social media code of practice". Other concepts, such as a "duty of care", are mentioned only in passing.

The government dismisses self-regulation without a great deal of reflection on the concept, on which kinds of self-regulation have succeeded, and on which have not. The response in fact presents differing views on whether self-regulation has previously succeeded, but accepts without particular discussion that it will not work.

However, self-regulation may be much more likely to achieve the goals government wants, given the evidential difficulties of establishing clear harms, and the underlying desire for social media platforms to regulate what might broadly be called "civility".

Lack of clear evidence

The lack of evidence and the need for further evidence in areas where the government wishes to intervene is striking. The government sees discrepancies between the responses of major platforms and public concern, but does not reconcile these.

There is a tension between the public justification for action, which is the apparent concern with major social media platforms, and the policy focus, which is placed on smaller platforms seen as less responsible.

The response has surprisingly little detail on specific needs around specific harms.

References

  1. Government Response to the Internet Safety Strategy Green Paper, gov.uk, May 2018
  2. Ibid, pp6-7
  3. Ibid, p7
  4. Ibid, p7
  5. Internet Safety Strategy – Green Paper, gov.uk, October 2017
  6. Ibid, Annex A p61
  7. Ibid, Annex A p61
  8. Ibid, pp11-12
  9. Ibid, pp11-12
  10. Ibid, pp13-14
  11. Ibid, p11
  12. Children’s online activities, risks and safety A literature review by the UKCCIS Evidence Group October 2017
  13. Ibid, p14
  14. Ibid, pp15-16
  15. Ibid, pp17-18
  16. Ibid, p21
  17. Ibid, p19
  18. Ibid, p19
  19. Ibid, p41
  20. Ibid, pp41-42
  21. Ibid, p23
  22. Ibid, p60