Content-control software


Content-control software comprises technical measures that attempt to prevent access to (otherwise legal) internet content deemed unsuitable for those under the age of 18. The UK government is considering proposals to make these measures mandatory, and they may appear in some form in the forthcoming Communications Bill. These systems are known as adult content filters or child protection filters, and are often described as censorship, especially in the case of default blocking. Other names such as "porn filters" or "pornwall" are used, but are generally discouraged as they do not reflect the wider scope of these systems.


While evidence of the actual consequences of children's exposure to "adult content" is outside the scope of Open Rights Group, the desire of many parents to control their children's access to such content is acknowledged.

Definition of adult content

While the discussion of adult content filters is usually presented as "porn blocking", the range of content affected is usually much broader:

  • pornography (itself subject to a range of definitions)
  • "hate sites"
  • sites promoting gambling
  • sites promoting alcohol (including the websites of pubs)
  • sites promoting self-harm, suicide, etc.
  • sites containing "un-policed" user-generated content (forums, etc.)
  • sites that contain information about bypassing filters
  • ...

See Content categories blocked by UK ISPs


Active choice

In the context of content filtering, Active Choice requires that an ISP customer be presented with a yes/no choice such that the application, or non-application, of filtering to a network connection cannot be regarded as the default. (However, the presumption of the question may differ: "Do you want to block access to adult content?" and "Do you intend to access adult content?" effectively represent the same choice.)

(About 30% of UK households have children so, assuming they're not over-represented as internet users, the likely voluntary take up of any "child-protection" measure is unlikely to exceed that.)

The concept of opt-in and opt-out is muddied in the case of device-level filtering. Installation of software is commonly presented as an activity requiring user consent, therefore device-level filtering would usually be considered opt-in. The choice in this case would be to install the software or not.

As presented by government, the choice to opt-in refers to "opting-in" to adult content, not to opting-in to filtering. Active choice plus means the same as opt-in, but implies that a question will be presented to users (with the default choice being to opt-out of adult content). A mandating of active-choice-plus would presuppose the deployment of network-level controls, as applying this policy at device level would be problematic.

Transparency and accountability

Services that perform this function should also provide:

  • tools to check if sites are blocked (e.g. O2)
  • tools to allow site categorisation to be seen (e.g. O2)
  • publicly documented reasons for categories being subject to blocking
  • provenance of categorisation sources (country, companies)
  • mechanism for anyone (not just customers) to request review of categorisation
  • data retained on deployed blocking and effect
  • independent auditing and reports of the above (e.g. ISP code of practice)
  • support for a universal whitelist of unblockable sites, akin to phones allowing emergency calls



Limitations

  • Underblocking
    • adult content not being blocked due to lack of categorisation or technical limitation
    • https removes the ability to do fine-grained filtering at anything other than device level (absent TLS interception)
  • Overblocking
    • increased use of https removes the ability to do fine-grained network-level filtering; the policy response may result in overblocking
  • Suitability (e.g. material unsuitable for a 12-year-old may be suitable for a 17-year-old)


There are common instances in which users might be surprised at being blocked (overblocking), whether due to unawareness of their provider's blocking criteria, a genuine mistake in site classification, or ambiguity in the classification task (at what point does something become porn, or a hate site?). Without clearly stated criteria for classification, it is difficult to know whether blocks are mis-classifications or not. This may leave filter operators vulnerable to accusations of political motivation for site blocks.

It is also likely that the sources of classification data may not originate in the UK, making it possible that their interpretation of categories differs from UK cultural norms.

Technical implementation


The model appears to be automated site categorisation, with human checking for sites reported as mis-categorised. See, e.g. Symantec RuleSpace.
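The categorise-then-review model can be sketched as follows. This is a minimal illustration only: the keyword rules, category names, and review mechanism are assumptions for the sketch, not a description of how Symantec RuleSpace or any real product works.

```python
# Sketch: automated categorisation with a human review queue.
# Rules and categories below are illustrative assumptions.

KEYWORD_RULES = {          # crude automated classifier
    "casino": "gambling",
    "poker": "gambling",
    "xxx": "pornography",
}

review_queue = []          # sites reported as mis-categorised, awaiting a human

def categorise(domain):
    """Return the first category whose keyword appears in the domain name."""
    for keyword, category in KEYWORD_RULES.items():
        if keyword in domain:
            return category
    return "uncategorised"

def report_miscategorisation(domain, claimed_category):
    """Queue a report for human checking rather than changing the category automatically."""
    review_queue.append((domain, categorise(domain), claimed_category))
```

Note how easily such keyword matching mis-fires: a poker-history site would be filed under "gambling" until a human reviews a report, which is one source of the overblocking discussed above.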

Device-level control

  • Software installed by whoever has administrative control of a device (PC, smartphone)
    • Devices may exist for which no device-level content filtering is available.
  • OS level
  • DNS level
  • Browser level
  • Filtering of TLS-delivered content
    • network/router-level filters are unable to analyse TLS streams
  • lack of a "one-size-fits-all" solution
  • potential exists for cross-industry standardisation in the future
  • supports Web content labelling as a distributed means of classification

Home router-level control

  • Conventionally use DNS-level controls, either via ISP-provided addresses, router-manufacturer-configured addresses, or third-party services selected by the device user.
  • Can, in theory, provide different settings per device.
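Per-device settings at the router can be sketched as a mapping from device identity to filtering profile. All names and addresses here are hypothetical; real routers identify devices in various ways (MAC address is assumed for this sketch).

```python
# Sketch: a home router choosing an upstream DNS resolver per device.
# Resolver addresses and MAC addresses are illustrative assumptions.

FILTERED_RESOLVER = "203.0.113.53"   # hypothetical filtering ("family") resolver
OPEN_RESOLVER = "203.0.113.54"       # hypothetical unfiltered resolver

DEVICE_PROFILES = {
    "aa:bb:cc:dd:ee:01": "child",
    "aa:bb:cc:dd:ee:02": "adult",
}

def resolver_for(mac):
    """Pick an upstream resolver based on the device's profile.

    Unknown devices default to the filtered resolver, on the assumption
    that failing closed is the safer default for a child-protection tool.
    """
    profile = DEVICE_PROFILES.get(mac, "child")
    return OPEN_RESOLVER if profile == "adult" else FILTERED_RESOLVER
```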

DNS-level control

  • Can be potentially enabled at device, router, or network level.
  • Doesn't require any additional software at the user end.
  • Use of the DNS standard lowers the barrier to entry for market competition.
  • Various methods exist for bypass, so ineffective against a sufficiently motivated "policy violator".
    • in some cases this can be as simple as reconfiguring a device to always use alternative resolver addresses
  • examples of commercial products
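The mechanism, and its trivial bypass, can be sketched as follows: a filtering resolver answers queries for blocked domains with the address of a block page rather than the real address, so switching a device to an unfiltered resolver restores access. Domains and addresses are illustrative only.

```python
# Sketch: DNS-level blocking via a filtering resolver.
# All domains and IP addresses below are illustrative (documentation ranges).

BLOCKLIST = {"adult.example", "gambling.example"}
BLOCK_PAGE = "192.0.2.1"            # address serving the "this site is blocked" page

REAL_ADDRESSES = {                  # stand-in for genuine upstream resolution
    "adult.example": "198.51.100.7",
    "news.example": "198.51.100.8",
}

def filtering_resolve(domain):
    """A filtering resolver: blocked domains resolve to the block page."""
    if domain in BLOCKLIST:
        return BLOCK_PAGE
    return REAL_ADDRESSES.get(domain)

def open_resolve(domain):
    """An alternative resolver with no blocklist -- the trivial bypass."""
    return REAL_ADDRESSES.get(domain)
```

This is why DNS-level filtering is ineffective against a motivated "policy violator": nothing stops a device querying `open_resolve` instead, unless the network also blocks traffic to third-party resolvers.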

Network-level filtering

  • Filter provider tied to ISP
    • May not necessarily provide granular control
    • TalkTalk appears to offer time-of-day filter deactivation, analogous to "watershed" in broadcasting.
    • Cannot provide a per-device service when IPv4/NAT is in use
  • DNS based controls
    • DNS-based controls imposed at network level may also imply blocks on third-party DNS, or DPI-based DNS filtering.
    • may be combined with a packet-level block on DNS traffic to third-party DNS servers
  • IP blocking
    • Issues with cloud hosting, where the IP addresses of content can be expected to change automatically over time; for example, the addresses of load balancers used by Amazon-hosted sites.
  • Cleanfeed-style DPI filtering
  • Possible deployment of TLS interception for bypassing https encryption issues.
  • Introduces additional level of complexity / point-of-failure
  • Clear civil liberties issues in the development and deployment of equipment functionally identical to that deployed by regimes that practise political censorship.
    • "crossing the Rubicon into censorship"[1]
  • Issues with choice/competition/liabilities if legally mandated
    • where there is no commercial gain in filter offering, direct investment in filter efficiency is likely to be minimal
  • Inevitability of "feature creep"
    • later requirements that some blocks apply to all, not just opt-in
      • e.g. WikiLeaks type sites, Piracy sites, "hate sites"
      • Blocks of culturally offensive, yet legal, content. Would ISPs choose to block "Danish editorial cartoons"?
    • "emergency" blocking
      • e.g. the ability for the government to block access to Facebook or Twitter, mooted by the PM after the London riots.
    • automated block requests from copyright claimants
    • injunctions/super-injunctions resulting in secret blocking
    • non opt-in filter sets
      • e.g. minimal whitelist-based "essential services" filter as an alternative to DEAct disconnection?
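The Cleanfeed-style approach mentioned above is a two-stage design: a routing stage diverts only traffic bound for IP addresses known to host some blocked content, and a second stage inspects full URLs so that other content on the same host still loads. The sketch below illustrates the two stages with made-up IPs and URLs; it is not the actual BT Cleanfeed implementation.

```python
# Sketch: Cleanfeed-style two-stage filtering.
# Stage 1 diverts only suspect destinations; stage 2 URL-matches diverted
# traffic. Hosts, IPs, and paths are illustrative assumptions.

SUSPECT_IPS = {"198.51.100.10"}                  # hosts carrying some blocked URLs
BLOCKED_URLS = {"http://shared.example/banned/page"}

def stage1_divert(dest_ip):
    """Routing layer: send only traffic for suspect IPs to the URL filter."""
    return dest_ip in SUSPECT_IPS

def stage2_allow(dest_ip, url):
    """URL filter: applied only to traffic stage 1 diverted."""
    if not stage1_divert(dest_ip):
        return True                              # never inspected
    return url not in BLOCKED_URLS
```

The design choice is cost: full DPI on all traffic is expensive, so only the small fraction of traffic to suspect IPs pays the inspection penalty. It also shows the https problem: when the URL is encrypted, stage 2 sees nothing useful and the operator must either pass or block the whole IP.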


Site-level control

Sites such as Google offer adult-content filtering within their own services[2]. Some filters attempt to enforce the use of "safe search" modes by, for example, rewriting search URLs or sending specific headers. However, these techniques are unlikely to work where the sites in question use https transport, unless browser/device-level controls are used (excepting devices that use TLS interception).
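The URL-rewriting technique can be sketched as follows. The `safe=active` query parameter reflects Google's documented SafeSearch switch; how any particular commercial filter performs the rewrite is an assumption here. Over https, a network-level filter cannot see the URL to rewrite it at all.

```python
# Sketch: enforcing safe search by rewriting the search URL in transit,
# as some filters do for plain-http traffic.
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def enforce_safe_search(url):
    """Return the URL with SafeSearch forced on via the safe=active parameter."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["safe"] = "active"        # overwrite any user-supplied value
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For example, `enforce_safe_search("http://www.google.com/search?q=test")` yields a URL carrying both `q=test` and `safe=active`.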



See also