Stephen Kinsella

First thoughts on the revised Online Safety Bill

Updated: Apr 11, 2022

The publication of the revised Online Safety Bill last Thursday was an important milestone. It has taken quite a while to get here - almost three years since the publication of the Online Harms White Paper in April 2019, which itself came almost two years after the 2017 Internet Safety Strategy green paper.


Overall, we think it’s been worth the wait. The revised Bill appears to be a significant improvement on the earlier draft, taking on board many of the recommendations made during pre-legislative scrutiny. This demonstrates the value of giving parliament proper time for that scrutiny, and the government’s receptiveness to parliamentarians’ feedback reflects well on the ministerial team and civil servants at DCMS.


The revised Bill itself runs to 225 pages, and there are an additional 126 pages of explanatory notes. Partly because the government has opted to incorporate many revisions whilst retaining the architecture of the original draft, it is not the easiest document to digest. So at this point we will only offer some initial commentary, focusing on the sections most relevant to our own proposals for measures to tackle the misuse of anonymity to abuse other users or spread disinformation.


We are pleased to see that the government has followed through on its recent promise to include measures to tackle misuse of anonymity. Section 14, in Part 3 of the revised Bill, introduces a “User Empowerment Duty” requiring the largest (Category 1) platforms to offer their users options to “filter out non-verified users”. Section 57, in Part 4, introduces a “User Verification Duty”, requiring Category 1 platforms to “offer all adult users of the service the option to verify their identity”. Section 58 requires Ofcom to produce guidance on how the platforms can comply with this duty.


This is a very significant step forward, although some questions about the details remain.


The rationale for separating the two duties regarding anonymity into different parts of the Bill is not immediately obvious, and the Explanatory Notes are silent on the point. One consequence of the separation is that the “filtering” duty, situated in Part 3, is nested within the same framework as the other “Duties of Care” and is therefore covered by a Code of Practice. The “User Verification Duty”, on the other hand, is situated in Part 4 and is instead covered by a separate, standalone piece of “guidance”. We would like to understand better the reasons for this, and how the two will work together in practice.


Section 57(2) states that “the verification process may be of any kind (and in particular, it need not require documentation to be provided)”. We assume that the government’s intention here is to make clear that it does not seek to impose a one-size-fits-all approach. This wording should also reassure those who’ve raised accessibility concerns on behalf of users who do not possess ID documents. We strongly support allowing for flexibility, innovation, and choice in how verification is implemented, including approaches which do not rely on ID documents. However, flexibility will need to be combined with minimum standards to ensure that verification is meaningful. Platforms mustn’t be allowed to claim that any old mechanism constitutes “verification”. At the moment the Bill offers no definition at all of what constitutes “verification”, and platforms could try to exploit this to maintain their current approaches.


We think this is a realistic concern because, for example, Twitter has recently and repeatedly claimed that a two-factor login process by email or SMS already counts as “verification”. Its representative told the Home Affairs Select Committee on 8 September 2021 that users “have to verify to get on the service” - a claim which we debunk here.


The Online Safety Bill will therefore need to give Ofcom the power to set independent standards for what counts as “verification” for the purposes of compliance with the User Verification Duty. The simplest way to do this would seem to be to strengthen Section 58, regarding Ofcom’s guidance, to stipulate that Ofcom must set and enforce minimum standards for verification. In addition, a basic definition of “verification” could be added to the list of terms defined in Section 189.


Another consideration, which could also be addressed through the Ofcom guidance, is the role of third-party verification providers. We see many advantages in promoting the growth of a market in third-party providers, in terms of innovation, user choice, and competition, and UK providers are well placed to develop such a market. There will be similar benefits for Age Verification processes.


The “filtering duty”, in Section 14, does not explicitly state that a user’s verification status should be visible to other users. This is an important part of “user empowerment”, as it would give all users more information to help them assess the reliability of a particular account. The current situation in Ukraine brings home the importance of reining in the ability of foreign government-backed operatives to create fake accounts which purport to be UK-based in order to spread disinformation. The government may envisage that this could be covered in the Code of Practice, but we see no reason at present not to include it on the face of the Bill.


Finally, we understand from recent conversations with DCMS that the government has chosen to apply the new duties only to Category 1 platforms because it considers this more proportionate. Our starting point remains that a right to verify, and to avoid non-verified users, should be a core safety feature of any platform - and something that new platforms are obliged to consider from day one. However, if the government is committed to applying it only to Category 1 platforms, then it needs to improve the definition of Category 1. At present, Category 1 is defined purely by size. If it is to serve its stated purpose of ensuring that the platforms posing the greatest risk are subject to the greatest regulatory supervision, it should be predicated on size or risk, not size alone.


These questions and concerns are important, but they are the kind of thing we would expect to be identifying at this stage of the legislative process. Assuming that parliamentarians maintain the rigorous scrutiny they have brought to bear so far, and that the government continues to be open to feedback, we are confident these issues can be ironed out as the Bill progresses through parliament.
