Stephen Kinsella

Is Ofcom about to delay action on fake and anonymous accounts until 2027?

Online safety legislation in the UK has been a slow process. After steady progress from Green Paper in 2017 to White Paper in 2019, draft legislation finally emerged in 2021. This was followed by pre-legislative scrutiny and a stop-start legislative process before the Act finally passed towards the end of 2023. When Theresa May’s government stated the aim in 2019 of making the UK “the safest place in the world to be online”, it felt like there was a genuine chance for the UK to deliver a world first - yet by the time the Online Safety Act (OSA) finally passed into law, the European Union’s own online safety rules had been on its statute books for almost a year.

 

All of this meant that when, in February 2022, Clean Up The Internet celebrated the “user verification duty” being announced as a late addition to the Bill, our expectations about the pace of change had already been adjusted. We were delighted to see measures, broadly in line with our own proposals, on the face of the Bill, and the direction of travel this would set. But we were realistic that this wouldn’t mean an instant change for users, despite the political promises. Platforms, given their resistance to user verification measures up to then, were unlikely to respond to this direction of travel by implementing their own measures voluntarily ahead of time. A regulatory regime would take time to set up.

 

Even so, if we’d been told in February 2022 that by the end of 2024 platforms would still not be required even to offer their UK users options to verify their identity, and that UK users would still not have any more tools to spot and avoid fake and anonymous accounts than they did three years earlier, we’d have been disappointed. And if we’d been told, as we neared the end of the third year since the user verification duty was announced, that it could be another three years before it was implemented, we’d have been dismayed – as would those politicians who devoted so much effort to enacting the legislation.

 

The slow, stop-start legislative process did at least mean that Ofcom had ample time to prepare for its new role. Ofcom’s leadership itself talked up its efforts to hit the ground running on recruitment and preparatory thinking. The delay in commencement of the legislation at least meant that the first draft codes would be out for consultation “very shortly after commencement” rather than “within 100 days”, because Ofcom had “had more time to prepare”. So when the Online Safety Act achieved Royal Assent in October 2023, we dared hope that, with the legislation passed, the pace of change might pick up a bit, and that perhaps by the end of 2024 users could be starting to see some tangible improvements in their online experience.

 

Sadly, that’s not quite how 2024 has turned out. Ofcom did indeed put out its draft illegal content codes for consultation within days of commencement. But whilst they contained some strong analysis of the risk factors driving illegal activity on social media, the proposed codes themselves were desperately underwhelming. The consultation ran to hundreds of pages - easy for the vastly resourced major platforms to engage with, but a tall order for charities, small businesses, or victims' groups. But whilst the wordcount was high, the ambition was low. Amongst the more glaring gaps were fake and anonymous accounts. These were acknowledged by Ofcom to be a “stand out” risk factor, associated with a very long list of “priority offences” including terrorism, harassment, hate offences, sexual exploitation and abuse, fraud, and foreign interference. Yet the draft codes of practice failed to propose significant measures to address this, limiting themselves to a rather modest measure aimed at addressing impersonation of notable figures or brands under “paid verification” schemes.

 

Ofcom has, to its credit, acknowledged some of the criticism of its proposals’ timidity and the slow pace of change. It has responded by stating its intent to “iterate up” its illegal content codes quickly, with a consultation on additional measures in 2025. But it has also indicated that we should not expect to see many changes from the draft put out to consultation when it publishes the first version of these codes, which are due to be published imminently and come into force in March 2025. That means UK social media users can only expect their experience in 2025 to change in line with the modest requirements of the first version of the codes. There will be no requirement in 2025 for platforms to offer all their UK users options to verify their identity, to see who is and who isn’t verified, or to filter out interaction from non-verified accounts. When it comes to fake and anonymous accounts, UK users in 2025 won’t have much more information or tools at their disposal to protect themselves than they did three years ago.

 

So when is the earliest that UK users could now benefit from options to verify their identity, and from more information and options to avoid fake and anonymous accounts? Ofcom’s justification for failing even to explore a measure requiring platforms to offer user verification in its illegal content proposals was that the OSA requires the measure only for Category One platforms, which it would come to in “phase 3” of implementation. In other words, Ofcom is leaving until last in the queue a risk factor which it has itself acknowledged "stands out" as likely to cause most harm. At that point, Ofcom’s “roadmap to regulation” set out that it was planning to consult on phase 3 in 2025, with the measures coming into force in 2026 at the earliest. Last month, Ofcom announced changes to the roadmap which included a year’s delay to phase 3, meaning consultation “up to a year later than originally planned” in 2026 and measures not in force before 2027. And who can be confident that even that date will not slip further?

 

We argued on this site, and also in our formal response to Ofcom’s consultation, that it made no sense not to consider a measure in the illegal content codes with the potential to address a “stand out risk factor” for priority offences, on the grounds that Ofcom would be getting to it at a later date for a small subset of platforms. Delaying the measure would mean the harms enabled by fake and anonymous accounts - including priority illegal offences - would continue to impact UK users. Limiting the measure only to Category One platforms, rather than any platform where fake and anonymous accounts were a relevant risk factor in illegal behaviour, would limit the protection that UK users would ever experience. This was already a bad outcome for UK social media users under the previous timetable. A further 12-month delay makes it even worse.

 

The glimmer of hope was that at the same time as announcing the delay to phase 3, Ofcom also announced a “plan to launch a further consultation which builds on the foundations established in the first Codes in spring 2025”. Given that Ofcom’s original justification for failing to consider a measure in the illegal content codes was so weak, and that phase 3 is now subject to further delays, including a user verification measure amongst those consulted on in spring 2025 would seem logical.

 

The content of the spring 2025 consultation has not yet been confirmed, and is perhaps still to some extent up for grabs. However, the mood music is not encouraging. Southport MP Patrick Hurley, who has good reason to be concerned about fake and anonymous accounts after what happened in his constituency this summer, tabled a parliamentary question asking directly “whether Ofcom's consultation on additions to the Illegal Content Codes will include those accounts.” The government’s answer was not quite so direct, but pointed towards the phase 3, “Category 1” route for introducing verification measures, albeit with an ever-so-slightly more optimistic interpretation of the timetable as “late 2025”. Our own private conversations with Ofcom officials have also indicated that they are doubling down on their view that the best place to consider user verification is in phase 3, for Category One platforms, even though phase 3 is being further delayed.


The human cost of a further 12 months of delay would be significant. UK users would face another year without options to block out hate and harassment from anonymous accounts. Organised crime would continue to operate networks of fake accounts to target UK users with scams, and extremists and hostile foreign states could continue to use networks of fake accounts to interfere in UK democracy or whip up division and hatred. The role which fake accounts played in spreading false information and whipping up hatred during the 2024 summer disorder could be repeated in summer 2025, or summer 2026.

 

There would also be a risk to the credibility of Ofcom, and of the Online Safety Act regime. Fake and anonymous accounts are one of the most well-understood examples of a problematic design feature, and user verification measures have long been shown to enjoy public support as a way to reduce the risks - including in Ofcom’s own research. If UK users are continuing to experience the same problems with fake and anonymous accounts several years after the Online Safety Act came into force, it may bring into question whether the Act, or its enforcer, is fit for purpose. DSIT’s recently published draft Statement of Strategic Priorities talks of a need to “work at pace”, and for Ofcom to prioritise “safety by design” - Ofcom may struggle to explain how a further considerable delay to measures which would address fake and anonymous accounts fits with this steer.

 

Ofcom has yet to make its final announcement on the first illegal content codes, or on what further measures it intends to consult on next spring. We’ll be watching those announcements carefully, and will continue to urge Ofcom to implement user verification measures without further delay.
