David Babbs

"Rocket boosters" for extremists - how social media fuelled recent far-right disorder in the UK, and some suggestions for how to prevent this in the future

It’s been widely recognised that social media was a significant contributory factor in the UK’s recent spate of far-right riots. The Home Secretary, Yvette Cooper, told BBC Breakfast on 5 August that “Social media acted as a rocket booster behind both the spread of misinformation and the organisation of this violence”. The general public appear to agree - a YouGov poll found that over 70% of Britons believe social media companies did a bad job of tackling misinformation during the riots.


Whilst the government’s immediate focus has understandably been on action under existing laws to prevent further disorder, understanding how these “rocket boosters” work, and taking action to deny far-right and other extremist groups access to them in the future, is surely just as important in the longer term.


Clean Up The Internet has conducted some initial analysis of how social media fuelled the riots. We find that, over the long term, social media platforms have enabled shadowy groups to build audiences receptive to extreme ideas and narratives, and to form connective tissue between individuals who share a far-right worldview: loose digital networks built through social media pages, groups, and channels. This created a context in which social media’s short-term impact - spreading the lies that triggered the violence, and providing tools to organise riots - was all the more devastating. What follows applies most clearly to far-right groups, particularly in light of the riots, but is equally applicable to other extreme and unscrupulous political groupings.


We identify some design features common to social media platforms, which have been particularly important to the far-right. These include:


  • Lax content moderation, which allowed inflammatory false content, hate speech and incitement to be published, with platforms either failing to remove it at all or removing it only after it had spread significantly - such as the claim that the Southport killer was a Muslim, an illegal immigrant, and was called Ali Al-Shakati.

  • Platform recommendation systems, which scaled up the reach and impact of false, hateful and inflammatory material by promoting it to more users via features such as news feeds, “for you”, “trending in the UK”, and “top comments”.

  • Anonymous and fake accounts. The ease with which networks of fake accounts can be created enables the rapid seeding and spreading of disinformation and divisive content, whether by far-right groups, or foreign states wishing to sow discord. At the same time, the ability to post anonymously increased impunity for authentic far-right activists, with anonymous accounts able to promote disorder with a lower risk of being traced by the police.

  • Private groups/channels/pages, which served to draw more users into the far-right ecosystem, creating extremist echo chambers. The closed nature of these groups made it harder to monitor what was happening, and provided spaces where targets, locations, and timings for protests and violence could be shared.

  • Access to accounts for high-reach, far-right “influencers” with a record of posting hateful and inflammatory content and disinformation. X has drawn the most criticism, for its decisions to re-platform far-right figures who had previously been banned, such as “Tommy Robinson” and Andrew Tate. But smaller, less mainstream platforms have continually hosted such figures, and other mainstream platforms such as Facebook, Instagram, YouTube and TikTok have all been found to host accounts with large followings and a record of sharing extreme-right content.


We find that these design features have been exploited by a range of bad actors, including:


  • UK-based social media users with a racist/anti-immigration/far-right agenda

  • Non-UK based social media users with a racist/anti-immigration/far-right agenda

  • Non-UK based networks of accounts linked to foreign governments, with an agenda of interfering in UK politics/society

  • Non-UK based networks of accounts with an agenda of making money through monetising fake/junk news through advertising


We set out a number of actions which platforms could perfectly well be taking right now to address these risks. However, it should by now be clear to government and regulators that platforms will not take adequate steps voluntarily and that regulatory action is required.


We therefore consider how these problems can be tackled through regulation. We find that some could be tackled under the 2023 Online Safety Act - or at least they could be, if the regulator were willing to move more quickly and to make full use of the powers it has been given.


Given that in practice the regulator has been so slow and cautious, we suggest that the new Secretary of State, Peter Kyle, issue a new, clear strategic steer to Ofcom to up the pace and ambition of its enforcement. We suggest he push Ofcom to take actions such as:


  • Strengthening the first versions of its Illegal Content Codes of Practice to include more measures targeting foreign interference and hate crimes, and mitigating the harms associated with “stand-out” risk factors such as anonymous and fake accounts

  • Making use of their new information gathering powers to launch an investigation into triggers and enablers of the disorder, including an assessment of different platforms’ responses and the effectiveness with which they’ve removed illegal content and/or applied their own rules

  • Expediting the launch of the Committee on Disinformation, of which very little has been heard since the OSA was passed

  • Revisiting the proposed Criteria for “Category One” services, to ensure that more of the OSA’s protections are applied to smaller, higher risk, platforms which are an important component of the far-right’s toolset


However, whilst these actions would undoubtedly help, we consider them unlikely to fully address the problems with social media platforms which the recent disturbances have highlighted. A major reason for this is that the previous Conservative government significantly watered down the Online Safety Act during its passage. This included weakening its ability to address disinformation and the cumulative harmful impact of non-criminal content, and to require platforms to make themselves safe by design. This was, rightly, strongly criticised by Labour at the time.


That means that alongside making full use of the existing legislative framework, it is essential that the Labour government urgently review what further legislative measures may be needed in the medium term. Our report suggests this could include considering whether additional legislation is needed to:

  • Place an overarching duty (as was the original intent of those who proposed the OSA) on platforms to make their platforms safe by design, and to address design features (such as recommendation systems or fake accounts) which create risk, rather than relying too heavily on content moderation

  • Bring disinformation and content which is harmful due to its cumulative effect more fully into scope of the Act

  • Ensure rules are sufficient for smaller-but-high-risk platforms

  • Improve transparency and access to data for independent researchers

  • Remove the “safe harbour” for platforms which are unsafe and causing harm, whether or not they are following the letter of Ofcom’s codes of practice

  • Clarify that the purpose of the Act’s “User Identity Verification Duty” is to reduce harm from fake and anonymous accounts, and that Ofcom must set minimum standards for how it is implemented to ensure it fulfils this purpose

  • Give Ofcom sufficient powers to require that platforms address risks arising from their use of recommender algorithms and associated content recommendation features

  • Enable Ofcom to set minimum standards for platforms’ Terms of Service, as opposed to simply requiring that platforms enforce whatever terms they’ve chosen to adopt, which fails to account for circumstances where, say, a malign billionaire is running a platform

  • Give Ofcom additional, court-supervised powers to act in emergency situations – such as the riots – including to compel companies to implement specific measures on a temporary basis to tackle specific threats to public order or public safety.



The full report is available here:


