Stephen Kinsella

First thoughts on yesterday’s announcement re: Online Harms

We welcome the government’s full response to the Online Harms Consultation, which was finally published yesterday afternoon. Whilst we are still awaiting important details, including the exact timetable for the legislation, it is encouraging that the government has signalled its intention to bring the era of self-regulation by the large social media platforms to an end. The platforms have had long enough to clean themselves up voluntarily and have consistently failed to do so. Self-regulation has not worked, and the result has been harm to many individuals and a damaging degradation in the quality of online discussion and debate.


We’re pleased that Ofcom has finally been confirmed as the intended new regulator, allowing it to begin preparing for the task. We’re also pleased that the regulator will focus on the platforms’ design, systems and decision-making processes, rather than seeking to regulate individual pieces of content.


We think the proposal for two “tiers” of regulation, with higher “category 1” obligations reserved for the biggest platforms with the biggest reach, could make sense. Those platforms which are, in practice, an important part of the public sphere should be held to the very highest standards. If a marginalised group is excluded or silenced on one of these platforms, that is more serious because they are being shut out of part of the democratic public sphere. If a debate on one of these very large platforms is distorted by disinformation, that is more serious because the distortion reaches people at greater scale and risks undermining the democratic process. However, we have questions about the criteria for deciding which platforms fall into “category 1”, and a concern that those in “category 2” should still be held to a reasonably high standard.


We have big questions about the reliance on social media companies’ own terms and conditions when it comes to addressing “legal but harmful” content. Secretary of State Oliver Dowden argues that giving Ofcom powers to ensure that platforms apply their T&Cs consistently means an end to them “marking their own homework”. However, this is not meaningful unless there is also proper regulatory oversight of the standard of those T&Cs. The process of defining which categories of “legal but harmful” content the largest platforms must address would need to be sufficiently ambitious, and underpinned by evidence and expertise. Ofcom would need robust powers to assess the rigour and scope of these T&Cs, and to require improvement should they fall short – otherwise we will simply have shifted from the platforms “marking their own homework” to them “setting their own homework” instead.


We welcome the emphasis on the importance of safeguarding freedom of expression. Requiring platforms to implement their moderation policies consistently and transparently, and to offer users a genuine right of appeal, has the potential to improve both the quality of moderation and public trust. The one weakness in the current approach to freedom of expression is that consideration appears only to be given to safeguarding the freedom of expression of a platform’s current users. In the case of the “category 1” platforms, surely there is a case for a more rounded consideration of the impact of their decisions on freedom of expression more generally – including who is able to participate on the platform and who is excluded or silenced. To be excluded from such a platform due to the prevalence of, say, sexist abuse has freedom of expression implications too.


As Oliver Dowden announced his plans in parliament yesterday, MPs from all the major parties raised with him the question of how the proposals would address the ways in which anonymity and identity deception can enable and fuel online harms. Several MPs gave specific examples of how they themselves, or their constituents, had directly experienced the link between anonymity and abuse. Whilst the Secretary of State promised to “keep an open mind” on this issue, it was disappointing that he didn’t go further and accept that the current laissez-faire approach needs to change. There’s overwhelming evidence that misuse of anonymity drives online harms – but there’s also a strong case that these harms could be tackled without “banning” the legitimate uses of anonymity which Mr Dowden highlighted.


Anonymity and identity deception are root problems which fuel many of the other harms the government is seeking to tackle – from fake accounts amplifying disinformation, to fake product reviews boosting online scams, to anonymous trolls launching racist or misogynist abuse. Platforms’ T&Cs are much less enforceable when, in the absence of any verification measures, a banned user can simply return with a fresh account. All this might mean Oliver Dowden doesn’t need to explicitly accept the case for tackling abuse of anonymity at this stage – because any regulator worth its salt would identify this as something platforms need to be managing better. However, it would be much simpler if he were to acknowledge it as exactly the kind of risk factor which effective regulation should be seeking to address. From that point of view, the sooner the bill is published and pre-legislative scrutiny can begin, the better.
