Stephen Kinsella

Some reflections on Freedom of Expression, regulation of social media platforms, and anonymity

Freedom of Expression is a fundamental Human Right. Social media platforms can contribute greatly to Freedom of Expression, by enabling open exchange of ideas and information at an unprecedented scale. Clean Up The Internet is particularly interested in the huge contribution which social media can make to the health of the public sphere and democracy. However, we are concerned that in practice Freedom of Expression on social media currently faces a number of significant threats. These threats, and a serious unevenness in the extent to which Freedom of Expression is enjoyed by different individuals, are important facets of a worrying degradation of online democratic debate.


So far, the most widely identified and discussed challenge to Freedom of Expression on social media relates to the removal of content, and to bans or suspensions of high-profile individuals. The removal of Donald Trump from all the largest platforms is the latest example. Clean Up The Internet agrees that removals like these raise important concerns. The decision to remove content or ban an individual from a debate is a serious step. Whilst in the case of Donald Trump there was plenty to justify such action, the terms and conditions cited in support of such bans are not applied transparently or consistently. More fundamentally, significant decisions of this nature should not sit entirely in the hands of unaccountable (and in the case of the UK public sphere, foreign-owned) corporations.


We therefore welcome the proposals in the government’s Full Response to the Online Harms White Paper to introduce regulatory oversight of the largest platforms’ terms and conditions, and more routes of redress for users on the receiving end of sanctions. We await further detail of how this oversight will work in practice, in particular how Ofcom will assess whether those terms and conditions are adequate and comprehensive - but these are definitely steps in the right direction.


However, being banned, or having your content removed, is only one form of exclusion from social media which currently threatens Freedom of Expression. Individuals from all walks of life, but disproportionately those from marginalised or vulnerable groups, are currently less able to participate and express themselves freely on these same platforms, due to high levels of online intimidation and abuse. This is having a significant silencing effect. This restriction on Freedom of Expression may be less visible than that which arises due to an active decision to remove a piece of content or ban a user. However, it is at least as serious, and the evidence suggests it affects more people.


A study conducted by Glitch and the End Violence Against Women Coalition during summer 2020 found that abuse directed at women appears to have worsened during the pandemic. An overwhelming majority of respondents said they had modified their behaviour online following incidents of online abuse, with as many as 82.5% of Black or minoritised respondents reporting this impact. Research by Amnesty International in 2018 included polling which found that 78% of British women didn’t believe Twitter to be a place where they could share their opinion without receiving violence or abuse. In 2019, several female MPs cited social media abuse as a factor in their decision to step back from politics.


Some users’ “free speech”, and the way in which the design and operation of platforms amplifies such speech, can have the cumulative impact of excluding or silencing other users. This is manifestly detrimental to the Freedom of Expression of affected individuals. It’s a particularly serious problem on the largest platforms, because these are important parts of the public sphere, and because the disproportionate impact on vulnerable groups perpetuates their under-representation in democratic debate.


We therefore think it’s important that the forthcoming Online Safety Bill aims to protect and enhance Freedom of Expression online for everyone. This means attention needs to be paid to the tricky questions and trade-offs posed by the way one person’s “freedom” can adversely impact others. Large platforms should be required by the regulator to act with a view to ensuring that Freedom of Expression is enjoyed equally - including by those with protected characteristics.


In part, this will have to be addressed through the way in which decisions regarding the removal of individuals and/or content are regulated. The largest platforms should be required to demonstrate to the regulator that they are taking a rounded approach to Freedom of Expression, with due regard to the impact of not acting as well as of acting, and with due regard to the impact on the Freedom of Expression of potential users or those on the receiving end of harmful content.


However, it should also be possible to reduce the need for these difficult trade-offs, which arise from content or user removal, by focusing more closely on the overall design and operation of social media platforms. Tackling the risk factors which fuel harmful behaviour in the first place, or strengthening other users’ ability to protect themselves from it, would reduce reliance on moderation and user bans, with their inherent trade-offs.


Social media platforms’ current laissez-faire approach to anonymity and identity deception demonstrates how redesigning social media to reduce risk factors could lessen reliance on content moderation. At present, misuse of anonymity is a major contributory factor in abuse, trolling, bullying, harassment, and hateful disinformation, which in turn harm other users’ Freedom of Expression. Ofcom could require platforms to take a more proactive approach to preventing misuse of anonymity, alongside protecting its legitimate uses.


We suggest that this should include measures such as offering all users a meaningful option for verifying their identity; making it transparent to all users who is verified and who isn’t; and giving verified users more options to manage the level of interaction they have with unverified users, including the ability to block them as a category. Design-level changes of this nature would reduce the silencing effects of harmful behaviour, without removing a single additional piece of content or user. Important legitimate uses of anonymity, for example by a whistleblower or by someone fleeing domestic abuse, would be protected. You can read more about how misuse of anonymity currently fuels online harms, and how this could be tackled, in this report which we published last year.
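By way of illustration only - this is a hypothetical sketch, not any platform’s actual design or API, and every name in it is invented - the last of these measures, letting a user filter out unverified accounts as a category, might look something like this at the level of an individual user’s feed:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    verified: bool  # whether this account has completed an identity verification step

@dataclass
class Post:
    author: Account
    text: str

@dataclass
class Preferences:
    hide_unverified: bool = False  # user's choice to "block unverified accounts as a category"

def visible_posts(feed: list[Post], prefs: Preferences) -> list[Post]:
    """Return only the posts this user has chosen to see.

    Verification status is carried on every post, so it can be displayed
    to the reader whether or not they choose to filter.
    """
    if prefs.hide_unverified:
        return [post for post in feed if post.author.verified]
    return list(feed)

# Example: a user who opts to see only verified accounts
feed = [
    Post(Account("@anonymous_account", verified=False), "unverified post"),
    Post(Account("@verified_account", verified=True), "verified post"),
]
prefs = Preferences(hide_unverified=True)
print([p.author.handle for p in visible_posts(feed, prefs)])  # ['@verified_account']
```

The point of the sketch is that the filter operates on the reader’s own feed: nothing is removed from the platform, the unverified account remains free to post anonymously, and users who do not opt in to the filter see everything as before.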


Clean Up The Internet will be making the case for measures such as these to be within the scope of the Online Safety Bill as it passes through parliament in the coming months. It’s welcome that multiple parliamentary committees are currently considering issues relating to Freedom of Expression online. We have recently made submissions to the Joint Committee on Human Rights and the House of Lords Communications and Digital Committee. You can read our full submissions here.


Submission to the House of Lords Communications and Digital Committee:


Submission to the Joint Committee on Human Rights:




