Last week Glitch and the End Violence Against Women Coalition published a powerful new report, which should be essential reading for anyone interested in understanding or tackling online harms. “The Ripple Effect: Covid-19 and the Epidemic of Online Abuse” documents the gendered and intersectional nature of online abuse. It provides evidence that the problem has worsened during the pandemic and associated lockdown.
The full report is available to download here. It presents the results of a survey conducted online in June and July 2020, which gathered the real-life experiences of women and non-binary people online during the first few months of lockdown. The report then makes recommendations informed by this collected experience.
Key findings include:
Almost 1 in 2 (46%) women and non-binary people reported experiencing online abuse since the beginning of COVID-19
1 in 3 (29%) of those who had experienced online abuse prior to the pandemic reported it being worse during COVID-19
84% of respondents experienced online abuse from strangers – accounts that they did not know prior to the incident(s). Most of the abuse took place on mainstream social media platforms (Twitter 65%, Facebook 29%, Instagram 18%) despite tech companies’ commitments to making their platforms safe and addressing gender-based and intersectional abuse
Gender was the most frequently cited reason for online abuse, with 48% of respondents reporting that they had suffered gender-based online violence
An important additional insight from the survey is the impact which online abuse has on the freedom of expression of those on the receiving end. An overwhelming majority of respondents said they had modified their behaviour online following incidents of online abuse, with as many as 82.5% of Black or minoritised respondents reporting this impact. Those opposed to greater regulation of tech platforms often raise concerns that efforts to tackle online abuse could pose a threat to freedom of expression. Whilst these concerns are legitimate, this survey demonstrates clearly that a failure to tackle online abuse also poses a threat to freedom of expression – and that this threat lands disproportionately on women and minoritised groups.
The survey claims to be one of the largest of its kind, gathering together the experiences and testimony of almost 500 women and non-binary people. A major advantage of this approach is that it puts the real experiences of real people front and centre, and these experiences then inform the recommendations. The obvious limitation with any such online survey is that respondents are a self-selecting group.
The report’s authors don’t shy away from this limitation – they are very clear that whilst their study is one of the biggest of its kind, research in this area requires far more investment, and far more transparency from the tech companies. Tech companies may be tempted to question Glitch’s findings by questioning how representative they are – but such objections would be more plausible if the tech companies were willing to provide better data themselves on the true scale of online abuse, and how it impacts different minoritised groups. In any case, the findings from this survey are very much in line with findings of previous studies conducted using different methodologies – for example Amnesty International's “Toxic Twitter” report of 2018.
Having presented the findings of the survey, which paint a worrying picture of the scale and impact of online abuse and its disproportionate impact on already minoritised groups, the report goes on to make a range of recommendations covering employers, government, tech companies and civil society. The range of recommendations reflects the fact that there’s no one “magic bullet” solution, and that instead a range of action is needed by a range of actors.
The focus on employers is notable. Glitch make a convincing argument that exposure to online abuse should be considered by employers as a workplace safety issue. This is of even greater importance during the pandemic with many workplaces now more reliant than ever on virtual communication and home working.
The report's recommendations to tech companies are all perfectly reasonable, but the authors could perhaps say a little more about why tech companies have failed to take such apparently reasonable steps up until now. Clean Up the Internet would support voluntary action from tech companies of the kind set out by Glitch. However, tech companies have presided over unacceptable levels of abuse on their platforms for a very long time, despite the vast resources at their disposal. In practice it's hard to imagine them cleaning up their act without some form of government regulation to compel them.
It therefore makes sense that the report devotes the greatest space to recommendations for government – employers and tech companies are much more likely to step up if government creates the right framework. Most striking of all these recommendations is the suggestion that online abuse be treated as a public health problem, and that government therefore implement “a comprehensive public health approach to tackling online abuse”. Hopefully Glitch will expand on this idea in future reports, as it could have considerable potential.
The proposal to treat online abuse as a public health issue is potentially complementary to Carnegie UK’s proposals for a “duty of care” for tech platforms. It also feels complementary to the work which Clean Up the Internet has done looking at the role of the design choices taken by platforms in enabling – or discouraging – online abuse. We have found substantial evidence that tech platforms’ failure to manage identity concealment and anonymity leads to these features being misused and becoming common drivers of online abuse. Glitch’s finding that many survey respondents experienced abuse via social media platforms, from “accounts that they did not know prior to the incident”, is consistent with our own findings.
The challenge with all reports of this sort is ensuring that the evidence and analysis drives real world change. In that respect the report is well timed, as it can only help add to the growing pressure for the government to press on with its Online Harms agenda. It also coincides with a live conversation amongst our neighbours in Europe about similar measures. We look forward to working with Glitch to influence these processes.