Clean up the Internet’s thinking about online anonymity, and the role its misuse plays in undermining online discourse, is informed by a wide range of academic research. Below is a non-exhaustive list of academic works exploring the relationship between anonymity, inauthentic accounts, the lack of effective verification, and misinformation and disinformation. We summarise each piece and include some short quotes.
Where a full version of the article is available for free online, we include a direct link. Where the article is paywalled, we include the Digital Object Identifier.
We first published this list in October 2019, and it was last updated in January 2022. We’d hugely welcome other relevant research being brought to our attention.
Please see also our companion piece covering research relating to anonymity, online disinhibition, abuse, incivility and trolling.
Cloaked Facebook pages: Exploring fake Islamist propaganda in social media
Johan Farkas, Jannick Schou, Christina Neumayer
New Media & Society, Volume 20, Number 5 (2018), pp. 1850-1867
This research analyses “cloaked” Facebook pages, created to spread political propaganda by imitating the identity of a political opponent in order to spark hateful and aggressive reactions. It looks at Danish Facebook pages disguised as radical Islamist pages, which provoked racist and anti-Muslim reactions as well as negative sentiments towards refugees and immigrants in Denmark in general.
The “cloaked pages” were inauthentic pages purporting to be from radical Islamists, but actually authored by Islamophobic provocateurs. They issued provocations such as calling for sharia law, mocking Danish customs, and celebrating images of a burning Danish flag. An example of a fake post read: “We Muslims have come here to stay. WE have not come here for peace but to take over your shitty infidel country”. On Facebook itself, these posts were widely shared, generating significant outrage and provoking significant expressions of hostility towards Muslims. The pages also received national media coverage, and provoked an expression of outrage from a Danish member of parliament.
“Although all news media questioned the authorship of the Facebook pages, most Facebook users who shared or commented on these pages’ posts assumed the originators were radical Islamists”.
“A key strategy for disseminating hate speech online is to hide or disguise the underlying intentions - both to avoid detection and appeal to a large audience”.
“The cloaked Facebook pages became sites of aggressive posting and reaction through comments, producing a spectacle of hostility. The page administrators created this hostility through new aggressive posts, and users maintained and reproduced this hostility through their reactions. The user-generated stream of information was based on aggressive and violent disinformation through the cloaked Facebook pages and fueled antagonistic reactions, contributing to neo-racism in Denmark."
News sharing on UK Social Media - misinformation, disinformation and correction survey report
Andrew Chadwick and Cristian Vaccari
Loughborough University, 2019
The focus of this report is the habits and attitudes of UK social media users in relation to misinformation, based on public opinion research conducted by Opinium.
Some striking results include:
- 42.8 percent of news sharers admit to sharing inaccurate or false news.
- 17.3 percent admit to sharing news they thought was made up when they shared it. These users are more likely to be male, younger, and more interested in politics.
- A substantial amount of the sharing of inaccurate or made-up news on social media goes unchallenged. Fewer social media users (33.8 percent) report being corrected by other social media users than admit to sharing false or exaggerated news (42.8 percent). And 26.4 percent of those who shared inaccurate or made-up news were not corrected.
- Those who share news on social media are mainly motivated to inform others and express their feelings, but more civically-ambivalent motivations also play an important role. For example, almost a fifth of news sharers (18.7 percent) see upsetting others as an important motivation when they share news.
The authors note that the behaviour of sharing an inaccurate piece of content online occurs in the same disinhibited context as other forms of social media interaction:
“In social media interactions, anonymity or pseudonymity are widespread, or people use their real identities but have weak or no social ties with many of those with whom they discuss politics. As a result, when interacting on social media, people are generally more likely to question authority, disclose more information, and worry less about facing reprisals for their behaviour. The fact that many social media users feel less bounded by authority structures and reprisals does not necessarily lead to democratically undesirable interactions. Social media environments encourage the expression of legitimate but underrepresented views and the airing of grievances that are not addressed by existing communicative structures. However, social media may afford a political communication environment in which it is easier than ever to circulate ideas, and signal behavioural norms, that may, depending on the specific context, undermine the relational bonds required for tolerance and trust."
Suspicious Election Campaign Activity on Facebook: How a Large Network of Suspicious Accounts Promoted Alternative Für Deutschland in the 2019 EU Parliamentary Elections
Trevor Davis, Steven Livingston, and Matt Hindman
George Washington University, 2019
This report contains a detailed analysis of the ways in which networks of suspicious Facebook accounts promoted the German far-right party Alternative für Deutschland during the May 2019 EU parliamentary elections. It identifies extensive use of apparently inauthentic accounts, at a point when Facebook had claimed that this problem had been addressed.
The authors identify two distinct networks of inauthentic accounts. The first was used to create a false impression of credibility for AfD pages by artificially boosting their followers. Of the second, they write:
"The second network we identified is more concerning. It is a network comprised of highly active accounts operating in concert to consistently promote AfD content. We found over 80,000 active promotional accounts with at least three suspicious features. Such a network would be expensive to acquire and require considerable skill to operate. These accounts have dense networks of co-followership, and they consistently “like” the same sets of individual AfD posts. Many of the accounts identified share similar suspicious features, such as two-letter first and last names. They like the same sets of pages and posts. It is possible that this is a single, centrally controlled network. Rates of activity observed were high but not impossible to achieve without automation. A dexterous and determined activist could systematically like several hundred AfD posts in a day. It is less plausible that an individual would do so every day, often upvoting both original postings of an image and each repost across dozens of other pages. This seems even less likely when the profile’s last recorded action was a post in Arabic or Russian. Additionally, we found thousands of accounts which: - Liked hundreds of posts from over fifty distinct AfD Facebook pages in a single week in each of ten consecutive weeks. - Liked hundreds of AfD posts per week from pages they do not follow. Automated accounts are the most likely explanation for these patterns. The current market price of an account that can be controlled in this way is between $8 and $150 each, with more valuable accounts more closely resembling real users. In addition to supply fluctuations, account price varies according to whether the account is curated for the country and whether they are maintained with a geographically specific set of IP addresses, if they have a phone number attached to them (delivered with the account), and the age of the account (older is more valuable). Even if the identified accounts represented the entire promotional, purchasing this level of synthetic activity would cost more than a million dollars at current rates. Data collection from Facebook is limited, making it difficult to estimate the size of the network or the scale of the problem. Accounts in our dataset had persisted for at least a year."
“THE RUSSIANS ARE HACKING MY BRAIN!” Investigating Russia's Internet Research Agency Twitter tactics during the 2016 United States presidential campaign
Darren L. Linvill, Brandon C. Boatwright, Will J. Grant, Patrick L. Warren
Computers in Human Behavior, Volume 99, October 2019, pp. 292-300
This is a detailed study of the methods employed by the “Internet Research Agency”, an apparent arm of the Russian state, during the 2016 US presidential election. It describes the extensive use of false identities and anonymous accounts to disseminate disinformation. The authors detail fake accounts, run out of Russia, which purported to be local news sources with handles like @OnlineMemphis and @TodayPittsburgh. Others purported to be local Republican-leaning US citizens, with handles like @AmelieBaldwin and @LeroyLovesUSA, and yet others claimed to be members of the #BlackLivesMatter movement with handles such as @Blacktivist.
“Here we have demonstrated how tools employed by a foreign government actively worked to subvert and undermine authentic public agenda-building efforts by engaged publics. Accounts disguised as U.S. citizens infiltrated normal political conversations and inserted false, misleading, or sensationalized information. These practices create an existential threat to the very democratic ideals that grant the electorate confidence in the political process."
"Our findings suggest that this state-sponsored public agenda building attempted to achieve those effects prior to the 2016 U.S. Presidential election in two ways. First, the IRA destabilized authentic political discourse and focused support on one candidate in favor of another and, as their predecessors had done historically, worked to support a politically preferred candidate (Shane & Mazzetti, 2018, pp. 1–11). Second, the IRA worked to delegitimize knowledge. Just as the KGB historically spread conspiracies regarding the Kennedy assassination and the AIDS epidemic, our findings support previous research (Broniatowski et al., 2018) that IRA messaging attempted to undermine scientific consensus, civil institutions, and the trustworthiness of the media. These attacks could have the potential for societal damage well beyond any single political campaign."
Falling Behind: How social media companies are failing to combat inauthentic behaviour online
Sebastian Bay and Rolf Fredheim, NATO Strategic Communications Centre of Excellence (NATO STRATCOM)
November 2019
This report details a successful attempt by researchers to purchase inauthentic social media activity. The experiment was conducted between May and August 2019, and tested the platforms’ various claims that inauthenticity is largely a historical problem which they have now tackled. The authors conclude that the platforms’ claims to have tackled inauthentic activity have been exaggerated and that independent regulation is required. They write:
"To test the ability of Social Media Companies to identify and remove manipulation, we bought engagement on 105 different posts on Facebook, Instagram, Twitter, and YouTube using 11 Russian and 5 European (1 Polish, 2 German, 1 French, 1 Italian) social media manipulation service providers. At a cost of just 300 EUR, we bought 3 530 comments, 25 750 likes, 20 000 views, and 5 100 followers. By studying the accounts that delivered the purchased manipulation, we were able to identify 18 739 accounts used to manipulate social media platforms. In a test of the platforms’ ability to independently detect misuse, we found that four weeks after purchase, 4 in 5 of the bought inauthentic engagements were still online. We further tested the platforms ability to respond to user feedback by reporting a sample of the fake accounts. Three weeks after reporting more than 95% of the reported accounts were still active online Most of the inauthentic accounts we monitored remained active throughout the experiment. This means that malicious activity conducted by other actors using the same services and the same accounts also went unnoticed. While we did identify political manipulation—as many as four out of five accounts used for manipulation on Facebook had been used to engage with political content to some extent—we assess that more than 90% of purchased engagements on social media are used for commercial purposes. Based on this experiment and several other studies we have conducted over the last two years, we assess that Facebook, Instagram, Twitter, and YouTube are still failing to adequately counter inauthentic behaviour on their platforms. Self-regulation is not working. The manipulation industry is growing year by year. We see no sign that it is becoming substantially more expensive or more difficult to conduct widespread social media manipulation. In contrast with the reports presented by the social media companies themselves, our report presents a different perspective: We were easily able to buy more than 54 000 inauthentic social media interactions with little or no resistance. Although the fight against online disinformation and coordinated inauthentic behaviour is far from over, an important finding of our experiment is that the different platforms aren’t equally bad—in fact, some are significantly better at identifying and removing manipulative accounts and activities than others. Investment, resources, and determination make a difference. Recommendations: -Setting new standards and requiring reporting based on more meaningful criteria -Establishing independent and well-resourced oversight of the social media platforms -Increasing the transparency of the social media platforms -Regulating the market for social media manipulation
#IStandWithDan versus #DictatorDan: the polarised dynamics of Twitter discussions about Victoria’s COVID-19 restrictions
Timothy Graham, Axel Bruns, Daniel Angus, Edward Hurcombe and Sam Hames
Media International Australia 2021, Vol. 179(1) 127–148
This research by Australian academics looks at two interrelated hashtag campaigns concerning the Victorian Premier, Daniel Andrews of the Australian Labor Party, and the Victorian State Government’s handling of the COVID-19 pandemic in mid-to-late 2020. It examines how a small number of hyper-partisan pro- and anti-government campaigners were able to mobilise ad hoc communities on Twitter and influence the broader debate.
The researchers examine 396,983 tweets sent by 40,203 accounts between 1 March 2020 and 25 September 2020 containing the hashtags “#IStandWithDan”, “#DictatorDan” or “#DanLiedPeopleDied”. This included a qualitative content analysis of the top 50 most active accounts (by tweet frequency) for each of the three hashtags, including an attempt to determine which accounts represented real, authentic users, and which were “sockpuppet” accounts, defined as “an account with anonymous and/or clearly fabricated profile details, where the actor(s) controlling the account are not identifiable.”
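As a rough illustration of this ranking step (not the authors’ actual pipeline or code), the sketch below filters a tweet table by hashtag and study window and returns the most active accounts by tweet frequency. The DataFrame columns and the tiny example dataset are hypothetical.

```python
import pandas as pd

# Hypothetical tweet-level table; column names and the tiny dataset are
# illustrative, not the authors' actual data.
tweets = pd.DataFrame({
    "author": ["user_a", "user_a", "user_b"],
    "created_at": pd.to_datetime(["2020-04-01", "2020-05-02", "2020-08-10"]),
    "text": ["#IStandWithDan", "#IStandWithDan again", "#DictatorDan is trending"],
})

def top_active_accounts(df, hashtag, n=50):
    """Return the n accounts that tweeted a given hashtag most often
    within the study window (1 March to 25 September 2020)."""
    in_window = df["created_at"].between("2020-03-01", "2020-09-25")
    has_tag = df["text"].str.lower().str.contains(hashtag.lower(), regex=False)
    return (df[in_window & has_tag]
            .groupby("author")
            .size()
            .sort_values(ascending=False)
            .head(n))

print(top_active_accounts(tweets, "#IStandWithDan"))
```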
The researchers identify significant numbers of “sockpuppets” across the debate, with 54% of the top 50 accounts (by tweet frequency) using the anti-government hashtags identified as “sockpuppets”, and 34% of those using the pro-government hashtag. In the case of the anti-government hashtag campaign, they find “evidence that the broader adoption and dissemination of language targeting Andrews is driven at least in part by coordinated and apparently inauthentic activity that amplifies the visibility of such language before it is adopted by genuine Twitter users.”
Notably, the researchers identify “only a vanishingly small number of likely bots”, which they define as “entirely automated” accounts. In other words, the inauthentic amplification they identify is driven by real human beings exploiting the ease of creating anonymous accounts on the platform.
“A more likely explanation, and one also in keeping with our observations of the greater percentage of fabricated sockpuppet profiles among the most active accounts in the anti-Dan hashtags, is that the fringe activists promoting the #DictatorDan and #DanLiedPeopleDied hashtags have engaged in the deliberate creation of new, ‘fake’ accounts that are designed to generate the impression of greater popular support for their political agenda than actually exists in the Victorian population (or at least in its representation on Twitter), and to use these fabricated accounts to fool Twitter’s trending topic algorithms into giving their hashtags greater visibility on the platform. By contrast, the general absence of such practices means that #IStandWithDan activity is a more authentic expression of Twitter users’ sentiment.”
“Overall, then, the flow patterns we observe with the anti-Dan hashtags should more properly be described as follows:
- An undercurrent of antipathy towards the pandemic lockdown measures circulates on Twitter;
- Mainstream and especially conservative news media cover the actions of the Victorian state government from a critical perspective;
- Some such reporting is used by anti-Andrews activists on Twitter to sharpen their attacks against Andrews (see, for example, the Yemini tweet shown in Figure 4), but in doing so, they also draw on pre-existing memes and rhetoric from other sources (including the Sinophobic #ChinaLiedPeopleDied), and adapt these to the local situation;
- Such rhetoric is circulated by ordinary users and their hyper-partisan opinion leaders on Twitter, amplified by spam-like tweeting behaviours and purpose-created sockpuppet accounts, and aggregated by using anti-Dan hashtags such as #DictatorDan and #DanLiedPeopleDied as a rallying point;
- This content is in turn directed at news media, journalists, and politicians (as Table 1 shows) in the hope that it may find sympathy and endorsement, in the form of retweets on Twitter itself or take-up in their own activities outside of the platform (including MP Tim Smith’s Twitter poll, in Figure 1);
- And such take-up in turn encourages further engagement in anti-Dan hashtags on Twitter, repeatedly also pushing them into the Australian trending topics list”
Influencers, Amplifiers, and Icons: A Systematic Approach to Understanding the Roles of Islamophobic Actors on Twitter
Lawrence Pintak, Brian J. Bowe, and Jonathan Albright
Journalism & Mass Communication Quarterly, July 2021. https://doi.org/10.1177%2F10776990211031567
This study analyses the anti-Muslim/anti-immigrant Twitter discourse surrounding Ilhan Omar, who successfully ran for Congress in the 2018 US midterm elections.
The research examines the clusters of accounts posting tweets that contained Islamophobic or xenophobic language, or other forms of hate speech, regarding Omar and her candidacy. It identifies three categories of Twitter accounts involved in the propagation of Islamophobic rhetoric - “Influencers”, “Amplifiers”, and “Icons” - and explores their respective roles.
“Influencer” accounts were defined as those linked to the anti-Omar Islamophobic/hate speech content which were scored highly by the PageRank algorithm, a link analysis algorithm widely used to assess the influence of webpages. “Amplifier” accounts were defined as those which ranked highly when measured by weighted out-degree, i.e. by the sum of their retweets, replies, tags and mentions which linked Islamophobic/hateful content back to Omar. “Icons” were defined as the accounts with the most followers, generally high-profile figures such as celebrities, politicians, sports stars, or accounts linked to major news organisations.
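These three measures are standard network metrics, so they can be sketched with off-the-shelf tools. In the sketch below, the toy interaction graph, node names, and follower counts are invented for illustration and are not drawn from the study’s data; it simply shows PageRank for “Influencers”, weighted out-degree for “Amplifiers”, and follower counts for “Icons”.

```python
import networkx as nx

# Toy directed interaction graph: an edge u -> v with weight w means account u
# retweeted/replied to/mentioned account v a total of w times. Node names and
# follower counts are invented for illustration only.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("amp_1", "influencer_1", 40),
    ("amp_2", "influencer_1", 35),
    ("amp_1", "influencer_2", 10),
    ("icon_1", "influencer_1", 1),
])
followers = {
    "influencer_1": 90_000,
    "influencer_2": 20_000,
    "amp_1": 150,
    "amp_2": 80,
    "icon_1": 2_000_000,
}

# "Influencers": accounts scored highly by PageRank on the interaction graph
pagerank = nx.pagerank(G, weight="weight")
# "Amplifiers": accounts with the highest weighted out-degree
out_degree = dict(G.out_degree(weight="weight"))

def top(scores, k=3):
    """Return the k highest-scoring account names."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

print("Influencers:", top(pagerank))
print("Amplifiers:", top(out_degree))
# "Icons": accounts with the most followers
print("Icons:", top(followers))
```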
The researchers found that “Influencer” accounts were generally authentic and identifiable. The top influencer accounts helped shape the discourse, producing a large quantity of original material. For example, the account of a professional conservative provocateur, @LauraLoomer, “dominated the Islamophobic Twitter narrative around Omar” and “seeded the narrative with posts that were widely retweeted.”
The most significant “Amplifier” accounts, on the other hand, were found to be mostly inauthentic. Of the top 40 Amplifiers spreading Islamophobic/xenophobic messages linked to Omar’s election campaign network, the researchers determine that only 11 were authentic accounts.
The "Icon" accounts had an impact on the discourse through the size of their follower account, despite a very low number of tweets about Omar. The researchers conclude that they "played virtually no role in the overarching anti-Muslim narrative of these two candidates".
In other words, the Islamophobic/xenophobic discourse was largely driven by a "handful of Influencers— in this case, agents provocateurs— [who] were responsible for authoring, or giving initial impetus to, the majority of the offensive tweets", who were mainly not anonymous. This information was "then relayed to the broader Twitter universe by a larger, but still finite, network of Amplifiers, many of which were either identified as a form of bot or showed signs of the kind of “coordinated inauthentic activity” that characterise bots."
"These inauthentic accounts represent hidden forces, which have a real effect on the discourse, serving as automated megaphones that, in the case of anti-Muslim and xenophobic hate speech, transform the Twitter “dialogue” into a one-way monologue of hate. Together, these shadowy accounts function to poison the political narrative, drawing in both likeminded and unsuspecting individuals who retweeted their posts, disproportionately amplifying—and, for some, normalizing—the message of intolerance"
"Just because a large proportion of the tweets studied here were artificial does not mean they were inconsequential. Rather, they played an important role in distorting online civic discourse, in part when journalists and interested members of the public interacted with this material."