“Blue Ticks”, user verification, impersonation, identity concealment and deception have been prominent themes of Musk’s first few weeks in charge of Twitter. These themes are very relevant to Clean Up The Internet’s work, and to the current draft of the Online Safety Bill, given its inclusion of User Verification and User Empowerment duties.
Here we will attempt to summarise the main developments to date, and share some first reflections on potential lessons for the OSB. In our view Musk’s rather erratic approach has reinforced the importance both of the OSB as a whole, and of its specific proposals to set new expectations on platforms regarding verification. Musk has also, inadvertently, provided a helpful test of whether the Bill’s wording sets clear enough principles and minimum standards for how platforms comply.
1. What has happened so far?
Questions about the prevalence of inauthentic accounts, and the implications this might have for the gap between Twitter's claimed and genuine user numbers, were cited as a key sticking point during the delays and disputes which preceded the takeover. After the acquisition was completed, Musk was immediately critical of Twitter's previous approach to verification and Blue Ticks, and then introduced a series of changes in a rapid and chaotic manner.
Musk described Twitter's long-established approach to verification as a "Lords and peasants system". He floated a plan to drop Twitter's practice of restricting access to a Blue Tick - which indicated that an account's identity had been "verified" - to just a tiny minority of "notable" users. Instead he indicated an intention of "rolling out widespread verification", describing this as "power to the people". He proposed to do this by bundling access to a Blue Tick with a revamped Twitter premium subscription service, "Twitter Blue". Initially he proposed to charge $20 per month, but after some online haggling with the author Stephen King settled on $8 per month.
Within days, a series of changes to Twitter's processes were implemented. A wider range of users were offered access to a Blue Tick, via a subscription to Twitter Blue - although the new option was limited to users with Apple devices, who could subscribe through the iOS Twitter app. The process of subscribing and paying for Twitter Blue did not change: no additional processes or checks were introduced to confirm a subscriber's identity.
This meant that in practice the meaning of the Blue Tick had changed. Previously it had indicated that Twitter considered an account to be authentic, and that (albeit in a somewhat opaque manner) Twitter had taken steps to satisfy itself of this. Now it simply indicated that Twitter had received a payment of $8 for a month's subscription to Twitter Blue. This change in the meaning of the Blue Tick was not acknowledged or publicised - on the contrary, Twitter described an account in possession of the new version of the Blue Tick as "verified because it's subscribed to Twitter Blue". To add to the confusion, accounts that had acquired a Blue Tick under the previous system, i.e. on the basis of notability and authenticity, retained their Blue Tick.
Conflating a symbol which indicated authenticity and notability with one which merely reflected payment of an $8 subscription opened up opportunities for bad actors wishing to exploit the situation, and pranksters wishing to highlight its absurdity. Inauthentic accounts sporting a Blue Tick appeared, impersonating a vast range of figures and brands, including Pope Francis, George W Bush, Elon Musk himself, Nintendo, Lockheed Martin, and Musk's own companies Tesla and SpaceX.
Musk announced a rapid succession of new policies and initiatives in response to the proliferation of high profile inauthentic accounts with Blue Ticks. These included:
Threatening an immediate "ban" for any account engaged in impersonation - although there was no evidence of this being consistently enforced, and any enforcement efforts would presumably have been hampered by the extensive lay-offs of support staff that Musk had also implemented.
Introducing a note indicating whether an account held a "legacy" Blue Tick based on authenticity and notability, or was a Twitter Blue subscriber, visible if a user clicked on another user's tick.
Introducing a second "Grey Tick" to indicate whether a notable account was "official" - a series of contradictory statements were issued as to whether it would be rolled out, and to whom. At one point, accounts which had previously held a "notable and authentic" Blue Tick and were now subscribed to Twitter Blue (e.g. Elon Musk) had both a Grey Tick and a Blue Tick.
Suspending the availability of Twitter Blue and therefore of new Blue Ticks.
On Thursday 10th November, @EliLillyandCo, an account sporting a Blue Tick and purporting to belong to the US pharmaceutical company Eli Lilly and Company, announced that it would cease charging for its insulin products. This forced the real company to issue a correction and apology. Over the following day, its stock price fell by over 4%.
On Friday 11th November, Twitter Blue stopped being available to new users. On Monday 13th November, Musk appeared to confirm that it had been suspended pending further changes, tweeting that he was “Punting relaunch of Blue Verified to November 29th to make sure that it is rock solid”. A further tweet on November 22nd, stating that he was “holding off relaunch of Blue Verified until there is high confidence of stopping impersonation”, suggested further delays.
At the time of writing, there is considerable confusion and inconsistency. A Blue Tick can signify either an account which Twitter once considered authentic and notable under the legacy, pre-Musk system, or an account which subscribed to Twitter Blue in the fortnight before the feature was suspended. Some "notable" accounts, including some lawmakers, report having had their ticks rescinded for failing to subscribe to Twitter Blue. Some "notable" accounts have retained a second "official" Grey Tick, but there appears to be little consistency here. For example, amongst UK political journalists, ITV's Robert Peston has an "official" Grey Tick and a Blue Tick, whilst the BBC's Laura Kuenssberg has only a legacy Blue Tick.
2. What might it mean?
What might all of this mean for those of us, like Clean Up The Internet, who have been calling for action from platforms to reduce misuse of anonymous accounts, by offering verification options to all of their users? Some might be tempted to cite the recent chaos on Twitter as evidence that we’re wrong. After all, some of the changes Musk claimed to be introducing bore at least some superficial resemblance to the measures we have been advocating. We think this would be to misread the situation, for several reasons.
Most crucially, Musk's Twitter wasn't actually offering its users a genuine option to verify their identity. As the New York Times put it, having seen "internal documents" in advance of the scheme's launch, "subscribers would not need their identities authenticated to get the check mark". All that the Blue Tick reliably signalled after Musk's changes was that Twitter had received a payment of $8. There was no requirement even that the payment card details match the account's name or handle. The Blue Tick had previously signified that an account holder's identity had been authenticated by Twitter - but Musk was now offering any user the opportunity to purchase the symbol, without the authentication.
Furthermore, there weren't even effective retroactive systems and processes in place. Musk was prompted, apparently by users impersonating "Elon Musk", to announce a zero-tolerance policy towards impersonation. On 8th November, Twitter's then Head of Trust and Safety, Yoel Roth, claimed of impersonators that "when we find them, we'll suspend them. See something that looks off? You can report it directly in the app". This never seemed plausible given the huge reduction in staff capacity available to look into such reports, and two days later, on 10th November, Yoel Roth had also left the company.
In adopting the formulation that an account should be considered "verified because it's subscribed to Twitter Blue", Musk was continuing an established Twitter practice of playing fast-and-loose with definitions of "verification". He was using the term inconsistently, and often in ways which diverge significantly from common understandings. We documented this previously, following the abuse of England footballers after the Euro 2020 final, when Twitter claimed that "ID verification would have been unlikely to prevent the abuse from happening" as 99% of the abusive accounts were "not anonymous". We established that, to make this claim, Twitter had classified any account associated with any email address or phone number as "verified" - so an account with the name "Mickey Mouse" and the email address mickeymouseisnotreallymyname@gmail.com was classified as having been "verified".
It seems pretty clear that Musk's Twitter did not conduct any kind of risk assessment, or seek any advice on the safety implications for different categories of users, before pushing ahead with the changes to the Blue Tick. They certainly did not publish any such assessment or open any such thinking up to public scrutiny. Yoel Roth claimed that adding an $8 fee provided "proof of humanness", and that this would "raise the cost" for spammers, making them "go away". Yet in reality, as WIRED reported, a situation where "anyone can get a blue tick on Twitter without proving who they are" made Twitter a "scammers' paradise".
Scammers with Blue Ticks quickly began impersonating companies' customer services departments and inviting users to share details with them. Disinformation expert and Bellingcat founder Eliot Higgins, who probably knows a bit more about how disinformation networks work than either Musk or Roth, observed that "adding a verified badge to accounts pushing disinformation on a large scale on behalf of state actors makes $8 per account extremely good value for money."
Fundamentally, for all the disruptiveness, Musk's changes to the Blue Tick were executed in a way which displayed a high level of continuity with the way design decisions at big social media platforms have usually been taken over the past two decades. A "move fast and break things" approach was adopted by a leadership which failed to adequately consider risks. The personal preoccupations and experiences of the leadership obscured the needs of a diverse global user base. A desire to create a new revenue stream, by monetising the cachet of having a Blue Tick, trumped any consideration of potential societal impacts. Claims were issued about the safety benefits of the new approach, without any evidence offered to back them up. Policies against bad behaviour were announced, without the processes, systems or staff capacity to enforce them consistently.
Such an approach might be effective and appropriate for a small start-up. It is not suitable for a platform with hundreds of millions of users, and demonstrable potential to cause harm at scale, whether by destabilising democratic processes and institutions, spreading dangerous disinformation, or enabling hateful attacks on minorities. Whether it’s the latest reckless changes to Twitter’s Blue Tick, or the damning findings from the Inquest into the death of Molly Russell, leaving platforms to set their own standards has been shown repeatedly to lead to unacceptable outcomes.
Musk appears keen to double down on this failed approach, to move faster and break even more things. That surely reinforces the importance and urgency of proportionate, independent regulation to require the leadership and ownership of all the platforms to take a more responsible approach. In other words, it reinforces the need for the UK government to press on with the Online Safety Bill.
We expect that parliamentarians will soon recommence their consideration of the Online Safety Bill. When they do so, we hope they will recognise that Musk’s Blue Tick experiments are a timely reminder that the new regulatory regime must set minimum standards for how platforms approach user verification. The Online Safety Bill envisages increased use of both Age Assurance/Verification and Identity Verification. In both cases, to ensure they deliver the intended benefits for end users, we must not leave it to the likes of Elon Musk to define what an acceptable process looks like.
We have welcomed the inclusion of a User Verification Duty (s57) and User Empowerment Duty (s14) within the current draft, but have raised concerns that the wording needs tightening to ensure that platforms offer their users verification options which are meaningful, accessible, and privacy-respecting.
We have suggested that User Verification should be defined, to ensure that platforms don’t play fast-and-loose with how they choose to define it - as Twitter has done, both before and after Musk’s takeover. We have suggested that Ofcom be given a clearer set of instructions, in s58, as to what their guidance to platforms should cover, including principles (such as accessibility, user rights, the promotion of choice and competition) that Ofcom should consider; minimum standards (such as for effectiveness, privacy and security) which Ofcom should set; and other bodies which Ofcom should consult in drawing up such guidance. The full wording of our suggested amendments can be read here.
Musk has inadvertently provided parliamentarians with a test for the legislation: would Musk have been able to claim that selling "Blue Ticks" for $8 satisfied the requirements of the User Verification Duty? Or would Ofcom have had the powers to challenge him and insist on an approach to user verification that prioritised trust, authenticity and safety for end users? If there’s any doubt as to the answer to that question, then adopting amendments along the lines which we suggest could help remove it.