Authors:
(1) Amin Mekacher, City University of London, Department of Mathematics, London EC1V 0HB, UK (this author contributed equally);
(2) Max Falkenberg, City University of London, Department of Mathematics, London EC1V 0HB, UK (this author contributed equally; corresponding author: max.falkenberg@city.ac.uk);
(3) Andrea Baronchelli, City University of London, Department of Mathematics, London EC1V 0HB, UK, and The Alan Turing Institute, British Library, London NW1 2DB, UK (corresponding author: abaronchelli@turing.ac.uk).
Table of Links
Acknowledgements, Data availability, and References
Deplatforming, or banning malicious accounts from social media, is a key tool for moderating online harms. However, the consequences of deplatforming for the wider social media ecosystem have been largely overlooked so far, due to the difficulty of tracking banned users. Here, we address this gap by studying the ban-induced platform migration from Twitter to Gettr. With a matched dataset of 15M Gettr posts and 12M Twitter tweets, we show that users active on both platforms post content similar to that of users active on Gettr but banned from Twitter, although the latter show higher retention and are 5 times more active. We then reveal that matched users are more toxic on Twitter, where they can engage in abusive cross-ideological interactions, than on Gettr. Our analysis shows that the matched cohort are ideologically aligned with the far-right, and that the ability to interact with political opponents may be part of the appeal of Twitter to these users. Finally, we identify structural changes in the Gettr network preceding the 2023 Brasília insurrections, highlighting how deplatforming from mainstream social media can fuel poorly regulated alternatives that may pose a risk to democratic life.
Social media has always been controversial, with constant debate over which content should be permitted, which should be banned, and the conditions under which a user should be deplatformed for breaking the rules [1, 2]. Particularly since the January 2021 US Capitol insurrection, the deplatforming question has become a cornerstone of polarized public discourse, with major social media companies facing increased pressure to deplatform malicious users [3].
Removing malicious accounts from social media helps protect other users and limits the spread of content with the potential to cause harm [4, 5]. The scientific literature supports this view, showing that many harmful communities are no longer active on mainstream platforms; these groups previously thrived by posting hate speech or conspiracy theories [4, 6–17], with their dense interaction networks facilitating broad reach for their content [18].
However, the benefits of deplatforming for users on a mainstream platform do not account for the impact of these moderation policies on the wider social media ecosystem. Specifically, it remains unclear how banning accounts from one platform may drive migrations to weakly regulated and poorly monitored fringe alternatives where violent narratives may develop and thrive [9, 19–24]. This is in large part because data permitting the cross-platform tracking of social media users is rarely available, particularly following account suspensions.
In this paper, we present a unique dataset which addresses this gap, focusing on a matched cohort of users who migrated from Twitter to Gettr, a Twitter clone that has attracted many of Twitter’s most high-profile suspended accounts, including US congresswoman Marjorie Taylor Greene, media executive Steve Bannon, and conspiracy theorist Alex Jones.
Our dataset captures the near-complete evolution of Gettr from its founding in July 2021 to May 2022, including 15M posts from 785,360 active users who have posted at least once. Of these users, 6,152 are verified, 1,588 of whom self-declare as active on Twitter (see Methods). For these 1,588 self-declared Twitter users with a verified Gettr account, we download their Twitter timelines from July 2021 to May 2022, totalling 12M tweets and retweets. These users represent the “matched” cohort, with analysis of their Gettr posts (Twitter tweets) referred to as “matched Gettr” (“matched Twitter”) below. For the remaining verified Gettr users, we use the Twitter API to identify accounts which have been suspended from the platform, assuming accounts share the same username on both platforms; the resulting 454 accounts constitute the “banned” cohort. Finally, all remaining users who are not verified on Gettr form the “non-verified” cohort.
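To make the cohort definitions concrete, the sketch below illustrates the assignment logic described above. It is an illustrative reconstruction, not the authors’ pipeline: the helper `twitter_account_suspended` and the field names are assumptions standing in for the Twitter API lookup and the Gettr profile metadata.

```python
def twitter_account_suspended(username: str) -> bool:
    """Stub for a Twitter API lookup (e.g. the v2 GET /2/users/by/username
    endpoint, which returns an error object for suspended accounts)."""
    raise NotImplementedError("replace with a real Twitter API call")


def assign_cohort(user: dict) -> str:
    """Assign a Gettr user to one of the cohorts defined above."""
    if not user["gettr_verified"]:
        return "non-verified"
    if user["declared_twitter_handle"]:
        # Self-declared Twitter account in the Gettr profile;
        # this user's Twitter timeline is downloaded separately.
        return "matched"
    # The same username is assumed on both platforms.
    if twitter_account_suspended(user["username"]):
        return "banned"
    return "verified-unmatched"
```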
In the remainder of the paper, we overview account activity and retention on Gettr, showing that the banned cohort are 5 times more active than the matched cohort. Despite this, we show that the two cohorts are structurally mixed on Gettr, sharing the same politically homogeneous audience and posting similar content. Using the matched cohort’s tweets, we show that Gettr is broadly representative of the US far-right, and that matched users are more toxic on Twitter than on Gettr. Finally, we highlight Gettr’s global impact, outlining the structural changes in the Portuguese-language Gettr network that emerged in the run-up to the January 2023 riots in Brazil.
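As a hedged illustration of the activity comparison just mentioned, a figure like “5 times more active” can be obtained by comparing posts per active user across cohorts. The column names below are assumptions about a post-level table, not the paper’s actual schema.

```python
import pandas as pd

def posts_per_active_user(posts: pd.DataFrame, cohort: str) -> float:
    """Mean number of posts per active user in a given cohort.

    Assumes a post-level DataFrame with "cohort" and "user_id" columns.
    """
    sub = posts[posts["cohort"] == cohort]
    return len(sub) / sub["user_id"].nunique()

# Example: relative activity of the banned vs. matched cohorts, e.g.
# posts_per_active_user(posts, "banned") / posts_per_active_user(posts, "matched")
```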
This paper is available on arxiv under CC BY 4.0 DEED license.