For the first time since Elon Musk’s takeover of X (formerly Twitter), the social media giant has published a new Global Transparency Report offering insight into its internal enforcement methods. It is the company’s first such report since December 2021, shortly before Musk’s purchase of the platform in October 2022. The report lays out X’s moderation philosophy and presents data covering the first half of 2024.
Released today, the 15-page report highlights a considerable evolution in the platform’s content moderation approach. According to Mashable, X now prioritizes a method it calls “restriction over removal,” which curbs the reach of problematic posts rather than deleting them outright. Between January and June 2024, X processed more than 224 million user reports, suspended more than 5 million accounts, and removed more than 10 million posts.
Restriction Over Removal
Before the acquisition, Twitter’s reporting process was more open: the company had published biannual reports through its Transparency Center since 2012, detailing its enforcement mechanisms. That practice stopped after Musk’s takeover. In his early days at the helm, Musk openly criticized the role of government in social media, particularly regarding restricted access to internal data and research. The new report, however, signals a return to transparency, though with certain limitations.
The platform sums up its new stance with the catchphrase “Freedom of Speech, not Freedom of Reach.” The report states that “we base our policies and enforcement strategies on human rights… we restrict post reach only where justified, to help lessen discoverability rather than removing them.” This approach reflects a growing reliance on limiting visibility rather than removing content, in line with X’s stated goal of encouraging free expression while preserving safety standards.
Enforcement Data & Suspensions
The report presents detailed enforcement figures across several policy areas, including child safety, abuse and harassment, platform manipulation, and suicide prevention. Despite the emphasis on restriction, the data shows X still suspends accounts at scale: more than 5 million accounts were suspended during this period. The report’s concept of “rehabilitation” for offending users remains unclear, with no particular explanation provided, though past reinstatements of controversial figures suggest X has moved away from indefinite bans.
Child Safety: A Critical Focus
A major part of the report covers X’s efforts to combat child sexual abuse material (CSAM). In the first half of 2024, X reported 370,588 cases of child exploitation to the National Center for Missing and Exploited Children (NCMEC), a striking increase over preceding years. X (then Twitter) reported 86,000 cases to NCMEC in 2021, rising to 98,000 in 2022. After X broadened its enforcement criteria to suspend account holders who interact with CSAM content, even by “liking” or sharing it, the figure leapt to 870,000 in 2023.
An X spokesperson explained that the increase reflects a new, more aggressive methodology for detecting and suspending users who handle CSAM-related media. The platform has also introduced preemptive defenses to discourage future offenses. “There has been a rise in enforcements following these changes… the effect of these changes has been to deter users from either seeking CSAM or transmitting it,” the spokesperson said.
Government Requests & Content Removal
Another important area the report examines is government data and removal requests. Transparency around government influence on content moderation had been a key element of Twitter’s earlier reports. Its last pre-Musk report, from 2021, showed that Twitter received 11,460 information requests from 67 countries and complied with 40.2% of them.
By 2024, those figures had grown substantially. X reports receiving more than 18,000 government requests for user information and 72,000 content removal requests, from countries it does not disclose. The platform shared information in 52% of cases and complied with 70% of removal requests. This rise in government engagement marks a significant shift in the company’s willingness to comply with regulatory demands.
AI, Safety, and New Features
The report coincides with X’s quiet moves to strengthen its security and safety teams ahead of the approaching election season. The platform is also experimenting with generative AI and positioning the technology as a content moderation tool. Under Musk’s direction, X is combining proactive defenses with adaptive tools, aiming to balance transparency, security, and free expression.
Not all changes have been well received. Musk’s recent announcement that the site’s block feature will be removed has drawn backlash from users worried it could exacerbate online harassment. The first transparency report of Musk’s tenure offers a view of X’s evolving moderation policies but leaves many questions unanswered, especially about what it means to “rehabilitate” suspended accounts. Still, the report establishes a statistical baseline for X’s limited transparency and raises hope for routine enforcement data releases in the future.