When Twitter banned more than 70,000 traffickers of false information from its platform in the wake of the violence at the U.S. Capitol on Jan. 6, 2021, the impact went beyond the silencing of those users.
A study co-authored by UC Riverside public policy and political science scholars, published June 5 in the journal Nature, found that the crackdown by Twitter (now called X, after billionaire Elon Musk acquired it in late 2022) also significantly reduced the number of misinformation posts from users who stayed on the platform but had been following those who were kicked off.
Additionally, the study found that many of the misinformation traffickers, including those who posted QAnon conspiracy theories, left Twitter of their own accord after the massive de-platforming, which included the banning of then-President Donald Trump.
“There was a spillover effect,” said Kevin M. Esterling, a UCR professor of political science and public policy and a co-author of the study. “It wasn’t just a reduction from the de-platformed users themselves, but it reduced circulation on the platform as a whole.”
It was the first time such an effect had been shown, he said.
The researchers analyzed a panel of about 550,000 Twitter users in the United States who were active during the 2020 election cycle. The panel was assembled by David Lazer, the study's corresponding author and a professor of political science and computer and information science at Northeastern University in Boston.
A research team from Lazer’s laboratory collected posts through Twitter’s application programming interface, or API, a set of programmatic tools that let researchers query the platform and gather tweets and other information about its users. The users in the panel were verified as real people by cross-referencing them with voter registration records.
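For illustration only, here is a minimal Python sketch of pulling one panel member’s recent tweets through the platform’s v2 API with the tweepy library. The article does not describe the exact endpoints, fields, or access tier the research team used, and the bearer token and user ID below are placeholders.

```python
import tweepy

# Placeholder credentials: real access requires an approved developer account.
client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

def fetch_recent_tweets(user_id: str, n: int = 100):
    """Pull up to n recent tweets for one panel member via the v2 API."""
    resp = client.get_users_tweets(
        id=user_id,
        max_results=min(n, 100),                  # the API caps one page at 100 tweets
        tweet_fields=["created_at", "entities"],  # "entities" carries any URLs in the tweet
    )
    return resp.data or []

# Example (hypothetical numeric user ID):
# tweets = fetch_recent_tweets("123456789")
```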
The analysis found that those who had followed one or more of the 70,000 de-platformed accounts had tweeted URLs (web addresses) known to disseminate misinformation more frequently than others in the panel.
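As a rough illustration of how such URL sharing can be measured, the sketch below checks a tweet’s links against a curated list of low-credibility domains. The domain list and function name are hypothetical; the article does not say which list the researchers relied on.

```python
from urllib.parse import urlparse

# Illustrative placeholder standing in for a curated list of low-credibility domains.
LOW_CREDIBILITY_DOMAINS = {"example-fake-news.com", "another-dubious-site.net"}

def shares_misinformation_url(tweet_urls: list[str]) -> bool:
    """Return True if any URL in the tweet points to a flagged domain."""
    for url in tweet_urls:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in LOW_CREDIBILITY_DOMAINS:
            return True
    return False
```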
The research also identified about 600 “super sharers” of misinformation in the panel, users in the top 0.1 percent of misinformation sharers in the months leading up to the Jan. 6 insurrection. Their ranks dropped by more than half after the de-platforming. Similarly, the number of QAnon sharers in the panel fell from about 650 to roughly 200 within two weeks of the de-platforming.
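One plausible way to operationalize a “top 0.1 percent” cutoff is a simple quantile over per-user misinformation tweet counts, sketched below with pandas. The column names and toy data are illustrative, not the authors’ code or data.

```python
import pandas as pd

# Hypothetical panel data: one row per user with a count of misinformation
# tweets in the pre-January 6 window (column names are illustrative).
panel = pd.DataFrame({
    "user_id": ["u1", "u2", "u3"],
    "misinfo_tweets": [0, 4, 120],
})

# The 99.9th percentile marks the top 0.1 percent of sharers.
threshold = panel["misinfo_tweets"].quantile(0.999)
super_sharers = panel[panel["misinfo_tweets"] >= threshold]
print(len(super_sharers), "users above the super-sharer cutoff")
```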
When monitoring their platforms, social media companies face a tradeoff between private economic interests and the public interest, said Diogo Ferrari, co-author of the paper and a UCR assistant professor of political science. Fake news posts increase engagement, which helps a platform’s bottom line. But curbing it “is good for democracy and democratic governance,” he said.
The study’s title is “Post-January 6th deplatforming reduced the reach of misinformation on Twitter.” In addition to Esterling, Ferrari, and Lazer, its co-authors are Jon Green of Duke University and Stefan McCabe of the Institute for Data, Democracy and Politics at the George Washington University.
More information: David Lazer et al., “Post-January 6th deplatforming reduced the reach of misinformation on Twitter,” Nature (2024). DOI: 10.1038/s41586-024-07524-8. www.nature.com/articles/s41586-024-07524-8
Provided by University of California – Riverside