More disinformation and “toxic material” are likely on platforms, Fletcher School professor argues

Meta’s recent announcement that it will discontinue use of third-party fact-checkers on platforms like Facebook and Instagram in the United States has sparked fears of a new era of disinformation on social media. Meta is switching to a “community notes” model like that used on the social platform X, which could lead to “an increase in toxic material,” says Bhaskar Chakravorti, dean of global business at The Fletcher School and coauthor of the forthcoming book Defeating Disinformation with Fletcher Professor of International Law Joel Trachtman.
Here, Chakravorti discusses how the elimination of fact-checking may affect our social media feeds.
How effective has third-party fact-checking been—and is there any truth to the claims that it’s been politically biased, as Meta CEO Mark Zuckerberg has said?
There’s absolutely no question that it’s been effective in labeling and stopping some egregious disinformation, but it is far from perfect. There are so many ways in which problematic content is created and spread, and the sheer volume of it makes it hard to chase it all down.
Has it been disproportionately in one direction? Yes. And is it because of political bias? Not necessarily. That’s because a larger share of the questionable content has been coming from the right, and so a disproportionate amount of what ends up getting checked is also on the right. That’s just the reality.
Will crowdsourcing “community notes” be an effective replacement?
The jury is still out on that. If you look at platforms like Wikipedia or Reddit, there is evidence that with appropriate systems in place, a community can be effective in correcting misinformation.

When it comes to social media, however, the community notes model at X has failed completely. There are many reasons for that, not least that the owner of X, Elon Musk, is himself the largest disseminator of disinformation. We did a study showing that after he took over X, the volume of anti-LGBTQ and antisemitic content and other hate speech spiked.
Even putting aside mega users like Musk, however, community notes haven’t been very effective on X. One problem is that the community interventions often happen too slowly and miss the window when a problematic post is most viral. In other instances, community corrections haven’t been as reliable as those from institutional fact-checkers.
How do you think this change will affect the experience of users on social media platforms?
To gauge the impact of these changes, it is useful to keep them in context. Technically, a rather small proportion of Facebook’s total user base will be affected by this change, because it applies only to the United States, which accounts for only about 6% of Facebook users worldwide.
While we’ll see more problematic content, both because of the abandonment of third-party fact-checking and because of the emboldening of extreme right-wing groups as a second, more muscular Trump administration gets under way, we are unlikely to see toxic material on Meta platforms on anything like the scale we see on X. An overflow of problematic content would cause advertisers to flee, which would be an issue for Meta as a publicly traded company. I don’t think Meta’s executives are going to let things go that far. Mark Zuckerberg also doesn’t have the same kind of media persona that Elon Musk has, which could limit the amount of toxic material inspired by egregious actions from the top of the organization.
With the global reach of social platforms, how might the elimination of fact-checking affect users in other countries?
The challenge is that there is a patchwork of requirements that vary from country to country. I don’t know if Meta will give up fact-checking in other countries, but if it does, the EU is going to be much more disciplined in policing and enforcing its laws. It will levy some fines on the company when this happens, but a lot of those fines may amount to little more than a rap on the knuckles.
A bigger problem is that once you start detaching yourself from third-party fact-checkers in the United States, where the bulk of Facebook’s fact-checking money is spent, the biggest source of revenue for those fact-checking organizations goes away, and the quality of fact-checking will fall everywhere. In other parts of the world, there will be a ton of quite toxic material, which could potentially lead to violence targeting already disadvantaged communities and other bad outcomes.
All over the world, elections are being contested, civil wars are being fought, and minorities are being harassed, and in each case people are using social media platforms to communicate. The danger of dismantling even the currently inadequate guardrails on social media platforms is that disinformation will keep ballooning in parts of the world that have the least societal and institutional safeguards.
How do you think this might affect the social media market? Will users continue to move to Bluesky and other alternative platforms?
Bluesky got a boost from multiple factors, including people desperately looking for an alternative to X since the election. We’ve seen this in every election: depending on which party wins, the “digital opposition” on the other side gets a boost. Right-wing outlets grew after Biden’s election, and now Bluesky has already reached 26 million users, which is impressive.
However, for a platform such as Bluesky to be viable, it needs to get much larger and benefit from network effects, in which the utility of a platform to each user grows as more users accumulate on the same platform. If the level of toxicity on Facebook, Threads, or Instagram doesn’t reach the level of X, then we may not see a critical mass of people leaving the Meta platforms.
What kind of solutions do you propose in your book to the problem of disinformation more generally?
Here’s the idea behind our book: We are attempting to tackle the problem of disinformation, a phenomenon that is borderless because it spreads across global platforms, yet we are trying to corral it through regulators in individual countries. The regulators’ authority and oversight are limited to their own jurisdictions. As a result, there is a mismatch between the global scale of the problem and the limited geographic reach of the regulatory tools.
This is a difficult problem, but we derive hope from the fact that we’ve dealt with similar global problems before, such as financial corruption, terrorism, and pandemics, and we have found ways to devise systems that work across much of the world. Our book is predicated on learning from those efforts to combat the scourge of disinformation and strike a balance between defending free expression and preserving free societies.