Taking Fake News to Task

In a new Science magazine article, Kelly M. Greenhill and colleagues explore ways to curb the spread of disingenuous information


Every week it seems there are new revelations about how social media have been used to rapidly spread unverified—and often outright false—information. “We’ve witnessed similar phenomena following the introduction of new communications technologies throughout history,” said Kelly M. Greenhill, an associate professor of political science and director of the international relations program. “So the belief that somehow social media would be different was just naiveté—it is distressing, but not remotely surprising.”

She is currently finishing a book on the topic—tentatively titled Fear and Present Danger: Extra-Factual Sources of Threat Conception and Proliferation—and is one of sixteen co-authors of a new article on the science of fake news in Science magazine. The piece stemmed from a conference at Harvard last February on tackling fake news.

Tufts Now spoke with Greenhill about shifts in the media ecosystem, and how it will take an interdisciplinary village to restore trust and promote truth.

Tufts Now: How can researchers study the phenomenon of fake news in a neutral way?

Kelly M. Greenhill: The bad news is that we all bring our worldviews to the table. None of us is neutral: our ideas, experiences, and values influence how we comprehend and interpret the world around us. Some individuals are more wedded to verifiable empirical evidence—facts, as we traditionally understand them—than others, but we are all susceptible to “truthiness” (truth that comes from the gut rather than from books).

If the information on offer fits with what we already believe about the world, if it speaks to our fears and desires, and if we’ve heard it before, we will be more likely to treat it as plausible or true, irrespective of our political leanings, level of education, or socio-economic status.

This finding makes many highly educated individuals uncomfortable. Those who trust in the so-called marketplace of ideas, for instance, believe that if we just get the truth out there and give under-informed people the facts, the rest will follow and falsehoods will be vanquished. But facts alone won’t do the trick. They are important, beyond important—imperative—but insufficient on their own.

The good news is that we can establish a set of consensus-based evaluative criteria: things we should look for and questions that we should ask as we prepare to study and assess what constitutes fake news and its effects. We need to be clear about our definitions—what counts as fake news, and what doesn’t—and attentive to the possibilities and limitations of any single approach. We should also actively encourage multi-disciplinary approaches to the problem as well as external evaluation and replication of results.

How can we hold to account the so-called Internet oligopolies that, according to the article, “are already shaping human experience on a global scale”?

If it becomes clear that the platforms cannot be counted on to police themselves, then regulations may be needed. Given the current political climate and our history, we can’t count on such regulations originating in the United States. It will probably have to happen at the European Union level, or in a European country that then brings the issue to the EU. The Europeans have much stronger privacy laws than we do, and they have a different definition of what constitutes harm.

The EU has had some success in imposing regulations and levying fines, making these firms hurt in ways that caused them to modify their practices and business models. Such moves could affect attitudes and a sense of what’s appropriate practice on this side of the pond. That’s not to say regulations are a panacea—going the regulatory route comes with potential dangers and costs.

But we’re talking about very serious issues of governance—about access to consumers’ preferences and browsing habits, which are subject to manipulation in ways that can cause harm at the individual level as well as at the national and international levels. There are actors out there who are strategically manipulating and exploiting our information ecosystem; something needs to be done. If firms are unwilling on their own to take corrective and protective actions that may affect their bottom lines, then it may be necessary for governments to step in.

The article ends with a question: “How can we create a news ecosystem and culture that values and promotes truth?” Where do we start?

What we are witnessing today is just another manifestation of what has been a long history of actors strategically cultivating and exploiting intra- and inter-group polarization and playing on people’s fears and cognitive biases. Recognizing the patterns and processes that undergird this kind of informational manipulation is a key step in combating it.

We can’t necessarily trust social media platforms to be value-neutral and reliable sources of information. We should routinely ask the same set of questions any editor asks a reporter before something goes to press. Has it been corroborated? By whom? What is their agenda? Trust but verify—it requires extra work and it’s hard and unpleasant because it complicates our lives. To just say, oh, this is a trusted news source, they must know what they’re doing—we can’t do that anymore.

It’s incumbent on all of us as individuals and also on media providers, social media platforms, and other information disseminators—everybody who is involved in the ecosystem—to take some responsibility. There is no other way to secure a culture that values facts and promotes truth, especially since truth can be more uncomfortable and unpalatable than “truthier” alternatives.

Courtney Hollands can be reached at courtney.hollands@tufts.edu.