Only by understanding the long and disturbing history of “alternative facts” can we diminish their disruptive power
Anyone following the 2016 presidential campaign could be forgiven for feeling a little confused. The country depicted in article after article shared over social media looked very little like the one we were living in. Islamic sharia law was reported to be spreading throughout the nation. The crime rate was soaring. Climate change was a hoax perpetrated by the Chinese to undermine the U.S. economy. The reality, of course, is that none of this was true. The stories were all examples of what is often called fake news.
Fake news stories appeared on both sides of the political spectrum during the campaign, but a majority of them originated with radical right-wing media outlets, said Kelly M. Greenhill, an associate professor of political science and international relations at Tufts. Meanwhile, Greenhill continued, additional unverified information was being generated and disseminated, with relative abandon, on the campaign trail by then-candidate Donald Trump and his supporters.
Traditional media’s debunking efforts had mixed results, in part because the introduction of real facts appeared to do little to change the minds of people who believed the stories, said Greenhill, who is writing a book on what she calls extra-factual information. “Moreover, every time journalists and fact-checkers would expose one fake news story, another would pop up to replace it,” she said. “Our media ecosystem began to resemble a game of Whack-a-Mole.”
Tufts Now spoke with Greenhill about the history of fake news, what people can do to stay informed in the face of it, and how vigorously debunking false stories can sometimes actually make things worse.
Tufts Now: “Fake news” was a big issue during the campaign. Are we done with it now?
Kelly Greenhill: No, the spread of dubious claims didn’t stop after the election. For instance, in February 2017, Kellyanne Conway, a senior member of the Trump administration, made references in interviews to two Iraqi refugees as the masterminds of a “Bowling Green massacre” that never took place, invoking it to defend the administration’s travel ban.
Some observers predicted that Bowling Green would be a turning point.
Those predictions were unfortunately premature. Since Inauguration Day, Trump has asserted—without anything resembling evidence—that widespread voter fraud was the reason he lost the popular vote, that turnout for his inauguration was substantially larger than photographs indicate it was, and, most troublingly, that President Obama tapped the phones in Trump Tower last fall.
These kinds of claims feel new to a lot of people. But you’ve argued that they’re not really a new phenomenon at all.
“Fake news” is not a new term. It has been in widespread, if cyclical, use since at least the late 19th century. Trump may be unique in his willingness to introduce and spread unverified claims, but he and his supporters are hardly the first political actors to do so.
Throughout history and across the globe—from the most authoritarian governments to the most liberal—we have seen similar dynamics in play. Savvy political actors, inside and outside government, frequently deploy unverified information to scare, persuade, mobilize and distract. Those are the end goals. The vehicles for delivering that information are rumors, conspiracy theories, propaganda, myths, false flag operations, entertainment media and, yes, fake news. I refer to these diverse sources collectively as “extra-factual information,” or EFI.
If the “Bowling Green massacre” failed as justification for the travel ban, what happens when EFI is effective?
If the conditions are right, EFI dissemination can enable dangerous ideas to enter public discourse and be treated as fact. This is especially the case when EFI-infused narratives offer more psychologically and politically palatable versions of events than the available facts.
EFI can be used to create scapegoats and villains, normalize prejudices and harden us-versus-them mentalities. During the Cold War, for instance, a combination of factors—from Senator Joe McCarthy’s Red Scare rhetoric to President Ronald Reagan’s Evil Empire speech—reinforced the notion that “godless Communists” posed a threat to the American way of life. Meanwhile, in the Soviet Union, EFI was used to spread the message that the “imperialist war-mongers” and “capitalist stooges” of the West were the ones who should be reviled.
Such appeals don’t always work, but mobilization—especially when rooted in fear—is often easier to catalyze than many would like to believe. Mobilization can incite violence and even lead countries to war.
What are some examples?
For one, rumors spread by some American newspapers, alleging that the Spanish had sabotaged the USS Maine and committed a wide range of atrocities in Cuba, reportedly helped drive the U.S. into the Spanish-American War in 1898.
Decades later, when the U.K. was trying to persuade the U.S. to enter World War II, British intelligence agents peddled baseless rumors to U.S. government officials that the Nazis were plotting to seize Bolivia. President Franklin Roosevelt then repeated these claims to the U.S. public during one of his “fireside chats.”
More recently, in the lead-up to the 1991 Gulf War, a young Kuwaiti woman gave false testimony before Congress that Iraqis were ripping babies from incubators and leaving them to die. She was later revealed to be the daughter of the Kuwaiti ambassador to the U.S.
You’ve said that EFI can be so powerful that it can incite mass violence.
In the most extreme examples, EFI has even been used to rationalize and build support for genocide. The Nazis made skillful use of it to horrifying ends. This included employment of The Protocols of the Elders of Zion, a fiction masquerading as fact that purported to be the minutes of a conference in which Jewish leaders discussed their plans for world domination. The work was widely circulated by the Nazis and made required reading for German schoolchildren. The noxious myths in The Protocols played a key role in the campaign of genocide that resulted in the murder of some six million Jews before the Nazis were defeated in 1945.
How is EFI employed to accomplish political and military objectives?
One method is to use EFI to turn the theater of crisis into high art. That’s done via a process I call threat conflation, an extreme manifestation of its better-known relative, threat inflation. Threat conflation entails marrying an often inchoate, but verifiable, source of anxiety and fear (such as irregular migration patterns) with a threat that is far more menacing yet unverifiable (such as the claim that many refugees are terrorists). By coupling modest, verifiable threats with potentially catastrophic, unverifiable ones, threat conflation can help generate support for a policy objective by artificially amplifying the public’s perception of danger, even when the threat may not exist at all.
Can you give us an example of all of this in action?
In 2014, Russia annexed Crimea and launched military incursions into eastern Ukraine. To justify those acts of aggression as both necessary and defensive, political actors deployed rumors that Russian-speaking Ukrainians were in danger. They also spread conspiracy theories about nefarious foreign actors being behind the protests in Kiev and the ouster of Russian-backed President Viktor Yanukovych.
In classic threat-conflation fashion, Russian President Vladimir Putin cited the real unrest in Kiev, along with genuine neo-Nazi involvement in some of it, and the documented World War II–era history of Ukrainian-Nazi collaboration. But he also coupled those facts with a blend of inflammatory and anxiety-producing EFI. In a March 2014 speech to the Russian people, Putin waxed poetic about the “inseparable role” Crimea played in the making of the once vast and powerful Russian empire, while also warning of the grave dangers posed by the “nationalists, neo-Nazis, Russophobes and anti-Semites” who, he said, had executed the coup against Yanukovych and continued to influence the country.
The underlying facts lent valuable credibility to Putin’s claims, while the emotionally potent EFI stoked fears.
Your research indicates that this kind of thing happens closer to home, too.
One example is the George W. Bush administration’s decision to invade Iraq and remove Saddam Hussein from power. After the 9/11 attacks, administration officials began coupling generalized fear and anxiety about terrorism, as well as historical concerns about Saddam Hussein’s regime, with an expanding array of unverified EFI. Starting in October 2001, the Bush administration made the case that Iraq was pursuing weapons of mass destruction, was very likely willing to share those weapons with terrorist groups like Al Qaeda, and was at least indirectly responsible for 9/11. None of these claims was true.
How effective was that EFI campaign?
Immediately after 9/11, when Americans were asked who was behind the attacks, only 3 percent mentioned Iraq or Saddam Hussein. But in October 2002, Pew found that 66 percent of Americans erroneously believed Saddam Hussein aided the 9/11 hijackers. At the same time, 65 percent wrongly believed Iraq was close to having nuclear weapons, and 14 percent believed it already had them.
The media made mistakes in covering the issue of weapons of mass destruction in Iraq. What role should it play?
Media organizations can certainly play a watchdog role. At the same time, though, too much attention can actually backfire. When journalists feel compelled to waste valuable column inches and broadcast hours covering nonsensical assertions instead of real news, our media ecosystem is further threatened and distorted.
Launching congressional investigations can also be counterproductive. Wasting time and taxpayer dollars simply to demonstrate the absurdity of unverified claims further erodes faith in our institutions of governance. The more time the government spends investigating specious claims about wiretapping, the less time and energy it has to investigate important issues, such as the degree to which Russia interfered in the U.S. presidential election.
Is it a question of emphasis?
Yes, that’s right. I have identified three key variables that determine belief in EFI. The first is worldview—does the EFI fit with what an individual already believes about the world? The second is threat perception—is the person hearing the story fearful of what the EFI portends? The third, repetition, is the most powerful of all. So when debunking efforts involve repeating unverified claims, they can unintentionally further cement those claims in the minds of audiences.
What can we do?
We need to ask more hard questions—and demand proof—when merchants of menace start spreading rumors of danger. When proof is absent, we need to be careful to refute the suspect claims without repeating them. EFI-based appeals may be grounded partly or even wholly in fiction, but their power is a brute fact.