Time is running out to fight disinformation in 2020 election

By Janine Zacharia | Aug. 31, 2019

Fewer than 15 months before the U.S. presidential election, there is shockingly little national discussion of how to prevent the Russians — and other adversaries — from attacking our democracy with another round of information warfare.

“What has pretty much continued unabated is the use of social media, fake news, propaganda, false personas, etc. to spin us up, pit us against each other, to sow divisiveness and discord, to undermine America’s faith in democracy,” FBI Director Christopher Wray said in April.

Social media companies know this. That’s why they’ve bolstered their efforts to thwart coordinated disinformation campaigns, which they failed to counter in 2016. Among their new strategies is being more public about their actions. In the last few weeks, Facebook said it shut down a suspected Saudi propaganda campaign, and Facebook and Twitter suspended accounts linked to Chinese state-funded actors spreading disinformation about protesters in Hong Kong.

But in this game of Whac-a-Mole, any response will be imperfect. The most pressing task, then, is to understand how to inoculate ourselves against the inevitable flood of falsehoods. Decades of research into human cognition should guide the response.

For starters, even if fact checkers disprove false information, it will be hard to persuade people who naturally want to believe there’s at least a kernel of truth there.

When a person sees a message repeated over and over, even a false one, “you tend to believe it more,” Lee Ross, a Stanford professor of psychology and expert on human judgment, said at a recent meeting of Stanford’s Information Warfare Working Group, a gathering of psychology researchers, engineers, political scientists, former U.S. officials and other area experts that I’m part of. The group is meeting regularly over a two-year period to study the foreign disinformation threat and make policy recommendations to combat it.

Fact checkers will debunk an untruth once. The danger of bots like those used during the 2016 Russian campaign “is that they can do something even the most energetic person can’t do, which is send you the same message, or a variance of the same message, a hundred times,” Ross said. Repeat exposure wins. And “when something is congruent with our beliefs, we don’t second-guess it.”

Put simply, if the media show why something that went viral is false, it’s unlikely to change the minds of those who wish it were true.

Even as we pride ourselves on being rational thinkers, research shows that how we process information depends more on the feelings it generates than on careful reasoning, said Paul Slovic, a University of Oregon psychology professor and expert in decision-making. “We rarely resist the feelings that certain types of information convey in us,” he told the working group.

Eliminating false content will have free speech advocates crying foul. But at the very least, social media companies could promote credible information, down-rank bunk, and ensure that their algorithms don’t repeatedly refer people to unreliable information based on search queries. They could reduce exposure to lies.
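To make the down-ranking idea concrete, here is a minimal sketch in Python of how a feed-ranking function might weight an item’s engagement by the credibility of its source. The field names and credibility values are hypothetical, not any platform’s actual system.

# A minimal sketch (not any platform's actual system) showing how a
# feed-ranking function could scale an item's engagement score by the
# credibility of its source. Field names and values are hypothetical.

def rank_feed(items):
    """Sort feed items by engagement score weighted by source credibility."""
    def adjusted_score(item):
        # Credibility in [0, 1]; sources with no rating get a neutral default.
        credibility = item.get("source_credibility", 0.5)
        return item["engagement_score"] * credibility
    return sorted(items, key=adjusted_score, reverse=True)

feed = [
    {"id": "viral_rumor", "engagement_score": 950, "source_credibility": 0.2},
    {"id": "wire_report", "engagement_score": 400, "source_credibility": 0.9},
]
print([item["id"] for item in rank_feed(feed)])  # -> ['wire_report', 'viral_rumor']

In a scheme like this, the dubious item can still appear, but it no longer rides its virality to the top of every feed.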

Reducing exposure is essential for another reason: some studies have shown that labels or warnings flagging something as true or false aren’t effective. Teaching critical thinking skills or getting people simply to think harder won’t work either. That “works well in the lab, but it’s hard to imagine applying it to the web,” Ross said.

Companies need to identify propaganda campaigns quickly because there may be little the platforms can do once a belief has taken hold. Anthony Pratkanis, an expert on the science of social influence and professor emeritus of psychology at UC Santa Cruz, said that in this social media-fueled era of “propaganda on steroids,” it’s going to be impossible to win the battle of ideas.

“You’re going to play to a tie,” Pratkanis told the working group. “That’s the best you can do.”

Still, Pratkanis said making propaganda campaigns transparent is “very helpful.” Preventing the micro-targeting of individuals by data-mining organizations like Cambridge Analytica, which worked with the Trump campaign, would help too, he said.

This isn’t all on the social media companies. Until now, lawmakers have been all bark and no bite. Senate Majority Leader Mitch McConnell is blocking an array of election security bills including one that would compel social media companies to publicly disclose who is behind political ads in some of the ways that are required of print and broadcast ads. The companies are starting to do this. But making it mandatory could help.

Another Senate bill being blocked would protect the personal electronic devices and accounts of senators and Senate staff from cyberattacks. Regardless of whether this passes, someone urgently needs to secure the email accounts and devices of candidates and political groups against hacking so that personal information isn’t again exposed as it was in 2016. The credible, fact-based media should agree on standards for how to report on stolen information, lest they end up helping a foreign disinformation campaign.

One move that would radically slow the spread of disinformation would be to remove the retweet and share functions. People could still post what they find interesting, but fake material wouldn’t spread like wildfire. This won’t happen, since the platforms are built around sharing. But the platforms could at least discourage the spread of dubious information by signaling which accounts have the highest credibility scores, based on the reliability of each user’s posts. Make it reputationally costly to forward fake content.
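As a rough illustration of the kind of scoring that would involve, here is a short Python sketch, assuming the platform has fact-check verdicts on a user’s past posts; the names and threshold are illustrative, not any real platform’s API.

# A minimal sketch, assuming fact-check verdicts exist for a user's past
# posts. Class names, function names and the threshold are illustrative
# only, not any platform's real system.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Post:
    post_id: str
    fact_check_verdict: str  # "true", "false", or "unchecked"

def credibility_score(posts: List[Post]) -> Optional[float]:
    """Fraction of a user's fact-checked posts that were rated true."""
    checked = [p for p in posts if p.fact_check_verdict != "unchecked"]
    if not checked:
        return None  # too little signal to score the account
    return sum(p.fact_check_verdict == "true" for p in checked) / len(checked)

def share_warning(score: Optional[float], threshold: float = 0.5) -> Optional[str]:
    """Label a platform could show before someone reshares this account's posts."""
    if score is not None and score < threshold:
        return "This account has repeatedly shared information rated false."
    return None

Surfacing a score or warning like this at the moment of sharing is what would make forwarding dubious content reputationally costly.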

Russia, as the FBI director noted, will continue to weaponize our most popular communication platforms to undermine our democracy. But we shouldn’t abet the effort with shares and retweets. Spreading falsehoods shouldn’t be free.

Janine Zacharia, a former Washington Post reporter, is the Carlos Kelly McClatchy lecturer in the department of communication at Stanford University.