On April 25, NRATV, the media arm of the National Rifle Association, aired a segment in which host Grant Stinchfield alleged that Sen. Bernie Sanders supports giving terrorists the right to vote. The evidence he presented was a Twitter reply to pop star Cher, supposedly written by the 2020 presidential candidate, which was clearly a fake. The message had been sent by an unverified user, @Ryan35186711, who had simply changed his display name to “President Bernie Sanders.”
While Twitter deleted the faux Sanders account after Mother Jones flagged it in a set of questions directed to the company, the low-level hoax speaks to a larger dilemma faced by the crowded field of 2020 Democrats: how to combat misinformation and disinformation concocted by unknown adversaries, spread on social media, and potentially picked up by media outlets with more reach or credibility. While the candidates have received multiple cybersecurity briefings from party leaders, along with advice on how to avoid foreign spies, when it comes to fortifying their campaigns against misinformation and disinformation, most of the prevailing thinking predates the invention of botnets and viral fake news.
In May of 2004, just months after Mark Zuckerberg founded Facebook as a Harvard undergrad, Swift Boat Veterans for Truth held a press conference to attack John Kerry’s military service record. Democrats took little notice initially, but after a summer of drumming up support from Republican operatives, the group was able to take its message mainstream with a half-million-dollar advertising buy. The relatively small sum exploded into 24/7 coverage as the claims were amplified by Fox News and the mainstream media. By the time Kerry responded two weeks later, his poll numbers had dropped. In hindsight, strategists would say the delay in combating the rumors was a fatal mistake for the campaign.
But now, attacks that echo those undertaken by the Swift Boat Veterans can go viral in mere hours, with little or no ad spending. Media outlets take their cues from social media platforms that, in 2016, were used for misinformation and foreign interference. Candidates facing charges similar to those leveled at Kerry are forced to make a lightning-fast decision about whether to respond to a political smear.
“Bottom line, the prevalence of social media puts candidates into a position where they have to respond and be more proactive,” says Simon Maloy, a senior writer at Media Matters who got his start covering the media reaction to the Swift Boat veterans. “It forces them into a position of having to respond because there’s a direct path from the crazy Reddit forum, to Gateway Pundit, to Drudge, to Fox, and then the mainstream.”
While the 2016 election heralded the arrival of a new era in online disinformation, there’s still no consensus among political strategists over the best way for candidates to approach political conspiracies. Most campaigns were reluctant to talk about specific steps and plans to combat online disinformation; of the dozen campaigns contacted by Mother Jones, only that of California Rep. Eric Swalwell, who has made the Russia investigation central to his political identity, responded with a comment.
“Our staff is watching carefully for any signs of foreign interference, in the form of mis/disinformation or cybersecurity intrusions. If we find evidence of any such interference, against my campaign or any other, we’ll report it to the authorities promptly—unlike the Trump campaign in 2016,” according to a statement from Swalwell provided by his staff.
Not all candidates are keeping their operations behind the curtain. The Warren campaign has a dedicated and detailed “Fact Squad” website cataloging dozens of claims, acting as a digital resource to debunk conspiracies and propaganda about the candidate. While there’s ample space given to debunking the well-known claim she boasted of Native American ancestry to advance her academic career, most of the other rumors are quite obscure. One of the most prominent is a right-wing conspiracy that she takes Risperdal, a drug prescribed for schizophrenia. Supporters who want to “help spread the truth” can click on social sharing buttons that link out to the fact-checked content on Twitter and Facebook.
The site’s Risperdal response has just six shares and likes on Facebook and only 15 Twitter shares. The top Google result for related searches is dependably Warren’s fact-checking website, but the Facebook post that seemed to launch the rumor in January comes up near the top of results. That post, which is still live on Facebook, has hundreds of likes and shares.
“It’s not clear that more aggressive strategy wins out,” says Brendan Nyhan, a political scientist at the University of Michigan who researches political conspiracies and correcting misconceptions. “It’s appealing to people, the idea that you should fight back more aggressively. But it’s not clear that you should.”
Rather than respond aggressively to a rumor that’s still on the fringes, Nyhan says campaigns should aim to prevent the mainstream media from picking it up.
“The question is, what’s the net effect? Is fact checking attacks on the candidate giving attention to smears or helping to rebut them?” says Nyhan. “The most important factor in the spread of these claims is if the media is covering them and promoting them.”
According to Nyhan, rumors and allegations tend to take off when they coincide with some perceived character trait. For instance, the conspiracy theory that Obama wasn’t born in America was linked to hostility toward his candidacy to become, and his service as, the first African American president. And despite her campaign’s aggressive efforts to dispel them, rumors about Hillary Clinton’s supposedly poor health allowed opponents to reinforce the idea that she wasn’t physically or mentally fit to be president.
Even armed with the truth, trying to dismantle a conspiracy theory can be a losing battle. For instance, a 2012 study showed that the release of Obama’s birth certificate, after years of claims he was born abroad, only temporarily increased the number of Americans who believed he was born in the U.S.
While newly formed campaigns may still be working out strategies, major tech companies and the DNC were both able to use the 2018 midterm election as the first big domestic test of new policies and practices. During the 2018 campaign, Facebook introduced a team it dubbed its “War Room” to deal with misinformation problems, including content encouraging voter suppression. According to a Facebook spokesperson, the operations center in Menlo Park took down 45,000 pieces of voter-suppression-related content alone, 90 percent of which the company said was caught by Facebook employees or algorithms before a user flagged it. Some of the changes Facebook introduced ahead of the 2018 elections, including tools to increase ad transparency, have been expanded to elections in Europe, Asia, and Africa.
Google has also been offering free training to campaign professionals. A spokesperson says the company trained more than 1,000 political workers, along with the major Republican and Democratic party committees, on email and campaign website security in 2018.
Bob Lord, the chief security officer of the Democratic National Committee, says the party has encouraged campaigns to think about preventing disinformation proactively, just as they would prevent cybersecurity incursions by deploying safety measures like two-factor authentication.
Lord recommends candidates create as much friction and financial disincentives as possible to deter attackers. “They have to start thinking about changing the economics for the attackers,” says Lord. “So our adversaries are going to have to work harder and spend more time and money.”
Lord says that while the DNC has tried to increase the volume and depth of information about cybersecurity, misinformation, and disinformation it sends to campaigns, ultimately each is responsible for its own operations and relationships with social media companies.
“Disinformation is actually a cybersecurity problem. And that’s because we’re not talking about natural organic sentiment, we’re talking about people whose job it is to create tension or to create confusion or to, in some cases, discourage people from voting,” says Lord. “We always start with this idea that we’re up against real threat actors as being a major component of what we think about cybersecurity, but also disinformation.”
In 2016, media reports said Facebook had offered onsite campaign support to both Donald Trump and Hillary Clinton during the general election. (Clinton reportedly declined the services, while Trump campaign manager Brad Parscale credits much of the campaign’s success to the platform.) When asked if the company plans to do so again, a spokesperson disputed that they had ever offered such a service in 2016. The spokesperson told Mother Jones that the company had pulled back from any onsite support and was now focusing on providing political campaigns assistance via its website. The spokesperson said they could not clarify what that support will look like for general election candidates.
“We already have a dedicated team for the 2020 elections,” says Tom Reynolds, a Facebook spokesperson. “We’re actively coordinating with the campaigns directly,” he said, providing “basic training on how to use Facebook’s products and provide support with our ads authorization process.”
Even with all the right tech tools, campaigns facing conspiracies still find themselves facing a monumental task and a string of difficult decisions.
“They’re stuck between two bad choices,” says Maloy. “They do have to grapple with the existence of these conspiracies and the bad actors that promote them online. The choice that they have to make is ‘Do we ignore this and trust that nothing comes of it before it catches on?’”