How Fake News Could Impact the 2020 Presidential Election

The Future of Election Interference

Social media platforms and tech companies are developing tougher user policies and methods for detecting and removing fake accounts and misleading or false information ahead of the 2020 election. 

Twitter and Facebook are at odds over what role their platforms should play in our politics. The two companies’ CEOs have differed as to what free speech and First Amendment rights and protections look like on their platforms.

Twitter is arguably taking a more aggressive approach in ferreting out disinformation and protecting users. It is removing manipulated videos, flagging misleading content, and developing criteria for handling misleading or false tweets. Although Facebook has taken steps to secure its platform ahead of the 2020 election, it faces growing criticism that it is not doing enough to prevent racism, hate speech and manipulated or false media. 

Here is a look at how the prominent social media companies are preparing for the 2020 election. This page will be updated according to changing policies and significant events related to how these companies are policing user and advertiser content.

U.S. Adversaries Are Using Similar but More Refined Strategies Than in 2016

Update: August 7, 2020

A series of recent Twitter hacks and warnings from top U.S. officials have cast doubt on whether social media and tech companies can protect users from coordinated disinformation campaigns and hackers wielding similar, but more refined, information weaponry than foreign adversaries used in 2016.

The director of the National Counterintelligence and Security Center (NCSC), William Evanina, confirmed that foreign entities are actively seeking to compromise private communications of “U.S. political campaigns, candidates and other political targets,” and that the NCSC is keeping tabs on foreign and domestic threats to U.S. election infrastructure. 

Evanina said in a press release, however, that “the diversity of election systems among the various states, multiple checks and redundancies in those systems, and post-election auditing make it extraordinarily difficult for foreign adversaries to broadly disrupt or change vote tallies without detection.”

President Trump continues to push a theory that expanding vote-by-mail will lead to widespread voter fraud, which he used as a basis to advance -- and quickly withdraw -- a proposal for postponing the presidential election.

But government officials are more concerned about Russian disinformation campaigns and possible vulnerabilities in the social media accounts belonging to important figures.

Former Vice President Joe Biden’s campaign team announced that it faced multiple security threats, but did not provide specifics for fear of providing adversaries useful intelligence. The Biden campaign said, however, it was concerned that pro-Russian sources shared disinformation about Biden’s family with President Trump’s campaign and Republican allies in Congress.

The Biden campaign said it hired top cybersecurity officials in early July to address potential security threats and “enhance the overall efficiency and security of the entire campaign.” 

At least one Ukrainian national told the Washington Post he had shared tapes and transcripts with Republican Senator Ron Johnson’s Senate Homeland Security Committee and with Trump ally Rudy Giuliani. House Democrats subpoenaed Secretary of State Mike Pompeo for documents pertaining to Hunter Biden, Joe Biden’s son, that Pompeo turned over to Republicans on Johnson’s committee during their investigation.

Democrats accuse Pompeo of using State Department resources to advance a “political smear campaign” against the Bidens. “It does a disservice to our election security efforts when Democrats use the threat of Russian disinformation as a weapon to cast doubt on investigations they don’t like," a Johnson spokesperson said.

In the same press release, Director Evanina stated “the coronavirus pandemic and recent protests...continue to serve as fodder for foreign influence and disinformation efforts in America.” Declassified U.S. intelligence shows that Russian military intelligence used its connections with the Russian government information center, InfoRos, and other websites to push disinformation about the coronavirus pandemic, such as amplifying false arguments used by the Chinese government that claim the virus was created by the U.S. military.

The strategy is similar to 2016: Russian bots employed by the Internet Research Agency, a private Russian group with Kremlin affiliations, and other Russia-backed groups used fake social media accounts to amplify disinformation and pro-Russian propaganda.

This time around, however, the fake news articles appear on websites that seem legitimate, which ultimately makes them more difficult for American users to recognize.

U.S. officials are primarily concerned with Chinese, Russian and Iranian operatives, who continue to use influence measures in social and traditional media “to sway U.S. voters’ preferences and perspectives, to shift U.S. policies, to increase discord and to undermine confidence in our democratic process.”

The concerns were outlined by Evanina in a statement issued by the NCSC: “China is expanding its influence efforts to shape the policy environment in the United States, pressure political figures it views as opposed to China’s interests, and counter criticism of China. Beijing recognizes its efforts might affect the presidential race.”

Russia’s persistent objective is to weaken the United States and diminish our global role. Using a range of efforts, including internet bots and other proxies, Russia continues to spread disinformation in the U.S. that is designed to undermine confidence in our democratic process and denigrate what it sees as an anti-Russia “establishment” in America.

Iran seeks to undermine U.S. democratic institutions and divide the country in advance of the presidential election. Iran’s efforts center around online influence, such as spreading disinformation on social media and recirculating anti-U.S. content.

The Great Twitter Hack

A 17-year-old from Florida was charged as the “mastermind” of a massive Twitter hack that targeted the accounts of high-profile people, including Bill Gates, Joe Biden, Barack Obama, Kanye West and Elon Musk. The embarrassing incident called into question Twitter’s ability to protect high-profile figures and political campaigns from foreign and domestic adversaries.

The hack was used to promote a bitcoin scam, which asked Twitter users to send bitcoin to a specific cryptocurrency wallet with the promise that they would receive double their money back. Within minutes, 320 transactions occurred and $110,000 worth of bitcoin was deposited into the hackers’ account.

Coinbase, a cryptocurrency exchange, prevented nearly 1,000 users from sending $220,000 worth of bitcoin to the hackers’ account once the scam was discovered. The alleged “mastermind,” Graham Ivan Clark, faces 30 felony charges from the hack, including wire fraud, money laundering, identity theft and unauthorized computer access, and is being charged as an adult.

The hackers targeted Twitter employees and administrative tools, which allowed them to change many account-level settings, including changing passwords and posting Tweets. By the time Twitter finally managed to stop the attack, the hackers had tweeted from 45 of the accounts they had broken into, gained access to the direct messages of 36 accounts, and downloaded full information from seven accounts.

While Clark was charged by state law enforcement officials, federal authorities were already tracking his online activity before the Twitter hack, according to legal documents. In April, the Secret Service seized over $700,000 worth of bitcoin from him, but it was unclear why.

Facebook

Facebook, the hotbed of Russian election interference in 2016, is taking steps to ensure the platform won’t be weaponized again in 2020. In a memo released in late 2019, Facebook announced several initiatives to “better identify new threats, close vulnerabilities and reduce the spread of viral misinformation and fake accounts.”

Among them are:

  • Combating inauthentic behavior with an updated policy on user authenticity
  • Protecting the accounts of candidates, elected officials and their teams through “Facebook Protect”
  • Making pages more transparent, which includes showing the confirmed owner of a page
  • Labeling state-controlled media and their Pages in the Facebook “Ad Library”
  • Making it easier to understand political ads, which includes a new U.S. presidential candidate spending tracker
  • Preventing the spread of misinformation by including clear fact-checking labels on problematic content
  • Fighting voter suppression and interference by banning paid ads that suggest voting is useless or that advise people not to vote
  • Investing $2 million to support media literacy projects to help people better understand the information they see online 

Facebook has reportedly removed over 50 networks of coordinated inauthentic behavior among accounts, pages and groups on the platform. Some of these coordinated disinformation networks were located in Iran and Russia. 


According to Facebook’s “Inauthentic Behavior” policy, information operations are taken down based on behavior, not on what is said, because much of the content shared through coordinated information operations is not demonstrably false and would be acceptable public discourse if shared by authentic users. The issue at stake is that bad actors use deceptive behaviors to make an organization or particular content appear more popular and trustworthy than it is.

  • Inauthentic behavior occurs when users misrepresent themselves by using fake accounts, often with the aim of misleading other users and engaging in behavior that violates Facebook’s Community Standards. Operating multiple accounts simultaneously, or sharing one account among multiple people, is also considered inauthentic. By harnessing the power of multiple accounts, a user can abuse reporting systems to harass other users or artificially boost the popularity of content. Concealing the identity, purpose or origin of accounts, Pages, groups or events, or the source or origin of content, is likewise considered inauthentic behavior. A minimal detection sketch follows below.
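Detecting this kind of coordination is, at bottom, a pattern-matching problem. As a minimal illustration (not Facebook’s actual system, whose internals are not public), the hypothetical Python sketch below flags groups of accounts that post identical content within a short window of one another, one crude signal of coordinated amplification:

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical sketch: flag sets of accounts that post identical content
# within a short window of one another. One crude signal of coordinated
# amplification; real systems combine many such behavioral signals.

WINDOW = timedelta(minutes=10)   # assumed coordination window
MIN_ACCOUNTS = 5                 # assumed threshold for "coordinated"

def find_coordinated_groups(posts):
    """posts: list of (account_id, text, timestamp) tuples."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text.strip().lower()].append((ts, account))

    flagged = []
    for text, items in by_text.items():
        items.sort()  # chronological order
        for i, (start, _) in enumerate(items):
            burst = {acct for ts, acct in items[i:] if ts - start <= WINDOW}
            if len(burst) >= MIN_ACCOUNTS:
                flagged.append((text, sorted(burst)))
                break
    return flagged
```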

Facebook Protect is designed to safeguard and secure the accounts of elected officials, candidates and their staff. Participants who enroll their page or Facebook or Instagram account will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices.
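Facebook has not published the internals of that monitoring, but the underlying idea of flagging logins from unusual locations or unverified devices can be sketched with a simple rule: compare each new login against the history of locations and devices previously seen for the account. A minimal, hypothetical illustration (all names and thresholds are assumptions):

```python
# Hypothetical sketch of unusual-login detection in the spirit of the
# monitoring Facebook Protect describes; not Facebook's implementation.

known_logins = {
    # account_id -> set of (country, device_id) pairs seen before
    "campaign_account_1": {("US", "device-abc"), ("US", "device-def")},
}

def is_suspicious(account_id: str, country: str, device_id: str) -> bool:
    """Flag a login when both the location and the device are new."""
    history = known_logins.get(account_id, set())
    seen_country = any(c == country for c, _ in history)
    seen_device = any(d == device_id for _, d in history)
    # A new device in a known country may just be a new phone; a new
    # device from a new country is worth a second-factor challenge.
    return not seen_country and not seen_device

print(is_suspicious("campaign_account_1", "RU", "device-xyz"))  # True
print(is_suspicious("campaign_account_1", "US", "device-abc"))  # False
```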

Facebook is ensuring that Pages are authentic and transparent by displaying the primary country location of any given Page, and whether the Page has merged with others, which gives more context as to the origin and operation of individual Pages.

Many users fail to disclose the organization behind their Page as a way to obscure ownership or make the Page appear to be independently operated. To address this issue, Facebook is requiring that organizations and Page owners are visible and contactable. Pages that run ads about social issues, elections or politics in the U.S. require registration and verification.

Facebook labels media outlets that are wholly or partially under the editorial control of a government. These Pages will be held to a higher standard because they combine the opinion-making influence of a media organization with the strategic power of a government. 


To develop its own definition and standards for state-controlled media organizations, Facebook executives sought input from more than 40 experts around the world who specialize in media, governance, human rights and development.


Facebook considers several factors that indicate whether a government exerts editorial control over content, such as:

  • The ownership structure of the media outlet, such as owners, stakeholders, board members, management, government appointees in leadership positions, and disclosure of direct or indirect ownership by entities or individuals holding elected office
  • Mission statements, mandates, and public reporting on how the organization defines and accomplishes its journalistic mission
  • Sources of funding and revenue
  • Information about newsroom leadership and staff

Facebook has made its Ad Library, Ad Library Report and Ad Library API available to help journalists, lawmakers, researchers and ordinary citizens learn more about the ads they encounter.

This includes a spending tracker to see how much each candidate in an election has spent on ads, and making it clear whether an ad ran on Facebook, Instagram or somewhere else.
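Because the Ad Library API is exposed through Facebook’s Graph API, a researcher can query it over plain HTTP. Below is a minimal sketch assuming a valid access token and the `ads_archive` endpoint; the version number, field names and parameters change between releases, so treat the specifics as illustrative rather than authoritative:

```python
import requests

# Minimal sketch of an Ad Library API query. The ads_archive endpoint is
# real, but the API version, fields and token handling shown here are
# illustrative; consult Facebook's current documentation before use.

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued after identity verification
URL = "https://graph.facebook.com/v8.0/ads_archive"

params = {
    "search_terms": "election",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "fields": "page_name,ad_creative_body,spend,impressions",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(URL, params=params)
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("spend"))
```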

Facebook and Instagram reduce the spread of misinformation by limiting its distribution in the Explore feed and hashtags. Content from accounts that repeatedly post misinformation is filtered out of the Explore feed, and Facebook may restrict the Page’s ability to advertise and monetize.


Content on both Facebook and Instagram that has been rated false or partly false by third-party fact-checkers, who are certified through the non-partisan International Fact-Checking Network, will be prominently labeled so users can decide credibility for themselves.

Facebook prohibits content that misrepresents the dates, locations, times and methods for voting or voter registration, as well as misinformation about who can vote, qualifications for voting, whether a vote will be counted, and threats of violence related to voting, voter registration or the outcome of an election. 


Facebook’s Elections Operations Center removed more than 45,000 pieces of content that violated these policies, more than 90% of which its systems detected before the content was reported.


Facebook’s hate speech policy bans efforts to exclude people from political participation on the basis of race, ethnicity or religion. Additionally, Facebook has banned paid advertisements that suggest voting is useless or meaningless, or that advise people not to vote. Facebook also employs machine learning to identify potentially incorrect or harmful voting information.
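Facebook has not disclosed the models behind this, but the general shape of machine-learning text classification is easy to illustrate. The toy sketch below trains a classifier on a handful of hand-labeled strings; a production system would rely on vastly larger labeled datasets, multilingual models and human review:

```python
# Toy illustration of ML-based detection of problematic voting claims.
# A pedagogical sketch only: the data, labels and model choice here are
# assumptions, not Facebook's pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "You can vote by text message",              # false voting method
    "Election day has been moved to Wednesday",  # false date
    "Polls are open 7am to 8pm on election day",
    "Register to vote at your county election office",
]
train_labels = [1, 1, 0, 0]  # 1 = potentially harmful voting misinformation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["Vote from home by texting your ballot"]))
```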

Facebook invested $2 million to support projects that promote media literacy. The platform provides a series of media literacy lessons in its Digital Literacy Library. The lessons, created for middle and high school educators, are designed to cover topics ranging from assessing the quality of information to technical skills like reverse image search.

Facebook Still Allows Micro-Targeting of Political Ads 

In October 2019, Facebook CEO Mark Zuckerberg described the ability of ordinary people to engage in political speech online as “...a new kind of force in the world -- a Fifth Estate alongside the other power structures of society.”

In stark contrast with Twitter, which prohibits political advertising, Facebook has adopted a “hands-off” policy when it comes to policing who buys political ads and what they ultimately say. The platform's approach “is grounded in Facebook’s fundamental belief in free expression, respect for the democratic process and the belief that, in mature democracies with a free press, political speech is already arguably the most scrutinized speech there is,” according to Facebook’s head of global elections policy. 

But Facebook’s hands-off policy is not without controversy. In late June 2020, a growing list of companies threatened to boycott paid advertising on Facebook and Instagram to show support for a movement called #StopHateForProfit.

Facebook faced heavy criticism in early June for allowing posts by President Trump related to the George Floyd protests, which many say were “glorifying violence,” to remain on the platform. Twitter attached a warning label to the controversial tweet sent out by the president, which read “when the looting starts, the shooting starts,” but Facebook said the post did not violate its rules.

The Anti-Defamation League and the NAACP, along with Sleeping Giants, Color of Change, Free Press and Common Sense, asked “large Facebook advertisers to show they will not support a company that puts profit over safety.” The six nonprofit organizations say their call to stop paid advertising is a response to “Facebook’s long history of allowing racist, violent and verifiably false content to run rampant on its platform.” The groups also say the company allows its platform to be used in “widespread voter suppression efforts, using targeted disinformation aimed at black voters,” and “allowed incitement of violence against protesters fighting for racial justice in America in the wake of George Floyd, Breonna Taylor, Tony McDade, Ahmaud Arbery, Rayshard Brooks, and many others.”

In response, Unilever, the consumer products giant, along with Pfizer, Ford and Coca-Cola, announced they would halt advertisements on Facebook, citing growing concerns about racism and hate speech on the platform. A spokesperson for Unilever said “there is much more to be done...in the areas of divisiveness and hate speech during this polarized election period in the U.S.”

Companies like Patagonia, REI and The North Face say they also plan to stop advertising in the month of July to show support for #StopHateForProfit. Zuckerberg responded that the platform would not change its advertising policy “because of a threat to a small percent of our revenue, or to any percent of our revenue.” Since the campaign launched in early June, Facebook has lost $60 billion in market value and nearly 500 companies have pledged to boycott advertising on the platform.

Twitter

In stark contrast with Facebook, Twitter CEO Jack Dorsey announced in October 2019 that political ads are banned from its platform, saying “We believe political message reach should be earned, not bought.” He added that “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes.” 


In early February 2020, Twitter announced new rules addressing deepfakes and other forms of synthetic and manipulated media. Twitter will not allow users to “deceptively share synthetic or manipulated media that are likely to cause harm,” and will start labeling tweets containing synthetic or manipulated content to provide more context. 


Twitter uses three criteria to evaluate tweets and media for labeling or removal:

  1. Is the media synthetic or significantly manipulated? Considerations include:
  • Whether the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing
  • Whether visual or auditory information has been added or removed, such as new video frames, overdubbed audio, or modified subtitles
  • Whether media depicting a real person has been fabricated or simulated

  2. Is the media shared in a deceptive manner? Considerations include:
  • Whether the context could result in confusion or misunderstanding, or suggests a deliberate intent to deceive people about the nature or origin of the content
  • The context supplied by metadata associated with the media, information on the profile of the person sharing the media, or websites linked in that profile

  3. Is the content likely to impact public safety or cause serious harm? Harms include:
  • Threats to the physical safety of a person or group
  • Risk of mass violence or widespread civil unrest
  • Threats to the privacy or the ability of a person or group to freely express themselves or participate in civic events, such as stalking, targeted content (tropes, epithets, or material that aims to silence someone), and voter suppression or intimidation

When a tweet violates the manipulated media policy, Twitter may:

  • Apply a label to the tweet in question
  • Show a warning to users before they retweet or like the tweet
  • Reduce the visibility of the tweet and/or prevent it from being recommended
  • Provide additional explanations or clarifications, where available

The policy can target “cheapfakes,” or relatively low-tech media manipulation, such as the doctored video of Democratic House Speaker Nancy Pelosi that circulated last year. The video, which was simply slowed down, appeared to show Pelosi slurring her speech. People accused her of being drunk and took aim at her mental state. Despite being doctored, Facebook decided not to remove the video. But YouTube, which is owned by Google, removed the video for violating the platform’s policies.

This chart describes the response Twitter will take when dealing with manipulated media. If media shared on Twitter is significantly and deceptively altered or fabricated, shared in a deceptive manner, or likely to impact public safety or cause serious harm, Twitter may take one of four steps: the content may be labeled; it is likely to be labeled; it is likely to be labeled or removed; or it is very likely to be removed. Media that is significantly and deceptively altered has a high likelihood of being removed altogether; media shared in a deceptive manner is likely to be labeled or removed; and content likely to impact public safety or cause serious harm is very likely to be labeled and removed.

Source: Twitter
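Expressed as logic, the chart reduces to a small decision table. A hypothetical encoding (the outcome wording follows the chart; the code structure itself is an illustrative assumption, not Twitter’s implementation):

```python
# Hypothetical encoding of Twitter's manipulated media decision chart.
# The outcomes mirror the chart's wording; this is not Twitter's code.

def manipulated_media_action(significantly_altered: bool,
                             shared_deceptively: bool,
                             likely_harmful: bool) -> str:
    """Map the three policy questions to the chart's stated outcome."""
    if likely_harmful:
        return "very likely to be labeled and removed"
    if significantly_altered:
        return "high likelihood of removal"
    if shared_deceptively:
        return "likely to be labeled or removed"
    return "no action under the manipulated media policy"

# Example: a doctored video shared with a misleading caption.
print(manipulated_media_action(True, True, False))
```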

In late 2019, Twitter threatened to hide tweets of world leaders behind a warning label if their messages incited harassment or violence. Twitter also said it would mark tweets containing misinformation with labels that link to sites with reputable information. 

For the first time, in late May 2020, Twitter added links on two of President Trump’s tweets urging Twitter users to “get the facts.” The added links came after years of pressure on Twitter over its inaction on the president’s false or threatening posts, but some criticize Twitter for an apparent lack of consistency in enforcing its policies. Twitter flagged two of Trump’s tweets containing inaccuracies about mail-in ballots, in which he claimed there is no way “that Mail-In Ballots will be anything less than substantially fraudulent.” Previously, Twitter had said Trump’s tweets did not violate the platform’s terms of service.

A screenshot of a tweet by Trump reads: “There is no way (zero!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged and even illegally printed out and fraudulently signed. The Governor of California is sending ballots to millions of people, anyone...”

Source: Twitter

Twitter and Facebook acted in tandem in June 2020 to remove a doctored video posted by President Trump that showed two toddlers, one white and one Black, running down a sidewalk under a fake CNN headline that read: “Terrified toddler runs from racist baby.” Facebook removed the video over a copyright complaint, which eventually prompted its removal on Twitter. Initially, however, Twitter took a stronger, more pointed stance by labeling the video manipulated media under its policy.

The platforms removed the deceptive footage after a copyright complaint from one of the children’s parents, according to a CNN report. Trump used the video, which went viral last year, to suggest CNN had manipulated the context of the video to stoke racial tensions.

More Background on Information Warfare

The Cybersecurity and Infrastructure Security Agency (CISA), part of the Department of Homeland Security, plans to work with state and local election officials, political campaigns and political parties to identify potential vulnerabilities in election infrastructure and to improve the communications and security of voting systems ahead of the 2020 election.

Data scientists from Guardians.ai, who work to disrupt cyberattacks and protect pro-democracy groups from information warfare, found that small clusters of accounts posting extreme or damaging messages were amplified by a broader group of accounts. These accounts “drove a disproportionate amount of the Twitter conversation about the four candidates over a recent 30-day period.”

This new style of information warfare represents a move away from large numbers of easily detected fake accounts and instead “centers on a refined group of core accounts.” Some are “highly sophisticated synthetic accounts operated by people attempting to influence conversations, while others are coordinated in some way by actors who have identified real individuals already tweeting out a desired message.” Fringe sites like Reddit and 4chan have seen calls to action asking members to “quietly wreak havoc against (Sen. Elizabeth) Warren on social media or in the comments under news stories.” Guardians.ai has had difficulty tracing the source of these accounts. The article concludes that “the proliferation of fake news, rapidly changing techniques by malicious actors and an underprepared field of Democratic candidates could make for a volatile primary election season.”
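Guardians.ai has not published its methodology in detail, but the core measurement, how much of a conversation a small core of accounts drives, is straightforward to sketch. A hypothetical illustration over a list of (account, tweet) records:

```python
from collections import Counter

# Hypothetical sketch of the measurement described above: the share of a
# conversation driven by the most active core of accounts. Illustrative
# only; not Guardians.ai's actual methodology.

def core_share(tweets, core_fraction=0.01):
    """tweets: list of (account_id, text) pairs. Returns the share of all
    tweets produced by the most active core_fraction of accounts."""
    if not tweets:
        return 0.0
    counts = Counter(account for account, _ in tweets)
    n_core = max(1, int(len(counts) * core_fraction))
    core_total = sum(count for _, count in counts.most_common(n_core))
    return core_total / len(tweets)

# If 1% of accounts produce, say, 30% of candidate-related tweets, that
# is the kind of disproportionate amplification described above.
```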

Trump To Limit Social Media Legal Protections

President Trump signed an executive order on May 28 to limit the broad legal protections afforded to social media companies. The order states that the “growth of online platforms in recent years raises important questions about applying the ideals of the First Amendment to modern communications technology.” When presenting the order, Trump said a “small handful of powerful social media monopolies control the vast portion of all private and public communications in the United States.” 

The order takes aim at a 1996 law passed by Congress that protects internet companies from lawsuits over content that appears on their platforms. Revoking Section 230 of the Communications Decency Act would overturn nearly 25 years of judicial precedent, end liability protections for social media platforms and make them responsible for the speech that billions of users around the world post on their sites.

  • Section 230 states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230). Essentially, Section 230 shields social media platforms from liability for objectionable speech posted by their users, because online intermediaries of speech are not considered publishers.

Legal experts describe the executive order as “political theater,” saying it does not change existing law and will have no bearing on federal courts.

Twitter CEO Jack Dorsey responded to Trump’s regulation threats, saying “Our intention is to connect the dots of conflicting statements and show the information in dispute so people can judge for themselves. More transparency from us is critical so folks can clearly see the why behind our actions.”

In contrast, Facebook CEO Mark Zuckerberg responded that social media companies should stay out of fact checking, saying that “private companies probably shouldn’t be… in the position of doing that.” Facebook refrains from removing content because, according to Zuckerberg, “our position is that we should enable as much expression as possible unless it will cause imminent risk of specific harms or dangers spelled out in clear policies.” 

In a move that echoes President Trump’s executive order, the Justice Department announced it is proposing legislation to curtail legal protections for social media platforms over the content that appears on their sites. The proposal is said to “update the outdated immunity for online platforms” and incentivize platforms to act responsibly. The department’s recommendations fall into three general categories:

  1. Provide online platforms incentives to address illicit content;
  2. Clarify federal powers to address unlawful content; and
  3. Promote open discourse and greater transparency.

Read more about the debate surrounding Section 230 and the Communications Decency Act here.

For more resources and articles on the future of election interference visit our resources pages.

To learn how to spot fake news visit our "Protect Yourself" page.

Page written by: Christina Georgacopoulos & Grayce Mores