Social Media Warfare: The Final Frontier

By: Joseph Falk

Social media has emerged as a powerful tool for connecting with family and loved ones, sharing ideas and career interests, and other forms of expression via virtual communities and networks. Platforms such as Facebook, Twitter, YouTube, and a growing number of others have become an integral part of life. These digital providers enable a person to connect with millions of users around the world in a matter of seconds. However, over the past decade, state and non-state actors have manipulated content on social media in ways that profoundly affect the narrative of public discourse in the United States. An assessment of the threat environment originating from social media indicates that state and non-state actors will continue to threaten the United States and its interests through the information warfare tactic of disinformation campaigns.

Background

Social media presents adversaries with an unconventional weapon to exploit the vulnerabilities of the United States, and unlike other forms of asymmetrical warfare, social media is readily accessible to everyone and requires little to no skill to use. These information operations are also cost-efficient, as attackers require few resources, such as hardware and manpower. Adversaries can use various tactics and methods to target an opponent, including social manipulation. Actors employing this method aim to reshape the attitudes and beliefs of a target audience, drive a particular agenda, or elicit certain actions from it (Fredrick, 2019). These objectives are commonly achieved through disinformation campaigns. Disinformation is the deliberate spread of false information with the intent to deceive an audience; it differs from misinformation, which is the unintentional distribution of false or inaccurate information.

Adversaries also exploit social media as a tool for recruitment and for the rapid spread of extremist ideologies. Non-state actors, such as the Islamic State of Iraq and Syria (ISIS) and other terrorist groups, apply this method, using social media to spread their ideologies to a global audience and draw in people who are susceptible to their messaging and beliefs. At the height of its operations, for example, ISIS had drawn an estimated 40,000 foreign nationals from 110 countries into its ranks (Ward, 2018). The government has the capability to monitor these sites; however, the sheer number of websites makes it difficult to watch them all. State and non-state actors disseminate information on social media at an extremely fast rate and large scale, relying on automated messaging systems within platforms to ensure immediate, systematic delivery. The most common are programs that can run multiple accounts at a single time, removing the need for human oversight. These autonomous programs are known as bots, and they use computer algorithms to produce content and interact with human users. In 2017, it was estimated that between 9% and 15% of all active Twitter accounts were bots (Varol et al., 2017).
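To make the detection problem concrete, below is a minimal, illustrative scorer built on the kinds of account features that studies such as Varol et al. (2017) feed into their classifiers. The thresholds and weights are invented for demonstration; real detectors rely on hundreds of features and trained models rather than hand-set rules.

```python
# Illustrative only: a toy rule-based scorer using the kinds of account
# features bot-detection studies (e.g., Varol et al., 2017) rely on.
# Thresholds and weights are invented for demonstration, not taken from
# any published classifier.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime   # when the account was registered
    followers: int         # follower count
    tweets_per_day: float  # average posting rate

def bot_likelihood(acct: Account, now: datetime) -> float:
    """Return a crude 0.0-1.0 bot-likelihood score from simple heuristics."""
    score = 0.0
    if (now - acct.created_at).days < 30:   # newly created account
        score += 0.4
    if acct.followers < 10:                 # very low follower count
        score += 0.3
    if acct.tweets_per_day > 50:            # posting rate hard to sustain by hand
        score += 0.3
    return min(score, 1.0)

now = datetime(2021, 1, 1, tzinfo=timezone.utc)
suspect = Account(datetime(2020, 12, 20, tzinfo=timezone.utc), followers=3, tweets_per_day=120)
print(bot_likelihood(suspect, now))  # 1.0 -> strongly bot-like under these heuristics
```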

The utilization of bots in propaganda and disinformation campaigns has become a mainstay of hostile nation-states, and while limited in their effectiveness, attributing these types of cyberattacks remains a challenging task. Bots have become infamous for their nonhuman tendencies, such as unintelligible speech, newly created accounts, or low follower counts, which limit their effectiveness. To hide these deficiencies, accounts can be manually operated by human users, who are called trolls. State and non-state actors have notoriously applied trolls to disseminate information at an institutional scale, employing armies of hundreds of personnel to operate accounts in what are known as “troll farms.” The employment of bots and trolls is formidable because this type of cyberattack is difficult to trace. Data packets traveling through the Internet contain information about their source and destination (Greenemeier, 2011). Cyber attackers, however, can spoof their Internet Protocol (IP) packets with a false source address to protect their identity. Additionally, state and non-state actors can use a virtual private network (VPN), which connects the attackers to a proxy server before connecting to the Internet, allowing them to cover their tracks (Greenemeier, 2011).
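The attribution problem is visible in the packet format itself. The sketch below parses the source and destination fields out of a raw IPv4 header (RFC 791); because the source field is simply bytes the sender writes, nothing in the protocol stops an attacker from filling it with someone else’s address. The sample addresses come from reserved documentation ranges and are assumptions for the example.

```python
# Minimal sketch: read the source and destination addresses out of a raw
# IPv4 header, per RFC 791. This illustrates why attribution is hard: the
# 32-bit source field is just bytes the sender fills in, so an attacker can
# write any address there ("spoofing") unless intermediate networks filter it.
import socket
import struct

def ipv4_src_dst(header: bytes) -> tuple[str, str]:
    """Extract (source, destination) dotted-quad addresses from a 20-byte IPv4 header."""
    # Source address occupies bytes 12-15, destination bytes 16-19.
    src, dst = struct.unpack("!4s4s", header[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst)

# A hand-built header whose source field claims to be 203.0.113.5
# (a documentation address) regardless of who actually sent the packet.
fake_header = (
    b"\x45\x00\x00\x14"  # version 4, header length 5 words, total length 20
    b"\x00\x00\x00\x00"  # identification, flags/fragment offset
    b"\x40\x06\x00\x00"  # TTL 64, protocol TCP, checksum (left zero here)
    + socket.inet_aton("203.0.113.5")   # spoofed source
    + socket.inet_aton("198.51.100.9")  # destination
)
print(ipv4_src_dst(fake_header))  # ('203.0.113.5', '198.51.100.9')
```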

Nefarious actors also apply offensive cyberattack capabilities to spread disinformation through spear-phishing. This type of attack baits an individual or an organization into opening a malicious link or an email attachment that harbors a malware payload (software that enables an attacker to gain unauthorized access or wreak havoc). Spear-phishing attacks either send victims to a fabricated website that asks for login credentials (credential spear-phishing) or download software directly onto the victim’s computer (drive-by downloads) (Bossetta, 2018). Though more commonly delivered via email, spear-phishing via social media increased by 500% in 2016, with state actors accounting for 25% of successful phishing attacks (Bossetta, 2018).
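On the defensive side, one common tell of credential spear-phishing is a link whose domain is a near-miss of a trusted one. The sketch below flags such lookalike domains with a plain Levenshtein distance check; the trusted-domain list and the distance threshold are assumptions for illustration, not a production filter.

```python
# Illustrative defensive sketch: flag links whose domain is a near-miss of a
# trusted domain, a common tell of credential spear-phishing pages. The
# trusted list and distance threshold are assumptions for the example.
from urllib.parse import urlparse

TRUSTED = {"facebook.com", "twitter.com", "paypal.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def looks_like_phish(url: str) -> bool:
    """True if the URL's host is suspiciously close to, but not, a trusted domain."""
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")
    if host in TRUSTED:
        return False
    return any(edit_distance(host, good) <= 2 for good in TRUSTED)

print(looks_like_phish("https://faceb00k.com/login"))  # True: two-character swap
print(looks_like_phish("https://www.facebook.com/"))   # False: exact trusted match
```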

To most Americans, the threat of foreign actors trying to influence domestic affairs seems trivial, yet the dissemination of disinformation is not unique to social media. The height of this type of warfare occurred between the Warsaw Pact and NATO nations during the Cold War. According to scholars Roy Godson and Thomas Rid, over 10,000 individual disinformation operations were carried out during the Cold War alone, involving as many as 15,000 personnel (U.S. Senate Select Committee on Intelligence, 2019). Recently, however, the United States has entered the dimension of “total war,” with cyber operations shifting the battlefield past the frontlines and into the homes of most Americans.

Information Warfare (IW) has lasting implications beyond the intended purpose of altering narratives and fostering conspiracy theories. One of the most infamous examples of propaganda emerged from the Soviet Union in the 1980s. Commonly known as Operation INFEKTION, it was a series of disinformation campaigns run by the Soviet Committee for State Security (KGB) promoting the idea that the United States was responsible for the creation of HIV and AIDS. The story was originally published in a Soviet-funded Indian newspaper, The Patriot, in 1983. The article featured an interview with an American scientist claiming that the virus was a “result of the Pentagon’s experiments to develop new and dangerous biological weapons” (Bates, 2010). By late 1987, the theory had appeared in more than 200 publications in 80 countries, and a 1992 study found that 15% of Americans believed the virus had been created deliberately (Bates, 2010).

Historically, the spread of disinformation was restricted in scope and scale by the limitations of technology, but social media has dramatically expanded the speed, size, and reach of such campaigns. Social media activity has skyrocketed over the past decade; according to Statista, there are more than 3.6 billion users worldwide, almost 1 billion more than in 2017. As of the second quarter of 2020, Facebook had 2.7 billion monthly active users and Twitter had more than 330 million (Statista, 2020). Nationally, Americans have become accustomed to social media being a part of their daily lives. In 2019, over 72% of all Americans used some form of social media, representing over 200 million people (Pew Research Center, 2021). This is a drastic rise from 2005, when only 5% of American adults used even one platform. The increase is compounded by the reliance Americans have come to place on social media: according to a recent Pew Research Center study, 62% of Americans get their news via social media platforms. This reliance poses a possible threat to public knowledge through fast-moving disinformation attacks. Scientists at the Massachusetts Institute of Technology (MIT) analyzed more than 126,000 stories and determined that misleading or incorrect stories travel six times faster than accurate ones (Dizikes, 2018).

Methodology

This paper utilizes the structured analytic technique (SAT) of chronology to examine the probability that disinformation and other social media attacks will continue to affect the United States and its allies. Chronology is the depiction of a series of events in the order in which they occur; it puts into context the relationship between events and the times at which they take place. The technique can be applied to raw data to better understand the timing of events, identify gaps and trends, and place events into context (Kaiser & Pherson, 2014).
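As a minimal illustration of the technique, the sketch below takes raw (year, event) records like those in Figure 1.1, orders them chronologically, and flags gaps that may merit further collection. The sample events and the one-year gap threshold are illustrative assumptions, not part of the cited methodology.

```python
# A minimal sketch of the chronology technique: order raw event records by
# date, then scan for gaps that may indicate missing reporting. The sample
# events and the one-year gap threshold are illustrative assumptions.
events = [
    (2016, "Russia increases efforts in U.S. presidential election"),
    (2013, "First known Russian social media campaign in Ukraine"),
    (2015, "Russian bots active on Twitter in Western nations"),
    (2014, "IRA begins operations against the U.S."),
    (2020, "Russian interference in U.S. presidential election"),
]

timeline = sorted(events)  # chronology: place events in the order they occurred
for year, description in timeline:
    print(f"{year}: {description}")

# Flag gaps longer than one year as candidate areas for further collection.
for (y1, _), (y2, _) in zip(timeline, timeline[1:]):
    if y2 - y1 > 1:
        print(f"Gap: no recorded events between {y1} and {y2}")
```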

The analyst applied Intelligence Community Directive (ICD) 203 to determine the probability of state and non-state actors using IW operations against the United States and U.S. elections. The purpose of ICD 203 is to establish analytic standards that apply to the intelligence community’s products. The researcher uses a numerical range from 1% to 99% to express the level of probability. The range is divided into increments expressing likelihood, with the lowest band of 1-5% indicating “almost no chance” and the highest band of 95-99% indicating that an event is “almost certain” (Office of the Director of National Intelligence, 2007).
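A simple translation table makes the scale concrete. In the sketch below, the two endpoint bands come from the description above; the intermediate bands are an assumption drawn from the seven-step likelihood scale published in the ICD 203 standard.

```python
# Hedged sketch mapping a numeric probability to ICD 203 likelihood language.
# The 1-5% and 95-99% endpoint bands are from the text above; the intermediate
# bands follow the seven-step scale in the published ICD 203 standard.
BANDS = [
    (5,  "almost no chance"),
    (20, "very unlikely"),
    (45, "unlikely"),
    (55, "roughly even chance"),
    (80, "likely"),
    (95, "very likely"),
    (99, "almost certain"),
]

def likelihood_term(percent: int) -> str:
    """Translate a 1-99% probability estimate into ICD 203 wording."""
    if not 1 <= percent <= 99:
        raise ValueError("ICD 203 estimates range from 1% to 99%")
    for upper, term in BANDS:
        if percent <= upper:
            return term
    return "almost certain"  # unreachable given the validation above

print(likelihood_term(97))  # 'almost certain', the band used in the Conclusion
```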

Analysis

Applying chronology reveals a trend of information warfare becoming a common operational tactic in cyber operations against the U.S. and its allies. Social media is used as an active measure by both state and non-state actors. Figure 1.1 illustrates information operations conducted by the Russian Federation and other state and non-state actors. These actions emphasize that social media is used as a systematic tool against the U.S. and its interests.

Figure 1.1

Social Media Used in Information Warfare

2013: First known Russian social media campaign, operated in Ukraine
2014: Mass protests in Ukraine against corruption and the regime’s oppressive, anti-Western policies
      Russian interference in a Western election: the Scottish independence referendum
      IRA begins operations against the U.S. through “Project Lakhta”
2015: Russian bots active on Twitter, supporting narratives beneficial to Russia in Western nations
2016: Alleged Russian influence in the Brexit referendum in the United Kingdom
      Russia increases efforts in the 2016 U.S. presidential election
2017: Robert S. Mueller III appointed Special Counsel to investigate Russian election interference
2018: Facebook CEO Mark Zuckerberg testifies to Congress on social media misinformation
2020: Russian interference in the 2020 U.S. presidential election

Under President Vladimir Putin, the Russian Federation has become one of the primary nation-states engaging in disinformation to undermine U.S. global hegemony. As Putin remarked at the Saint Petersburg International Economic Forum in 2017, “fingerprinting does not apply to cyberspace” (University of Washington, 2017). Russia began its systematic manipulation of social media during the Ukrainian crisis in 2013. The crisis ensued when the former President of Ukraine, Viktor Yanukovych, suspended association agreements pertaining to integration into the European Union (EU). This sparked mass protests by proponents of the agreement against the Ukrainian government, resulting in the ousting of Yanukovych in February 2014.

While many Ukrainians protested corruption and the anti-Western, oppressive policies of the regime, Russia portrayed a different, pro-Russian narrative in Crimea. Russia used state-run media, including Russia 24, NTV, Channel One (ORT), and various other platforms, to push the message that the Crimean Peninsula was anti-Ukrainian and pro-Russian following President Viktor Yanukovych’s exile from Ukraine (University of Washington, 2017). Russia’s military intelligence service launched a covert operation to win over the people of Crimea by creating a series of fake profiles on Facebook and its Russian equivalent, VKontakte. The profiles represented ordinary citizens who had become disillusioned with the opposition protests at Kiev’s central square, the Maidan. One of the most notorious examples was a persona calling himself Ivan Galitsin, supposedly a convicted Russian drug smuggler serving time in a U.S. prison; Galitsin posted a comment on a British newspaper article claiming that “there was a coup in Ukraine” (Nakashima, 2017). Russia exploited existing ethnic divisions within Ukraine to further Putin’s agenda of annexing Crimea by using troll organizations and bots. Domestic support for Russia’s actions was bolstered by government bots injected into the political conversation to steer the narrative in Putin’s favor; between 2014 and 2015, as much as 85% of the active Twitter accounts in Russia tweeting about politics were government bots (Nemr & Gangware, 2019). This was shortly followed by Russia’s first attempt at interference in a Western election.

The Russian Federation is credited with interfering in European elections. The long-anticipated “Russia Report,” written by the British Intelligence and Security Committee of Parliament (ISC), concluded that Russia “undertook influence campaigns in relation to the Scottish independence referendum in 2014” (Daisley, 2020). The consensus behind Russia’s motive was simple: if Scotland gained independence, it would weaken the UK and adversely affect the North Atlantic Treaty Organization (NATO). When the referendum took place on September 18, 2014, the “No” vote emerged victorious over the “Yes” vote by a margin of 55.3% to 44.7% (Government of United Kingdom, n.d.). Despite much speculation, the “Russia Report” does not answer whether Russia attempted to influence the results of the 2016 United Kingdom European Union membership referendum, commonly known as Brexit. In a similar vein to the 2014 Scottish referendum, that vote gave the electorate the opportunity to decide whether the UK should remain a member of the EU or leave; the measure passed by a narrow margin, with 52% of voters in favor of leaving the EU (BBC, n.d.). The report avoids Brexit altogether but does note that Russian influence has become commonplace in UK politics.

Russian influence efforts on Western elections targeted the United States in 2016; the main form of manipulation originated from a St. Petersburg-based troll farm known as the Internet Research Agency (IRA) (Mueller, 2019). The IRA conducted social media attacks by adopting fictitious U.S. personas to target Americans with the intent of “sowing discord in the U.S. political system” (Mueller, 2019). The group is part of a larger Russian information apparatus known as “Project Lakhta” (Mueller, 2019). The IRA began operations in the spring of 2014 and consolidated all departments pertaining to the United States into a single department known as the “Translator,” which had operations both on U.S. soil and in cyberspace. In June 2014, two IRA employees, Anna Bogacheva and Aleksandra Krylova, received visas to travel to the U.S. under the false pretense of visiting friends they had met at a party; the pair traveled to the United States to obtain information and photographs that would be used in later social media posts (Mueller, 2019). By 2016, the operations included “supporting the Trump Campaign and disparaging candidate Hillary Clinton” (Mueller, 2019), as well as supporting Clinton’s rival, Senator Bernie Sanders, in the lead-up to the Democratic Party presidential primaries. In support of Trump, the IRA purchased over 3,500 advertisements, worth more than $100,000, which often promoted pro-Trump rallies (Mueller, 2019).

During the 2016 operations, the IRA gained a large following on multiple platforms and began more complex campaigns, including phishing scams targeting federal employees. IRA-controlled accounts on both Facebook and Instagram had hundreds of thousands of U.S. followers. These groups often pretended to be affiliated with U.S. political or grassroots organizations, or posed as fictitious groups or non-political entities such as Black Lives Matter protesters (Mueller, 2019). By comparison, IRA-controlled Twitter accounts had tens of thousands of followers, including multiple U.S. political figures who retweeted IRA content (Mueller, 2019). Moreover, the campaign deployed by Russia also included a phishing scam targeting federal personnel. According to a leaked intelligence community report acquired by Time magazine, Moscow sent tailored malware to more than 10,000 Department of Defense employees via Twitter (Calabresi, 2017). The messages contained articles about events of the previous weekend, such as the Academy Awards (Oscars) or national sporting events, but when clicked, they took users to a Russian-controlled server that downloaded a program allowing hackers in Moscow to take control of the victim’s phone or computer and Twitter account (Calabresi, 2017).

As Trump emerged victorious from the 2016 election, reports surfaced from various media outlets raising concerns over his campaign’s collusion with the Russian government. With pressure mounting, Deputy Attorney General Rod J. Rosenstein (acting after Attorney General Jeff Sessions recused himself due to ties to Trump’s election campaign) announced on May 17, 2017 the appointment of a Special Counsel headed by former FBI Director Robert S. Mueller III. Mueller and his staff had the authority to investigate “Russian government efforts to influence the 2016 presidential election and related matters” (U.S. Department of Justice, 2017). Attorney General William Barr received the final report from Mueller on March 22, 2019, and the Department of Justice (DOJ) released a redacted version of the 448-page report on April 18, 2019. Although most of what the public knows about foreign influence in the 2016 election comes from the Mueller Report, the report does not issue a verdict on whether the interference changed the outcome of the election. The investigation focused on whether the Trump campaign coordinated with the Russian government on 2016 election interference; Mueller was unable to find sufficient evidence, and therefore no indictment was brought forward.

While some have pointed to the suspected collusion between the Trump campaign and the Russian government to explain his victory, others have pointed to more conventional blunders, such as Clinton’s lack of campaigning in Michigan or the FBI’s investigation into the personal email system she used during her time as Secretary of State (FBI National Press Office, 2016). Numerous academic studies since the election have analyzed the effect “fake news” played in its outcome; however, the studies fail to reach a consensus. One study from Stanford University assesses that it is “very unlikely” that fake news swayed the results (Allcott & Gentzkow, 2017). According to the authors, Matthew Gentzkow and Hunt Allcott, if fake news carried the same persuasion rate as a television ad, about 0.02 percentage points per exposure, it “would have changed vote shares by an amount on the order of hundredths of a percentage point” (2017). The study emphasized that the persuasion rate of fake news on U.S. voters was not statistically significant, and the figure found by Gentzkow and Allcott suggests a persuasion impact much smaller than Trump’s margins of victory in the swing states (2017).
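The arithmetic behind that claim can be restated in a few lines. The sketch below is a back-of-the-envelope check: the stories-recalled figure and the per-exposure persuasion rate are taken from Allcott and Gentzkow’s study, while the swing-state margin is an approximate figure added here purely for comparison.

```python
# Back-of-the-envelope restatement of the Allcott & Gentzkow (2017) argument.
# The ~1.14 stories-recalled figure and the 0.02 percentage-point persuasion
# rate per ad-equivalent exposure are drawn from their study; the swing-state
# margin below is an approximate comparison figure, not from the study.
stories_recalled = 1.14   # avg fake news stories a U.S. adult saw and remembered
persuasion_pp = 0.02      # vote share shifted per exposure, in percentage points

shift_pp = stories_recalled * persuasion_pp
print(f"Implied vote-share shift: {shift_pp:.3f} pp")  # ~0.023 pp

swing_margin_pp = 0.7     # order of Trump's margins in the closest swing states
print(f"Fraction of a swing-state margin: {shift_pp / swing_margin_pp:.2%}")
```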

Conversely, researchers from Ohio State University have argued the opposite: that fake news “most likely” swayed the election in Trump’s favor (Gunther et al., 2018). Following the election, they conducted a nationwide post-election survey and found that only 77% of Obama voters ended up voting for Clinton (Gunther et al., 2018). The survey asked recipients to assess their belief in three claims that circulated prior to the election: 12% believed that “Hillary Clinton is in very poor health due to a serious illness,” 20% believed she sold weapons to ISIS during her time as Secretary of State, and 8% believed that “Pope Francis endorsed Donald Trump prior to the election” (Gunther et al., 2018). The claims posed in the questionnaire were popular misinformation narratives on social media platforms. The study concluded that disinformation campaigns negatively targeting Clinton influenced voters in the 2016 presidential election.

Russia has maintained its foreign policy of influencing U.S. elections by exploiting Americans’ social divides via social media. On March 16, 2021, Director of National Intelligence Avril Haines declassified a report on the 2020 U.S. federal elections entitled “Foreign Threats to the 2020 US Federal Elections.” According to this report, Russia maintained its approach from the 2016 election by “denigrating President Biden’s candidacy and the Democratic Party, supporting former President Trump, undermining public confidence in the electoral process, and exacerbating socio-political divisions in the US” (National Intelligence Council, 2021). Moscow, which views election interference as a way to weaken the U.S. global standing and to affect future policy decisions, believed that a Biden victory would be a “disadvantage” to Russian interests (National Intelligence Council, 2021).

The Lakhta Internet Research (LIR), more commonly known by its former name, the IRA, began to circulate stories supporting then-President Trump when Biden became the presumptive Democratic nominee in April 2020. The group allegedly received tasking and strategic direction from the Kremlin and amplified domestically controversial issues using social media personas, news websites, and U.S. personas to deliver content (National Intelligence Council, 2021). The LIR established short-lived troll farms that used unwitting third-country nationals in Ghana, Mexico, and Nigeria to spread its false narratives, presumably after some of its accounts were shut down by social media platforms’ efforts to counteract the spread of foreign influence (National Intelligence Council, 2021).

In the 2020 presidential election, however, Russia was not the only nation-state engaging in influence operations, as U.S. opponents around the globe sought to undermine the nation through social media and the internet. Iran attempted to undercut the Trump administration’s reelection bid and sought to continue “exacerbating divisions in the U.S., creating confusion, and to impact the legitimacy of U.S. elections and institutions” (National Intelligence Council, 2021). Iran’s influence campaigns focused on perceived weaknesses of the United States, including the response to COVID-19, the economic recession, and social unrest (National Intelligence Council, 2021). The campaign included targeting Democratic voters in multiple states with spoofed emails designed to look as though they came from the Proud Boys, a far-right extremist group, in an attempt to intimidate voters into changing their party affiliation. Iran also created new social media accounts and maintained preexisting ones to publish thousands of pieces of content before being shut down by various platforms (National Intelligence Council, 2021).

One notable finding in the report is the absence of the People’s Republic of China (PRC) from IW efforts to influence the election. Despite engaging in trade wars and other hostilities with the U.S. over the preceding four years, China, the report concludes, “considered but did not deploy influence efforts intended to change the outcome of the U.S. presidential election” (National Intelligence Council, 2021). The Chinese government has primarily applied these techniques and procedures to its own population, using information operations to silence domestic dissent. Since 2008, China has maintained the Golden Shield Project, also known as “The Great Firewall of China,” one of the world’s most restrictive forms of censorship and surveillance. The Project, operated by the Central Cyberspace Affairs Commission of the People’s Republic of China, blocks internet users from searching banned keywords and from accessing many foreign websites, including Google, Facebook, and Twitter (Chandel et al., 2019). In addition, the country utilizes a troll operation, dubbed the “50 Cent Army,” whose members work as online commentators reportedly paid ¥0.50 per post to praise the Chinese Communist Party and distract from internal social issues (House Permanent Select Committee on Intelligence, 2019).
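For intuition, blocklist-style filtering of the kind attributed to the Golden Shield Project can be sketched in a few lines: a request is dropped if its host matches a blocked domain or its URL contains a banned keyword. The lists below are invented stand-ins; the real system’s rules, scale, and techniques (including deep packet inspection) are far more sophisticated.

```python
# Toy illustration of blocklist-style filtering of the kind attributed to the
# Golden Shield Project: a request is dropped if the host is on a domain
# blocklist or the URL mentions a banned keyword. Lists here are invented
# stand-ins for demonstration only.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"google.com", "facebook.com", "twitter.com"}
BANNED_KEYWORDS = {"protest", "censorship"}

def is_blocked(url: str) -> bool:
    parts = urlparse(url)
    host = (parts.hostname or "").lower()
    # Block the domain itself and any subdomain of it.
    if any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS):
        return True
    # Block URLs whose path or query mentions a banned keyword.
    text = (parts.path + "?" + parts.query).lower()
    return any(word in text for word in BANNED_KEYWORDS)

print(is_blocked("https://twitter.com/some_user"))          # True: blocked domain
print(is_blocked("https://news.example.com/protest-live"))  # True: banned keyword
print(is_blocked("https://example.com/weather"))            # False
```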

China’s reach over cyberspace has moved beyond the nation’s borders and into the global sphere. A little more than two years ago, the presence of Chinese diplomats on Twitter was virtually nonexistent; it has since grown to nearly 200 accounts that seek to defend their home country (Brandt & Schafer, 2020). This strategy, known as “wolf warrior diplomacy,” pursues aggressive tactics, such as propagating conflicting conspiracy theories about the origins of the coronavirus, designed to sow chaos and deflect blame (Twigg & Allen, 2021). For example, Zhao Lijian, a deputy director in the Chinese Ministry of Foreign Affairs and one of the most prominent wolf warriors, tweeted articles in March 2020 claiming that the coronavirus originated in the United States; those tweets were shared over 40,000 times and translated into 54 different languages (Twigg & Allen, 2021). Furthermore, researchers at Carnegie Mellon University combed through more than 200 million tweets discussing the coronavirus in 2020 and found that about 45% of the accounts behaved more like computerized robots than humans (Young, 2020).

Russia, Iran, and China are not the only nation-states engaging in social media warfare; disinformation campaigns have arisen in countries across the globe due to their cost-effectiveness, simplicity, and influencing power. According to a University of Oxford study, organized disinformation campaigns occurred in 70 countries in 2019, up from 48 countries in 2018 and 28 countries in 2017 (Bradshaw & Howard, 2019). In addition, Facebook and Twitter identified seven countries (China, India, Iran, Pakistan, Russia, Saudi Arabia, and Venezuela) that have used their platforms to influence global audiences (Bradshaw & Howard, 2019).

Recently, Mark Zuckerberg of Facebook, Jack Dorsey of Twitter, and the CEOs of other social media platforms have come under fire from Congress and the public to curtail the amount of false information on their platforms. The main action the platforms have taken against misinformation is removing accounts from their sites. In November 2017, a Facebook representative testified to Congress that the platform had identified 470 IRA-controlled Facebook accounts that had collectively made 80,000 posts between January 2015 and August 2017, and estimated that the IRA could have reached as many as 126 million people through its accounts (Mueller, 2019). Following the testimonies, in January 2018 Twitter announced that it had identified 3,814 IRA-controlled Twitter accounts and notified approximately 1.4 million people that they may have been in contact with one (Mueller, 2019). In 2018, Zuckerberg acknowledged to Congress that Facebook had been slow to respond to foreign interference, stating, “our sophistication in handling these threats is growing and improving quickly. We will continue working with the government to understand the full extent of Russian interference, and we will do our part not only to ensure the integrity of free and fair elections around the world” (United States Senate Committee on the Judiciary, 2018).

The companies’ latest response to misleading information is to apply fact-checking to users’ posts. Following the 2016 election, Facebook hired third-party fact-checkers to combat disinformation. In addition, in June 2020 Twitter and Facebook began adding warning labels to posts about voting, including posts by the president, directing users to authoritative information from state and local election officials. Following the January 6 insurrection at the Capitol and widespread misinformation about the COVID-19 virus, Congress held hearings on the responsibility of social media platforms (Bond, 2021). Since the 2016 election, Facebook, Twitter, and Google have increased the number of banned accounts and expanded the criteria for banning content (Bond, 2021).

Conclusion

From analyzing the relationship between IW and social media through chronology and applying ICD 203, this analyst concludes that there is a 95-99% probability (“almost certain”) that the use of disinformation and other acts of social media warfare by state and non-state actors will remain a mainstay of U.S. elections and politics. Whether foreign intervention has had any measurable impact on U.S. elections remains inconclusive, and the data, in some cases, are incomplete. What matters to state actors, however, is the perception that social media is an extremely effective tool for changing outcomes in their favor. Russia perceives victories in the annexation of Crimea, in Brexit, and in the 2016 U.S. presidential election, and it has continued to have success in sowing discord within the United States. As long as that assumption of success remains, countries are going to utilize social media in their favor moving forward. Additionally, there is no conventional downside for state and non-state actors in adopting social media manipulation as a means of warfare: the platforms cost almost nothing to use, and the locations of users can be spoofed to supply the perpetrators with some level of plausible deniability. Users of social media will have to remain vigilant, and the U.S. government should cooperate with the social media platforms to strengthen countermeasures if they want to dissuade nation-states from engaging in influence operations in the future.

References

Allcott, Hunt, & Gentzkow, Matthew. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, vol. 31, no. 2, pp. 211-235. Retrieved from http://web.stanford.edu/~gentzkow/research/fakenews.pdf.

Bates, Stephen. (2010). Disinforming the world: Operation INFEKTION. The Wilson Quarterly, vol. 34, no. 2, p. 13. Retrieved from https://link.gale.com/apps/doc/A224989880/OVIC?u=ycp_main&sid=OVIC&xid=7005b577.

BBC. (n.d.). EU Referendum: Results. Retrieved from https://www.bbc.com/news/politics/eu_referendum/results.

Bond, S. (2021). Facebook, Twitter, and Google CEOs Testify Before Congress. NPR. Retrieved from https://www.npr.org/2021/03/25/980510388/facebook-twitter-google-ceos-testify-before-congress-4-things-to-know.

Bossetta, Michael. (2018). The Weaponization of Social Media: Spear Phishing and Cyberattacks on Democracy. Journal of International Affairs, vol. 71, no. 1.5, pp. 97-106. Retrieved from https://jia.sipa.columbia.edu/weaponization-social-media-spear-phishing-and-cyberattacks-democracy.

Bradshaw, Samantha, & Howard, Philip. (2019). The Global Disinformation Order: 2019 Global Inventory of Organized Social Media Manipulation [PDF file]. University of Oxford. Retrieved from https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/09/CyberTroop-Report19.pdf.

Brandt, Jessica, & Schafer, Bret. (2020). How China’s ‘Wolf Warrior’ Diplomats Use and Abuse Twitter. Brookings Institution. Retrieved from https://www.brookings.edu/techstream/how-chinas-wolf-warrior-diplomats-use-and-abuse-twitter/.

Calabresi, Massimo. (2017). Inside Russia’s Social Media War on America. Time. Retrieved from https://time.com/4783932/inside-russia-social-media-war-america/.

Chandel, Sonali, Zang Jingji, Yu Yunnan, & Sun Jingyao. (2019). The Golden Shield Project of China: A Decade Later—An In-Depth Study of the Great Firewall. IEEE, doi:10.1109/CyberC.2019.00027. Retrieved from https://www.researchgate.net/publication/338361425_The_Golden_Shield_Project_of_China_A_Decade_Later-An_in-Depth_Study_of_the_Great_Firewall.

Daisley, Stephen. (2020). Why Putin Wants Scottish Independence. The Spectator. Retrieved from https://www.spectator.co.uk/article/why-putin-wants-scottish-independence.

Dizikes, Peter. (2018). Study: On Twitter, False News Travels Faster Than True Stories. Massachusetts Institute of Technology. Retrieved from https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308.

FBI National Press Office. (2016). Statement by FBI Director James B. Comey on the Investigation of Secretary Hillary Clinton’s Use of a Personal E-Mail System. Retrieved from https://www.fbi.gov/news/pressrel/press-releases/statement-by-fbi-director-james-b-comey-on-the-investigation-of-secretary-hillary-clinton2019s-use-of-a-personal-e-mail-system.

Fredrick, Kara. (2019). The New War of Ideas: Counterterrorism Lessons for the Digital Disinformation Fight. Center for a New American Security. Retrieved from https://www.jstor.org/stable/resrep20399?seq=1#metadata_info_tab_contents.

Government of United Kingdom. (n.d.). Scottish Independence Referendum. Retrieved from https://www.gov.uk/government/topical-events/scottish-independence-referendum/about#why-did-it-happen.

Greenemeier, Larry. (2011). Seeking Address: Why Cyber Attacks Are So Difficult to Trace Back to Hackers. Scientific American. Retrieved from https://www.scientificamerican.com/article/tracking-cyber-hackers/.

Gunther, Richard, Paul Beck, & Erik Nisbet. (2018). Fake News May Have Contributed to Trump’s 2016 Victory [PDF file]. Ohio State University. Retrieved from https://www.documentcloud.org/documents/4429952-Fake-News-May-Have-Contributed-to-Trump-s-2016.html.

House Permanent Select Committee on Intelligence. (2019). China’s Digital Authoritarianism: Surveillance, Influence, and Political Control [PDF file]. Retrieved from https://www.congress.gov/116/meeting/house/109462/documents/HHRG-116-IG00-MState-S001150-20190516.

Kaiser, L., & Pherson, R. (2014). Analytic Writing Guide. Pherson Associates.

Mueller, Robert. (2019). Report on The Investigation into Russian Interference in the 2016 Presidential Election. U.S. Department of Justice. Retrieved from https://www.justice.gov/archives/sco/file/1373816/download.

Nakashima, Ellen. (2017). Inside a Russian Disinformation Campaign in Ukraine in 2014. The Washington Post. Retrieved from https://www.washingtonpost.com/world/national-security/inside-a-russian-disinformation-campaign-in-ukraine-in-2014/2017/12/25/f55b0408-e71d-11e7-ab50-621fe0588340_story.html.

National Intelligence Council. (2021). Foreign Threats to the 2020 US Federal Election [PDF file]. Retrieved from https://www.dni.gov/files/ODNI/documents/assessments/ICA-declass-16MAR21.pdf.

Nemr, Christina, & Gangware, William. (2019). Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age. Park Advisors. Retrieved from https://www.state.gov/wp-content/uploads/2019/05/Weapons-of-Mass-Distraction-Foreign-State-Sponsored-Disinformation-in-the-Digital-Age.pdf.

Office of the Director of National Intelligence. (2007). Intelligence Community Directive 203: Analytic Standards [PDF file]. Retrieved from https://fas.org/irp/dni/icd/icd-203.pdf.

Paavola, J., Helo, T., Jalonen, H., Sartonen, M., & Huhtinen, A-M. (2016). Understanding the Trolling Phenomenon: The Automated Detection of Bots and Cyborgs in the Social Media. Journal of Information Warfare. Retrieved from https://www.jstor.org/stable/26487554?seq=1#metadata_info_tab_contents.

Pew Research Center. (2021). Social Media Fact Sheet. Retrieved from https://www.pewresearch.org/internet/fact-sheet/social-media/#who-uses-each-social-media-platform.

Statista. (2020). Number of social network users worldwide from 2017 to 2025. Retrieved from https://www.statista.com/statistics/278414/number-of-worldwide-social-network-users/.

U.S. Department of Justice. (2017). Appointment of Special Counsel. Retrieved from https://www.justice.gov/opa/pr/appointment-special-counsel.

U.S. Senate Committee on the Judiciary. (2018). Testimony of Mark Zuckerberg [PDF file]. Retrieved from https://www.judiciary.senate.gov/imo/media/doc/04-10-18%20Zuckerberg%20Testimony.pdf.

U.S. Senate Select Committee on Intelligence. (2019). Russian Active Measures Campaigns and Interference in the 2016 U.S. Election, Volume 2: Russia’s Use of Social Media [PDF file]. Retrieved from https://games-cdn.washingtonpost.com/notes/prod/default/documents/db388e88-fb27-4bd0-a674-13c71f2ff125/note/d82670fe-4762-40f9-a698-7da10f0c799c.pdf.

Twigg, Krassi & Allen, Kerry. (2021). The Disinformation Tactics Used by China. BBC. Retrieved from https://www.bbc.com/news/56364952.

University of Washington. (2017). Countering Disinformation: Russia’s Infowar in Ukraine. Retrieved from https://jsis.washington.edu/news/russia-disinformation-ukraine/.

Varol, Onur, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, & Alessandro Flammini. (2017). Online Human-Bot Interactions: Detection, Estimation, and Characterization [PDF file]. Retrieved from https://arxiv.org/pdf/1703.03107.pdf.

Ward, Antonia. (2018). ISIS’s Use of Social Media Still Poses a Threat to Stability in the Middle East and Africa. RAND Corporation. Retrieved from https://www.rand.org/blog/2018/12/isiss-use-of-social-media-still-poses-a-threat-to-stability.html.

Young, Virginia. (2020). Nearly Half of the Twitter Accounts Discussing ‘Reopening America’ May Be Bots. Carnegie Mellon University. Retrieved from https://www.cmu.edu/news/stories/archives/2020/may/twitter-bot-campaign.html.