
April 21, 2024

Joseph Al-Adam and Fauzan Amjad

Foreign Usage of AI: Implications for American National Security and Information Warfare

 

Introduction and Background Information

In the world of military strategy and operations, artificial intelligence (AI) has emerged as a powerful force, integrating a multitude of advanced technologies to expand capabilities across a variety of disciplines. AI is defined as a branch of science and technology that creates intelligent machines and computer programs to perform tasks historically requiring human intelligence. Within military contexts, AI finds multifaceted applications, extending from defense systems to cyber warfare and surveillance initiatives.

[Image source: South China Morning Post]

The State of Israel's Iron Dome is a paradigmatic example of AI integration in military defense. The Iron Dome is a nationwide anti-missile platform that leverages AI algorithms to intercept incoming missiles by predicting their trajectories and launching countermeasures. While the Iron Dome illustrates the efficacy of AI against what Israel considers threats, the system occasionally falters in ensuring absolute interception accuracy, revealing a nuanced interplay between technological advancement and operational challenges.
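To make the core idea concrete, the sketch below shows the simplest possible version of trajectory prediction: extrapolating an incoming projectile's path from two position fixes and deciding whether it threatens a defended area. This is a toy model under assumed physics (constant gravity, no drag), not the Iron Dome's actual method; fielded systems rely on far richer tracking and classification, and every name and number here is hypothetical.

```python
# A toy sketch, not Iron Dome's actual method: extrapolate an incoming
# projectile's path from two radar fixes under constant gravity and no drag,
# then decide whether the predicted impact point threatens a defended area.
# All names, coordinates, and thresholds here are hypothetical.
G = 9.81  # gravitational acceleration, m/s^2

def predict_impact(p0, p1, dt):
    """Estimate (impact_x, time_to_impact) from two (x, altitude) fixes dt seconds apart."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    # Solve altitude(t) = p1[1] + vy*t - 0.5*G*t^2 = 0 for the positive root.
    a, b, c = -0.5 * G, vy, p1[1]
    t_hit = (-b - (b * b - 4 * a * c) ** 0.5) / (2 * a)
    return p1[0] + vx * t_hit, t_hit

impact_x, eta = predict_impact((0.0, 3000.0), (150.0, 3050.0), dt=1.0)
if 0.0 <= impact_x <= 5000.0:  # hypothetical defended zone, in metres
    print(f"engage: predicted impact near x={impact_x:.0f} m in {eta:.1f} s")
else:
    print("ignore: projectile will land outside the defended zone")
```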

Beyond conventional defense mechanisms, AI plays an important role in surveillance operations, with a notable example being China's expansive employment of mass surveillance technologies. China's surveillance apparatus, which combines extensive data collection systems with AI algorithms, allows for comprehensive monitoring and analysis of even the most basic societal activities in support of law enforcement and governance objectives. This pervasive use of surveillance technology does, however, raise concerns about privacy infringement and authoritarian control, emphasizing the ethical and societal implications inherent in the militarization of AI-driven surveillance.

Within the domain of cyber warfare, Iran has rapidly advanced its capabilities by strategically leveraging cyber operations as asymmetric tools for coercion and strategic influence. Iran's cyber strategy is characterized by retaliatory actions and a focus on challenging the U.S. presence in the region, and it encompasses a spectrum of activities orchestrated by key military organizations. The combination of AI with military operations thus carries multifaceted implications, reflecting both the transformative potential and the ethical complexities inherent in the pursuit of technological superiority within the contemporary strategic landscape.

Information Warfare

What is information warfare? Information warfare (IW) is defined as "an operation conducted in order to gain an information advantage over the opponent." These operations manifest in various forms, including the use of artificial intelligence (AI), Deceptive Imagery Persuasion (DIP), and deepfakes. Notably, DIP, a concept introduced by Ryan McBeth, is gaining prominence in modern warfare tactics. DIP is operationalized through the dissemination of visually deceptive content on social media platforms: images or videos manipulated to propagate a false narrative. Instances of DIP can significantly impact public perception and potentially destabilize societal consensus on global events.

An illustrative case occurred on March 1, 2022, when a pro-Kremlin Twitter (now X) user named "Ne_nu_Che" posted a video of a news report showing body bags supposedly filled with the corpses of Ukrainian civilian casualties. In the video, however, a person inside one of the body bags grabs the top of the bag to stop it from blowing away, which led the user to claim that Ukraine was falsifying civilian casualties. Closer scrutiny revealed that the footage actually originated from an Austrian news report about climate change. Operations that use DIP in IW can be conducted by many state or non-state actors. The previous example highlighted Russia's use of DIP, but other nations such as China, Iran, and North Korea have the capability to deploy such tactics. DIP campaigns can be observed on social media platforms such as Instagram, Facebook, Twitter, and Telegram. Disinformation campaigns orchestrated through DIP pose a grave threat, as they risk widespread acceptance of false narratives, potentially undermining democratic processes.
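The Ne_nu_Che video was debunked by tracing the clip back to its original broadcast, and one simple technique behind that kind of provenance check can be sketched in code. The example below uses perceptual hashing (via the open-source imagehash library) to test whether a suspect video frame matches archived footage even after re-encoding; the file names and the match threshold are illustrative assumptions, and real verification workflows combine many such signals with reverse-image search and metadata forensics.

```python
# A minimal sketch of one provenance-checking technique: perceptual hashing.
# It can match a suspect frame to archived footage even after re-encoding or
# recompression. The file names and the distance threshold are illustrative
# assumptions, not part of any real workflow.
from PIL import Image  # pip install pillow
import imagehash       # pip install imagehash

suspect = imagehash.phash(Image.open("suspect_frame.jpg"))
original = imagehash.phash(Image.open("archived_broadcast_frame.jpg"))

# Subtraction gives the Hamming distance between the 64-bit hashes;
# small distances suggest the same underlying image.
distance = suspect - original
if distance <= 8:  # a common rule of thumb, not a standard
    print(f"likely the same footage (hash distance {distance})")
else:
    print(f"no match (hash distance {distance})")
```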

This form of modern warfare exacerbates polarization within the United States, which can impair our political processes and election comprehension while perpetuating the dissemination of misinformation. Ne_nu_Che's dissemination of false information exemplifies a broader trend: numerous social media accounts affiliated with state or non-state actors engage in similar propagandistic activities, constituting what are commonly referred to as "troll farms." These are networks of fake social media accounts specialized for a single purpose: spreading false narratives and propaganda. Such organized entities, whether backed by state or non-state actors, are becoming more apparent in our day-to-day lives as they spread disinformation on commonly used social platforms, further propagating IW campaigns.

For instance, during the 2016 U.S. elections, the "Internet Research Agency" (IRA) was accused of meddling by spreading false narratives and messages. Further research linked the IRA to Yevgeny Prigozhin, the former CEO and commander of the private military company (PMC) Wagner Group and, at the time, a close ally of Russian Federation President Vladimir Putin. This creates a serious problem: third-party groups backed by state or non-state actors have the ability and funding to conduct IW campaigns while giving their sponsors plausible deniability. That deniability makes combating IW harder, because any soft power the U.S. government exerts reaches only the third party, not the source. Evidence suggests that such entities persist in manipulating public opinion on social media platforms and major media outlets with few repercussions. On topics such as the Russia-Ukraine war, these entities have flourished in spreading disinformation. Their narratives, disseminated across platforms like Instagram, Facebook, TikTok, and Telegram, tend to favor Russia and can mislead unsuspecting users. As we progress through the 21st century, the ramifications of information warfare will only exacerbate democratic erosion and further the disintegration of the American consensus on national security interests.
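Troll farms also leave statistical fingerprints: many nominally independent accounts posting near-identical text within minutes of one another. The sketch below illustrates that single signal in its simplest form; the accounts, timestamps, and thresholds are all invented for the example, and real platform-scale detectors combine dozens of stronger behavioral and network signals.

```python
# An illustrative sketch, not an operational detector: one weak signal of
# coordinated inauthentic behavior is many accounts posting near-identical
# text within a short window. All accounts, timestamps, and thresholds
# below are invented for the example.
from collections import defaultdict

posts = [  # (account, unix_timestamp, text)
    ("acct_01", 1_700_000_000, "Ukraine is faking casualties"),
    ("acct_02", 1_700_000_004, "ukraine is faking casualties!!"),
    ("acct_03", 1_700_000_009, "Ukraine is FAKING casualties"),
]

def normalize(text: str) -> str:
    """Collapse case and punctuation so trivially edited copies cluster together."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch == " ").strip()

clusters = defaultdict(list)
for account, ts, text in posts:
    clusters[normalize(text)].append((account, ts))

for text, hits in clusters.items():
    accounts = {a for a, _ in hits}
    span = max(t for _, t in hits) - min(t for _, t in hits)
    if len(accounts) >= 3 and span <= 60:  # 3+ accounts within one minute
        print(f"possible coordination ({len(accounts)} accounts, {span}s): {text!r}")
```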

The Disintegration of the American Consensus on National Security Interests

The rise of Information Warfare (IW) poses a significant challenge to American national security by destabilizing consensus on crucial geopolitical issues. IW's primary aim is to distort public perception of global events, which can impair the United States' ability to identify threats and garner public support on various geopolitical topics. The examples provided earlier highlight IW's capacity to sway public opinion. Moreover, the world has seen an increase in conflicts and disputes across all continents, ushering in a new state of crisis stability akin to that of the Cold War, whose competitive dynamics are also resurfacing. In this contemporary landscape, conflicting parties vie to manipulate societal perceptions, mirroring Cold War strategies of outmaneuvering adversaries. Just as the Cold War never went hot, and modern conflicts should not go hot either, nations remain reluctant to wage open warfare and instead engage through covert means. This outmaneuvering is conducted through IW, with each side trying to shift the other's societal perception of current issues to gain an advantage.

The pervasive reach of the internet facilitates the rapid spread of disinformation and propaganda, endangering democracies. In the context of the United States' democratic principles, where citizen participation shapes national direction, IW plays a crucial role in shaping public opinion on conflicts like the Russia-Ukraine war and the Israel-Palestine dispute. Disinformation can spread like wildfire. If IW persists in our society without legislative action or decisive countermeasures, it will gain unwarranted power and influence over our democracy and will erode it. This effect extends to legislative impasses, political polarization, and governance challenges. IW is also empowered by our political disagreements, as troll farms capitalize on these divisions to skew the conversation in directions desirable to a foreign actor, as seen when aid for Ukraine was halted in the U.S. House of Representatives. The potential for catastrophic consequences is considerable in a world where social media and online platforms dominate information dissemination. To counter these threats, comprehensive legislative measures are essential to safeguard against foreign interference and uphold informed decision-making.

[Image source: Sensity AI]

Future Developments

The rapid advancement of deepfake technology, built on deep learning within the broader discipline of artificial intelligence, creates a multifaceted challenge with profound implications for national security, especially regarding foreign interference. Deepfake technology uses sophisticated algorithms to seamlessly manipulate audio and video media, enabling a variety of malicious activities such as defamation, political manipulation, dissemination of fake news, and the undermining of democratic processes. Detecting deepfake media is a formidable task; however, advances in current methodologies, such as analyzing blinking patterns, detecting unnatural movements, and comparing alterations in lighting, are promising avenues showing that detection is possible (a minimal sketch of the blink-analysis idea appears below).

Countermeasures to the challenges posed by deepfake technology span a spectrum of strategies, from anti-deepfake technologies to legislative frameworks and educational initiatives. Current research includes innovations in computer science such as AI watermarking and neural network analysis, which aim to increase the accuracy of detection techniques. Nonetheless, the intrinsic complexity of deepfake technology underscores the challenges associated with detection and mitigation. Antinori (2019), for example, makes clear that deepfakes can be exploited by terrorists to propagate propaganda, fostering societal destabilization and polarization that could ultimately threaten political stability and democratic institutions. It is therefore important to address the threat of deepfakes at the federal level through substantial investment in research grants to bolster detection capabilities, develop robust countermeasures, and build societal resilience against the harmful impacts of this emergent technology. Collaboration among governmental bodies, private sector entities, and academic institutions is essential to navigate the intricate landscape of deepfake threats effectively, safeguarding national security and democratic integrity.
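As promised above, here is a minimal sketch of the blink-analysis technique. It assumes per-frame eye landmarks from an off-the-shelf face-landmark model (such as dlib or MediaPipe, not included here) and computes the eye aspect ratio of Soukupova and Cech, which drops sharply when the eye closes; all data below is toy data, and a lack of blinking is an early, now weakening, deepfake cue rather than proof.

```python
# A minimal sketch of the blink-analysis idea, assuming per-frame eye
# landmarks from an off-the-shelf face-landmark model (e.g., dlib or
# MediaPipe, not included here). The eye aspect ratio (EAR) drops sharply
# when the eye closes; clips whose subjects almost never blink were an
# early (and now weakening) deepfake indicator.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: six (x, y) landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count downward crossings of the EAR blink threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

open_eye = np.array([[0, 0], [2, 1], [4, 1], [6, 0], [4, -1], [2, -1]], float)
print(f"open-eye EAR: {eye_aspect_ratio(open_eye):.2f}")  # ~0.33

ears = [0.31, 0.30, 0.12, 0.10, 0.29, 0.32, 0.30]  # toy per-frame EAR values
print(f"blinks detected: {count_blinks(ears)}")           # 1
# Humans blink roughly 15-20 times per minute; a long clip with near-zero
# blinks is one weak cue that the video may be synthetic.
```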

Conclusion and Outlook

The interplay between artificial intelligence (AI) and military operations presents a complex landscape marked by transformative potential and ethical considerations. As demonstrated by the deployment of the Iron Dome and China's mass surveillance technologies, AI integration in defense and surveillance operations exemplifies the evolution of military capabilities. However, challenges such as operational limitations and ethical concerns underscore the nuanced nature of AI-driven militarization. Moreover, the emergence of Information Warfare (IW) introduces a new dimension of conflict, characterized by the dissemination of disinformation and propaganda to manipulate public perception and undermine democratic processes. The case of Deceptive Imagery Persuasion (DIP) illustrates the grave implications of IW for societal consensus and political stability.

As we navigate the contemporary strategic landscape, it becomes evident that future developments in technology, such as deepfake technology, pose multifaceted challenges to national security and democratic integrity. The potential for deepfakes to propagate misinformation and manipulate public opinion underscores the imperative for robust detection mechanisms and countermeasures. Addressing these challenges requires a comprehensive approach, encompassing technological innovation, legislative frameworks, and collaborative efforts across governmental, private, and academic sectors.

Moving forward, it is essential to prioritize investment in research and development to enhance AI capabilities, fortify cybersecurity defenses, and mitigate the risks associated with IW and emerging technologies like deepfakes. Moreover, fostering international cooperation and dialogue is crucial in establishing norms and regulations governing the ethical use of AI in military operations and combating disinformation campaigns. By addressing these challenges proactively, we can strive towards a more secure and resilient strategic landscape, safeguarding democratic principles and promoting stability in an increasingly complex world.

Bibliography

Antinori, Arije. "ECIAIR 2019 European Conference on the Impact of Artificial Intelligence and Robotics." Google Books, November 1, 2019. https://books.google.com/books?hl=en&lr=&id=8MXBDwAAQBAJ&oi=fnd&pg=PA23&dq=deepfake+military&ots=hDPjlI381j&sig=EpGg_59dLgEAHEPyze-0EqGZU3E#v=onepage&q=deepfake%20military&f=false.

Aro, J., S. Macdonald, T. Thomas, and R. Thornton. "What Is Information Warfare?" Edited by J.D. Ohlin, K. Govern, and C.O. Finkelstein. MEDIA-(DIS)INFORMATION-SECURITY, 2021. https://www.nato.int/nato_static_fl2014/assets/pdf/2020/5/pdf/2005-deepportal4-information-warfare.pdf.

Fetzer, James H. "What Is Artificial Intelligence?" Artificial Intelligence: Its Scope and Limits, n.d.

Guardian Staff. "'Troll Factory' Spreading Russian Pro-war Lies Online, Says UK." The Guardian, July 19, 2023. https://www.theguardian.com/world/2022/may/01/troll-factory-spreading-russian-pro-war-lies-online-says-uk.

McBeth, Ryan. "Newsmax Media." Newsmax, n.d. https://www.newsmax.com/.

Morgan, Forrest E., Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman. Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World. RAND Corporation, 2020. https://www.rand.org/content/dam/rand/pubs/research_reports/RR3100/RR3139-1/RAND_RR3139-1.pdf.

Silverman, Craig, and Jeff Kao. "Infamous Russian Troll Farm Appears to Be Source of Anti-Ukraine Propaganda." ProPublica, April 25, 2022. https://www.propublica.org/article/infamous-russian-troll-farm-appears-to-be-source-of-anti-ukraine-propaganda.

Van Der Merwe, Joanna. "Iron Dome Shows AI's Risks and Rewards." CEPA, March 22, 2024. https://cepa.org/article/iron-dome-shows-ais-risks-and-rewards/.