
The Future of Artificial Intelligence: Autonomous Weapons Systems' Impact on Terrorism

This article is written by a student writer from the Her Campus at Clark chapter.

The surge of contemporary technology has raised a threatening issue, whether or not humans choose to acknowledge the power of artificial intelligence (AI). As one of the most pressing political issues of our time, AI has been deemed the third revolution in warfare, succeeding gunpowder and nuclear weaponry. An open letter announced at the opening of the 2015 International Joint Conference on Artificial Intelligence (IJCAI), notably signed by over 1,000 professional AI experts and researchers, states:

 

The endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.

 

Even highly regarded professors, educators, developers, and researchers have come forward to dispute the alleged long-term benefits of AI in society, and especially to warn of the dehumanizing influence it will have on battlefield bloodshed. Thus, to justify a ban on all lethal autonomous weapons, dubbed "killer robots," it is necessary to show that AI and lethal autonomous weapons systems (LAWS) pose a greater threat than benefit to international security, a system that shapes daily life largely unnoticed.

 

The legal definitions of AI, LAWS, and UAVs are largely umbrella terms covering a wide range of technologies that share traits with the main category. Continued research and development will eventually yield a precise, detailed definition for each term; currently, however, the definitions encompass a wide range of technological forms that infiltrate daily life. For example, AI is defined as "software that equips a computerized system (e.g. a robot) with some, usually very specific, human-like capabilities such as pattern recognition, text parsing and planning/problem-solving" (Krishnan p. 5). Similarly, LAWS cover a wide spectrum of potential weapons systems, ranging from "fully autonomous weapons that can launch attacks without any human involvement" to "semi-autonomous weapons that require affirmative human action to execute a mission" (Evans & Salmanowitz 2019). Finally, UAVs, dubbed drones, are among the largest threats to international security: "aircraft that can be controlled remotely by a pilot or can fly autonomously based on preprogrammed plans or automation systems" (Ohio Uni. 2019).

 

The influence of readily available AI weapons systems on terrorist groups strongly supports the necessity of a ban on such systems; the high-tech nature of these weapons allows terrorist attacks to inflict greater international collateral damage. Both militant and terrorist groups have already adapted autonomous weaponry to their own purposes. Alvin Wilby, vice president of research at Thales, a French defense company that supplies the British Army with drones, acknowledged to the House of Lords AI Committee that terrorists and rogue states "will get their hands on lethal artificial intelligence in the very near future" (Marr 2018). For example, in 2017, "ISIS weaponized. . . drones with grenades in the battle for Mosul to retake a city from opposing forces" (Liset 2019). The procurement of autonomous weapons makes them especially accessible to terrorist groups, increasing the threat of new forms of criminal offense, including remote-controlled car bombings, kidnappings, and attacks via autonomous vehicles that directly affect the civilian public.

 

Additionally, terrorist groups have manipulated various forms of technology, including encryption technology and social media platforms, to catalyze their plans. It is therefore not far-fetched to predict that AI weapons systems will fall to similar manipulation in the near future, given their growing accessibility. However, terrorist manipulation of technology does not stop there:

 

The possible ways by which the AI technology can be hijacked and harnessed by terrorists is only limited by the power of imagination, and counterterrorism measures need to be constantly enforced for the safety of lives and properties. (Liset 2019)

 

While there are benefits to the use of AI weapons systems in counterterrorism, the development of such systems is still in its earliest stages, and investigation into their full weaponry potential remains in its adolescence.

 

However, even though some counterterrorism efforts have been successful, the risks of autonomous weapons outweigh any reward; international security remains vulnerable to the many unknowns of the AI field. The United States Department of Defense, as well as officials in the United States Air Force, want to open the discussion of the threats these weapons pose to international security, even in counterterrorism. The gravest threat of AI weapons in terrorism is the rise of UAV swarms. Mike Griffin, an aerospace engineer employed by the US Department of Defense, asks: "human-directed weapons systems can deal with one or two or a few drones if they see them coming, but can they deal with 103?" (Stroud 2018). Griffin examines the threat to international security that a surge in drones would produce, even at locations as secure as the Pentagon.

 

"There's no provable, optimal scheme" for defending against such swarms, Griffin said. So, the Pentagon might, as a result, have to build only "a pretty good scheme." Otherwise, the enemy will have an uncontested shot at succeeding. (Stroud 2018)

 

Even in an area as highly trained and secured as the Pentagon, the threat of terrorist manipulation, specifically hacking or an influx of drones used to execute attacks, is inherent to the development and use of AI weapons systems worldwide. Given how underdeveloped defenses against drone swarms remain, the offensive side holds the superior destructive capability, posing a greater risk of lethal attacks such as bombings. Thus, without a ban on AI weapons systems, terrorist groups will exploit humanity's restraint and may help trigger a third worldwide arms race, following those sparked by gunpowder and nuclear weaponry.

 

Ultimately, the best course for the safety of civilian life on a global scale is a ban on LAWS. These autonomous weapons systems pose a threat to a civil, ordered life that respects aerial and privacy laws (within the U.S. specifically). Whether or not the general public is aware, these technological developments will inevitably affect all walks of life, as they have already begun to, regardless of whether people involve or associate themselves with these technologies. Currently, the United Nations is attempting to ban such weaponry and its domestic use, or at least hopes to implement some type of government intervention or oversight. In the U.S., the government favors further developing these technologies through international collaboration, leaving civilians to contemplate whether these weapons will spark a third arms race, worldwide at that.

 

Resources and Works Cited:

Etzioni, Amitai, and Oren Etzioni. "Pros and Cons of Autonomous Weapons Systems." Army University Press, 2017, www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archive….

Evans, Haley, and Natalie Salmanowitz. "Lethal Autonomous Weapons Systems: Recent Developments." Lawfare, The Lawfare Institute, 11 Mar. 2019, www.lawfareblog.com/lethal-autonomous-weapons-systems-recent-developments.

International Joint Conference on Artificial Intelligence. "Open Letter on Autonomous Weapons." Future of Life Institute, 28 July 2015, futureoflife.org/open-letter-autonomous-weapons/.

Krishnan, A. Killer Robots. London: Routledge, 2009, https://doi.org/10.4324/9781315591070.

Liset, Victoria. "Usage of Artificial Intelligence in Terrorism and Counterterrorism." AI Tech Reporter, 10 Feb. 2019, aitechreporter.com/2019/02/10/usage-of-artificial-intelligence-in-terrorism-and-counterterrorism/.

Marr, Bernard. "Weaponizing Artificial Intelligence: The Scary Prospect Of AI-Enabled Terrorism." Forbes, 23 Apr. 2018, www.forbes.com/sites/bernardmarr/2018/04/23/weaponizing-artificial-intel…

"Recaps of the UN CCW Meetings March 25-29, 2019." Ban Lethal Autonomous Weapons, 2 Apr. 2019, autonomousweapons.org/recaps-of-the-un-ccw-meetings-march-25-29-2019/.

Stroud, Matt. "The Pentagon Is Getting Serious about AI Weapons." The Verge, 12 Apr. 2018, www.theverge.com/2018/4/12/17229150/pentagon-project-maven-ai-google-war…

"The Pros and Cons of Unmanned Aerial Vehicles (UAVs)." Ohio University, 3 Jan. 2019, onlinemasters.ohio.edu/blog/the-pros-and-cons-of-unmanned-aerial-vehicles-uavs/.

 

Monica Sager is a freelance writer from Clark University, where she is pursuing a double major in psychology and self-designed journalism with a minor in English. She wants to become an investigative journalist to combat and highlight humanitarian issues. Monica has previously been published in The Pottstown Mercury, The Week UK, Worcester Telegram and Gazette and even The Boston Globe. Read more of Monica’s previous work on her Twitter @MonicaSager3.