The Growing Concern of Lethal Autonomous Weapon Systems

Content Warning: This paper discusses sexual violence.

Lethal Autonomous Weapon Systems, or LAWS, are a growing concern not only within the United States but worldwide. “More than 40 countries — including the United States, Great Britain, Russia and China” are leading the charge, “developing a new generation of robotic weapons that can be programmed to seek out and destroy enemy targets without direct human control” (McGlynn). LAWS are not too dissimilar from consumer or commercial drone systems, but they have one distinguishing feature: weapons. Some LAWS may be small, intended for one-time use against a single target, while others may be as large as a plane or ship. Our exposure to AI at the consumer level may lead us to believe the technology is immature, yet “a Turkish-made drone known as Kargu-2 was used in Libya’s civil war in 2020” (Knight), which suggests we are well past Siri not knowing what song you want to hear. In fact, a more egregious use came when Israel deployed an “AI-assisted weapon” (Knight) in the assassination of an Iranian nuclear scientist. Opponents of weaponized AI argue that it is unethical to allow a machine to make life-and-death decisions. Proponents claim these revolutionary systems will change warfare, and humanity, for the better. I argue that LAWS must be embraced by all nations, and that all nations must be open and transparent about the rules governing their use; this is the only way humanity can avoid the dangers that have accompanied new weapons developments in the past.

Most people have seen the Terminator or Matrix films, or are at least familiar with their concepts. Both borrow the age-old trope of killer robots, an idea far older than either movie. While we appear, for now, to be far removed from either of those realities, scientists and engineers are making strides toward a future where robots fight for us, and not necessarily against us. But I suppose the “us” is subjective, depending on which side you are on: the side making the LAWS, or the side receiving their attacks. Given the varied economic and military strength of nations globally, it is safe to assume that countries with higher GDP and greater technological capability will be the first to employ LAWS. Unlike nuclear weapons, LAWS do not require complex refinement of rare fissile materials or the extremely precise detonation systems needed to split atoms. This opens the door for all countries, and even actors beyond states, to start developing LAWS. Some will argue that we should stop or prohibit development of LAWS, but LAWS are the natural progression of military technology and, as such, will not be stopped, just as the race to build the first atomic bomb could not be stopped once many countries were working toward bombs of their own. The future of LAWS will be one that requires strict regulation, transparency, and accountability; the citizenry must have full disclosure of events surrounding these machines, what their orders are, and why they are in use.

The fact that “LAWS have existed for decades in the form of close-in weapon systems and missile defense systems” (Coyne and Alshamy 190), which are limited-range defensive systems, shows that it is possible to build LAWS and use them ethically. Looking back on previous wars and conflicts, one can see that soldiers “killed many people who might have been insurgents but proved not to be” (Calhoun 183). A staggering majority of soldiers and marines surveyed thought non-combatants should not be treated with dignity and respect, and most would not report fellow soldiers for misconduct against innocent non-combatants (Wood 223). Soldiers who are angry are “7 times more likely” (Wood 224) to commit violence against innocent non-combatants. Another underreported and taboo subject is the rape of innocent civilian women, and of soldiers, during war. While “It is difficult to establish accurate statistics on the incidence of rape in conflict,” it is estimated that “25 – 50,000 Bosnian women and girls were raped in the Bosnian war,” and somewhere “between 250,000 and half a million” during the Rwandan genocide (Fiske and Shackel 124). Because the link between “war and rape of women and girls is irrefutable and well established” (Fiske and Shackel 125), this raises the question: why do we consider only some of the harms LAWS might prevent, and not all of them? War clearly affects more than soldiers; it affects civilians too, perhaps even more so, simply because of their passive role and limited power to stop aggression directed at them.

Most scholars and pundits seem to oppose LAWS, citing ethical reasons. One consequence of allowing LAWS is “lowering the cost of conflict” (Coyne and Alshamy 190), which could make it cheaper, both monetarily and psychologically, to wage war. There are also concerns that LAWS could “kill anyone anywhere” (Calhoun 187). Opponents further cite many possible technical scenarios in which a LAWS could malfunction or be commandeered by an enemy. I will not speak to those scenarios, as they are speculative at best; instead, I will evaluate the ethics surrounding LAWS. One of the greatest problems with ethics is its subjective nature. The PLA, China’s principal military force, must “obey the Party” regarding ethics and training, the Party being the CCP. This is problematic because “the PLA is unwilling to publish any materials that may provide potential adversaries insights into their specific considerations and plans for the use of new technologies,” as such information is treated as a state secret (Metcalf 1). With the PLA “intensely involved in applying AI” (Metcalf 2) to its military, one can see how the lack of any formal ethics direction, or of insight into how LAWS would be employed, if at all, should raise eyebrows.

Opponents often propose a ban on LAWS, yet it seems increasingly unlikely that one will occur, especially given the number of LAWS already in service. The U.S. uses LAWS for defense, namely the Patriot and Phalanx systems. Israel has its Harpy, which has the “very specific task of engaging radar signatures” (Sauer 240), similar to the Patriot and Phalanx. The potential of fully automated or autonomous systems is too great for the current world powers to ignore. This is why, instead of proposing a ban that will never come to fruition, there should be a push for more rules and more transparency. It is troublesome when China refuses to take a stance on LAWS because it considers disclosing its ethics surrounding AI a losing strategy. In Leviathan, Thomas Hobbes argues that war is humanity’s natural state and that preparation for war is just and ethical. With so many countries already working toward LAWS, it is only natural that all countries take on this endeavor and develop their own. As more countries join in, there will be more events in which LAWS are used, and more questions. Those questions will demand answers, and will, one hopes, steer us toward a framework the world can agree on: a doctrine dictating how and when LAWS may be used and what capabilities they may possess.

While ethics are subjective, I lean toward the Golden Rule whenever possible: treat others as you would have them treat you, even at some risk to yourself. In times of war, however, that kind of morality is hard to apply. Instead, the doctrine of double effect takes priority, justifying harm to others, such as civilians, as a foreseen but unintended consequence of defending oneself. I believe LAWS could be employed in ways that honor both the Golden Rule and the doctrine of double effect. As we saw, rape is a prevalent occurrence in war zones. This horror of war, along with so many others, such as mass killings, plundering, and pillaging, could be avoided entirely if LAWS were deployed into contested areas or war zones instead of human soldiers. Soldiers who operate current-day drones, armed or otherwise, already rely heavily on computer systems and AI for flight, target identification, and ballistic calculations. All of these systems work on probabilities, much like the human brain. The difference is that AI is far superior at recognizing patterns in large amounts of data, follows its rules the same way every time, and does not hesitate. The only difference between a LAWS and an armed drone is whether a human pushes the button or the system is programmed to decide when to push it itself.
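To make that last distinction concrete, here is a deliberately simplified, hypothetical sketch in Python. It is not any real weapon interface; the class, function names, threshold, and confidence values are all invented for illustration. Both paths consume the same probabilistic detection; the only thing that changes is whether a person or a programmed rule authorizes acting on it.

```python
# Hypothetical illustration only -- not a real weapon interface.
# Both paths consume the same probabilistic output; the only change
# is who (or what) authorizes acting on it.
from dataclasses import dataclass

@dataclass
class Detection:
    target_id: str
    confidence: float  # probability produced by the pattern-recognition model

def human_in_the_loop(detection: Detection, operator_confirms: bool) -> bool:
    """Armed drone today: the system recommends, a person pushes the button."""
    return operator_confirms

def autonomous(detection: Detection, threshold: float = 0.95) -> bool:
    """LAWS: the button-press rule itself is programmed in advance."""
    return detection.confidence >= threshold

d = Detection(target_id="contact-07", confidence=0.97)
print(human_in_the_loop(d, operator_confirms=False))  # a human can still decline
print(autonomous(d))  # the programmed rule fires the same way every time
```

The pipeline up to the decision point is identical in both cases; what separates an armed drone from a LAWS, on this view, is a single, separable authorization rule.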

Science and engineering, that is, technology, move in an iterative process, constantly building on a global body of shared information and knowledge. At one time, knowledge of superweapons such as the atomic bomb was considered top secret; now anyone can learn how they are built and how they function. The same holds for drones, weaponized drones, and LAWS. Additive manufacturing can now be done in the comfort of your home with inexpensive 3D printers, and the electronics needed to build a drone are the same ones used for most radio-controlled vehicles. Couple these with some computing components, attach your weapon of choice, and you have a LAWS. Of course, the AI framework is still missing, but with open-source software and a world hungry for AI development, there is no shortage of people who can build capable AI through machine learning. We are also seeing AI become more efficient and more capable, running on less hardware and being pushed further toward the edge. Companies such as DeepSeek and Tesla have demonstrated this: DeepSeek by optimizing its training methods and models, and Tesla by running inference locally in each vehicle.
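As a hedged illustration of how low the software barrier has become, the sketch below loads a freely downloadable, pretrained open-source object detector and runs it on a single image. It assumes PyTorch and torchvision are installed; the image file name is hypothetical, and the model is a general-purpose detector, not anything weapons-specific.

```python
# A minimal sketch: commodity pattern recognition with open-source,
# pretrained weights, no training required. Assumes PyTorch + torchvision.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)  # downloads pretrained weights
model.eval()  # inference only

img = read_image("sample_frame.jpg")  # hypothetical input image
batch = [weights.transforms()(img)]   # preprocessing bundled with the weights

with torch.no_grad():
    output = model(batch)[0]  # dict with "boxes", "labels", "scores"

# Each detection comes with a confidence score a downstream system could act on.
for label, score in zip(output["labels"], output["scores"]):
    print(weights.meta["categories"][int(label)], round(float(score), 2))
```

The point is not this particular model but that pattern recognition of this caliber is freely available on commodity hardware, which is precisely why the development of LAWS is unlikely to be bottlenecked by software.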

The topic of war is a painful one, and I fear it is not a topic that will stop being discussed any time soon. Yet with the advent of LAWS, I believe we may see the day when war becomes pointless, or we can at least hope for that outcome. History has shown that ever more efficient and violent weapons were often created to end all wars, but they have instead only made wars more gruesome and horrific. LAWS present a new variable, one that has never been tried: removing the humans from the conflict. Robots are emerging in all walks of life, military or otherwise. From critical medical deliveries in Rwanda to hot lunches in Los Angeles, these systems are changing our lives for the better. A United States with an entirely automated military would mean no more drafts, no more sons dying, no more broken families. No more soldiers returning home with PTSD. No more standing military occupying 85 countries. No loss of life during conflicts or wars. And other countries could obtain all of those benefits simply by developing their own LAWS. The United States also spends considerably more on its military than any other country, and military and space technology reliably trickles into the consumer market; GPS is a prime example. Directing military funds toward LAWS will therefore help accelerate drone development in the private sector as well.

While there is a clear slippery-slope argument regarding LAWS, particularly that machine-learning AI could endanger humans, I believe these risks can be mitigated through openness and rules. One such rule would be to allow LAWS to target only other LAWS or weapons. To ensure accountability, an international oversight committee should be created in which every country is represented and no country holds veto power. This would give every populace representation while preventing any single power from dominating the decision-making process, as we see with the current United Nations Security Council. The rules, statutes, bylaws, and so forth for each country in regard to LAWS, such as a prohibition on targeting civilians, would be publicly available. While the path to such a system is fraught with challenges, this approach offers the possibility of a future where the tragedies of war are minimized and human life, both military and civilian, is spared.

Works Cited

Calhoun, Laurie. “The Real Problem with Lethal Autonomous Weapons Systems (LAWS).” Peace Review, vol. 33, no. 2, 2021, pp. 182–189, https://doi.org/10.1080/10402659.2021.1998746.

Coyne, Christopher J., and Yahya A. Alshamy. “Perverse Consequences of Lethal Autonomous Weapons Systems.” Peace Review, vol. 33, no. 2, 2021, pp. 190–198, https://doi.org/10.1080/10402659.2021.1998747.

Fiske, Lucy, and Rita Shackel. “Ending Rape in War: How Far Have We Come?” Cosmopolitan Civil Societies: An Interdisciplinary Journal, vol. 6, no. 3, 2014, pp. 123–138.

Wood, Nathan Gabriel. “The Problem with Killer Robots.” Journal of Military Ethics, vol. 19, no. 3, 2020, pp. 220–240, https://doi.org/10.1080/15027570.2020.1849966.

Knight, Will. “Autonomous Weapons Are Here, but the World Isn’t Ready for Them.” Wired, Condé Nast, 19 Dec. 2021.

McGlynn, Daniel. “Robotic Warfare.” CQ Researcher, 23 Jan. 2015, pp. 73–96.

Metcalf, Mark. “The PRC Considers Military AI Ethics: Can Autonomy Be Trusted?” Frontiers in Big Data, vol. 5, article 991392, 25 Oct. 2022, https://doi.org/10.3389/fdata.2022.991392.

Sauer, Frank. “Stepping Back from the Brink: Why Multilateral Regulation of Autonomy in Weapons Systems Is Difficult, Yet Imperative and Feasible.” International Review of the Red Cross, vol. 102, no. 913, 2020, pp. 235–259, https://doi.org/10.1017/S1816383120000466.