November 26, 2020
Author: Yoseph Genene

Have you ever wondered how YouTube videos are recommended to you, or where the ads and suggestions related to your previous activity on the internet, on Facebook or other social media platforms, come from? Setting data and privacy concerns aside for another time, the immediate answer to what lies behind all of these is artificial intelligence algorithms. More than we care to admit, artificial intelligence (AI) tremendously affects our daily lives.
Humanity has always been fascinated by the idea of creating artificial life. The concept of AI, as some researchers argue, dates back to ancient Greek mythology. Hesiod's Talos, the bronze man made to be the warder of Crete, embodies the idea of an intelligent robot. Hesiod also originally described Pandora as an artificial, evil woman built by Hephaestus and sent to Earth on the orders of Zeus to punish humans for discovering fire.
The Oxford dictionary defines AI as "the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages". Humankind has long experience of pressing emerging technologies into military use; AI is no exception. International humanitarian law (IHL) is not opposed to new technologies in warfare. Nonetheless, it requires that any new technology of warfare must be used, and must be capable of being used, in compliance with existing rules of IHL.
One of the fundamental principles of IHL, the principle of distinction, dictates that "the parties to the conflict must at all times distinguish between civilians and combatants. Attacks may only be directed against combatants. Attacks must not be directed against civilians".
Combatants must also distinguish themselves (i.e., allow their enemies to identify them) from all other persons (civilians), who may neither be attacked nor directly participate in hostilities.
The introduction of AI into armed conflicts has brought perceptible challenges to IHL, especially to the principle of distinction. Such fifth-generation warfare technologies cannot discriminate between combatants and non-combatants, or other immune actors such as service workers, retirees, and combatants who are wounded, have surrendered, or are mentally ill, in a way that would satisfy the principle of distinction. For example, in Pakistan an attempt to kill 41 men resulted in the deaths of an estimated 1,147 people. In Yemen, 17 named men were targeted multiple times, but the strikes on them killed 273 people, at least seven of them children. Both attacks were conducted by US drones. From 2004 to 2014, the number of US drone strikes in Pakistan reached 400. Research by the Bureau of Investigative Journalism finds that fewer than 4% of the people killed have been identified in available records as named members of Al-Qaeda.
What jeopardizes the principle of distinction is that we do not have an adequate definition of a civilian that can be translated into computer code. The Geneva Conventions do not provide a definition that could supply a machine with the necessary information; Additional Protocol I defines a civilian only in the negative sense, as someone who is not a combatant. AI lacks the components required to ensure compliance with the principle of distinction. First, it does not have adequate sensory or vision-processing systems for separating combatants from civilians. Even if machines had adequate sensing mechanisms to detect the difference between civilians and uniform-wearing military personnel, they would still lack the battlefield awareness and common-sense reasoning needed to assist in discrimination decisions, as article 44(3) of Additional Protocol I to the Geneva Conventions acknowledges: "there are situations in armed conflicts where, owing to the nature of the hostilities an armed combatant cannot so distinguish himself."
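To make the problem concrete, consider what a literal translation of that negative definition into code might look like. The sketch below is purely hypothetical; every name and attribute is an illustrative assumption, not a real targeting system, and the comments mark where the legal test outruns what sensors can observe:

```python
# Hypothetical sketch of why the negative legal definition of "civilian"
# resists translation into code. All names and attributes here are
# illustrative assumptions, not a real system or dataset.

from dataclasses import dataclass

@dataclass
class Observation:
    """What a sensor platform can plausibly report about a person."""
    wears_uniform: bool
    carries_weapon: bool

def is_combatant(obs: Observation) -> bool:
    # Observable proxies only; both are legally insufficient. Article 44(3)
    # of AP I concedes combatants sometimes cannot distinguish themselves,
    # and civilians (farmers, police) may lawfully carry weapons.
    return obs.wears_uniform or obs.carries_weapon

def is_civilian(obs: Observation) -> bool:
    # AP I defines a civilian only negatively: anyone who is not a
    # combatant. Every error in is_combatant() propagates directly here.
    return not is_combatant(obs)

# A wounded, surrendering soldier (hors de combat, hence protected) and an
# armed farmer both get the wrong answer under the proxy definition:
print(is_civilian(Observation(wears_uniform=True, carries_weapon=False)))  # False, yet protected
print(is_civilian(Observation(wears_uniform=False, carries_weapon=True)))  # False, yet likely a civilian
```

The decisive predicates the law actually requires, such as membership in armed forces, direct participation in hostilities, intent, and surrender, are contextual and not computable from imagery or metadata alone.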
SKYNET, a US National Security Agency program, applies a machine learning algorithm to the cellular network metadata of individuals to rate each person's likelihood of being a terrorist. As in most big data applications, a 0.0008% false positive rate seems very low. But unlike other applications, its failure is not an unwanted YouTube recommendation or an ad shown to the wrong person; here it is a matter of life and death for a human being. If we take 0.0008% of, say, a population of 60 million, that is 480 innocent civilians wrongly flagged. The algorithm will consider them threats, and they may consequently become the victims of an attack.
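A minimal sketch of that back-of-the-envelope calculation, using the hypothetical rate and population figures quoted above rather than official statistics:

```python
# Back-of-the-envelope estimate of innocent people flagged by a
# classifier, given a false positive rate and a screened population.
# The figures are the hypothetical ones used in the text above.

false_positive_rate = 0.0008 / 100   # 0.0008%, expressed as a fraction
screened_population = 60_000_000     # hypothetical population of 60 million

expected_false_positives = false_positive_rate * screened_population
print(f"{expected_false_positives:,.0f} innocent people flagged")  # 480
```

Even at such a seemingly negligible rate, the expected number of misclassified people scales linearly with the population screened, which is why a rate acceptable for ad targeting is unacceptable for targeting decisions.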
IHL states that the mere presence among the civilian population of military personnel, or of civilians directly participating in hostilities (DPH), does not deprive the population of its protection from attack. This requires military commanders to make context-based decisions. In addition, identifying persons hors de combat, and combatants or civilians taking direct part in hostilities who are surrendering, also requires contextual analysis and the ability to interpret human intentions. AI inherently lacks both capabilities, which are of paramount importance under IHL.
The ideal solution for AI in warfare to comply with the principle of distinction would be the full-fledged automation of the battlefield in the foreseeable future. This would lead to "bloodless fights", as humans would be excluded from the battlefield and combat would be conducted predominantly between AI-guided machines.
As for feasible measures to achieve the ultimate purpose of IHL, which is to limit the effects of armed conflict for humanitarian reasons, and in particular to uphold the principle of distinction in warfare, two recommendations can be suggested. The first is to arrange methods for pairing human intelligence with machines in interpreting their inputs and outputs. Secondly, the international community should closely monitor conflicts in which AI is used in warfare, so as to prevent, halt and sanction the actions of perpetrators.

Yoseph Genene is an undergraduate student at the School of Law, College of Law and Governance Studies, Addis Ababa University. He can be reached at geneneyoseph@gmail.com or on Twitter at @geneneyoseph.