In yesterday's post we looked at countries employing Artificial Intelligence (AI) to influence political elections in specific areas of the world. Yet earlier this week a report raised further fears about the use of AI in another field: military operations and war.
In a recent report for the Israeli-Palestinian publication +972 Magazine and the Hebrew-language outlet Local Call, Jerusalem-based journalist and filmmaker Yuval Abraham raised concerns about the ethics of using AI in military operations. Six Israeli intelligence officers provided testimony suggesting that the Israeli army employed AI systems to identify targets associated with Hamas and Palestinian Islamic Jihad (PIJ) during the Israel-Gaza conflict that began in October 2023.
Last October, Hamas militants launched an unprecedented and atrocious attack on Israel from the Gaza Strip, killing 1,200 people and taking 240 hostages. Israel retaliated, and since then the Israeli military has conducted air strikes on Gaza and launched a ground offensive. A United Nations (UN) press release dated April 5th, 2024, stated that, according to Gaza's Ministry of Health, more than 32,000 people have been killed and over 75,000 injured since the conflict started.
Abraham's report starts with a 2021 book titled "The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World," by "Brigadier General Y.S." The book, attributed to the current commander of Israel's elite intelligence unit 8200, advocates for the development of a system capable of swiftly processing vast data volumes to produce numerous potential military strike targets during wartime. The author argues that such technology would address the perceived "human bottleneck" in both target identification and decision-making processes.
Abraham's report then turns to an actual AI-based program called "Lavender," developed by Israeli intelligence, which allegedly played a significant role, especially in the early stages of the war, in identifying potential targets, including individuals suspected of affiliation with Hamas and PIJ.
Lavender reportedly analyzed data collected on most of the 2.3 million residents of the Gaza Strip through mass surveillance systems, assigning each person a rating based on their likelihood of being a militant.
While Lavender generated potential targets based on identified characteristics associated with known Hamas and PIJ operatives, final decisions regarding strikes were made by human operators. However, the report suggests that thorough verification processes to confirm the identities and affiliations of targets may have been lacking.
The approval to employ Lavender's kill list reportedly came approximately two weeks into the conflict. Intelligence personnel were tasked with manually verifying a random selection of several hundred targets chosen by the AI system. Upon finding an accuracy rate of 90% in identifying an individual's affiliation with Hamas, they ceased conducting thorough checks. Instead, they ensured that the Lavender-marked targets were male and then executed orders by bombing their residences. Human intervention occurred only at the conclusion of the selection process, after a cursory review, despite the system's known error rate of about 10%.
Occasionally, individuals with loose or nonexistent ties to Hamas or PIJ were erroneously flagged by the system. Individuals whose communication patterns resembled those of known operatives, such as police and civil defense workers, relatives of militants, and Gaza residents with names similar to those of operatives, were also flagged. At one point, the system indicated approximately 37,000 individuals as potential targets, a figure subject to fluctuation based on the interpretation of the term "operative" within the system's training data (at the reported error rate of about 10%, a list of that size could include several thousand misidentified people).
This modus operandi may account for the significant number of casualties, with data from the Palestinian Health Ministry suggesting that Israel was responsible for the deaths of around 15,000 Palestinians within the first six weeks of the conflict.
Israeli airstrikes predominantly targeted individuals in their homes, often at night when their families were present, as it was deemed easier from an intelligence perspective to locate targets there. Automated tracking systems such as "Where's Daddy?" were used to monitor marked individuals and signal when they returned home, at which point the bombings were carried out. "Broad hunting" tactics (as one source called them) were employed, wherein hundreds of targets were fed into the system and subsequent killings were carried out based on its output.
This approach led to a significant number of fatalities, particularly during the initial stages of the conflict: according to UN figures, more than half of the casualties belonged to 1,340 families, many of which were entirely wiped out within their homes. Lavender and similar systems were reportedly behind these strikes, which went ahead even when targets had swapped residences or were not home at the time of the bombing.
Consequently, thousands of Palestinians, predominantly women, children, and other non-combatants, fell victim to Israeli airstrikes due to decisions made by the AI program. These casualties, termed "collateral damage" in military terminology, included entire families. Allegedly, during the early weeks of the conflict, the Israel Defense Forces (IDF) authorized the killing of up to 15 or 20 civilians for every junior militant marked by Lavender, a departure from past protocols. Lavender, which marks individuals for assassination, differs fundamentally from another AI system, "The Gospel," which identifies buildings and structures used by militants.
Testimonies provided in the report underscore the chilling nature of the operations, with intelligence officers expressing a preference for using unguided munitions, known as "dumb bombs," against alleged junior militants marked by Lavender. This choice was made to avoid wasting more expensive precision bombs and resources on targets deemed less significant.
Following the publication of the report by +972 and Local Call, and subsequent coverage by The Guardian, the IDF issued a statement denying that it uses an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist, and asserting that the system referred to in the report is merely a database whose purpose is to cross-reference intelligence sources and provide information on terrorist organizations.
Yet, while we must approach these claims with caution, the disturbing images and videos depicting the unprecedented destruction in Gaza, coupled with the staggering number of civilian casualties, seem to corroborate the reported use of new technologies that can assist human beings in swiftly destroying designated targets.
This situation raises new questions about the application of AI: while countries are still deliberating about its use, and the European Union has passed regulations governing AI applications in a wide range of fields, the prospect of AI in military contexts, something far more ominous than most other applications, has not yet been properly assessed (will it be filed under a new sort of crime, such as "AI-assisted genocide"?).
This raises concerns about a potential dystopian future wherein a technology that only last year was mainly the subject of debates about copyright infringement may now also be used to eradicate human lives. If confirmed, such actions necessitate thorough investigation and condemnation at the highest levels. Ideally, AI should help streamline our lives, not serve as a tool for annihilation. There is a tragic parallel here: many industries are using AI to cut costs, causing job losses; in this case too, by using an AI system an army would be cutting costs (junior operatives allegedly marked by Lavender were killed with dumb bombs, in the interest of saving more expensive precision bombs), but what is at stake here is not your job but your life.
This story about the alleged use of AI in military operations therefore underscores the ethical dilemmas surrounding AI in warfare, emphasizing concerns about accuracy, transparency, and the potential for atrocities. Addressing these issues is crucial as the technology evolves, and measures are needed to mitigate harm and regulate AI's use in conflict.