
AI picks targets in Gaza war

IDF uses ‘Target Factory’ to process data about Hamas-related locations

by Marco van der Hoeven

“In the past, we would produce 50 targets in Gaza in a year. Now, this machine created 100 targets in a single day, with 50% of them being attacked.” This statement by former IDF chief of staff Aviv Kochavi highlights the role AI plays in the war Israel is fighting against Hamas in Gaza. A ‘target factory’ called Habsora (The Gospel) is responsible for selecting the targets for the relentless bombing campaign. Critics argue this use of technology increases the number of civilian casualties.

On its website the IDF states that the target factory operated around the clock during the first month of combat. At the time of that statement, in November, more than 12,000 targets in the Gaza Strip had been attacked.

A ‘Directorate of Targets’, in which regular and reservist soldiers with decoding, cyber, and research skills serve in a variety of technological roles, has been operating in Israel since 2019. Its aim is to make target production large-scale and to negate Hamas’ capabilities in the Gaza Strip. The directorate works with the intelligence bodies of the Air Arm’s Intelligence Wing, the Sea Arm’s Intelligence Division, and the Intelligence Center of the Southern Command.

Automatic tools

The ground forces operating in the Gaza Strip are fed intelligence and targets that were built into the updated operational plan. In real time these targets are transferred to the fire center of the Southern Command, and in cooperation with the air arm and the sea arm hundreds of attacks have been carried out in an instant. This was implemented alongside the ‘Pillar of Fire’ project.

According to the IDF, the Gospel allows the use of automatic tools to produce targets at a fast pace. With the help of artificial intelligence, and through the rapid and automatic extraction of updated intelligence, it produces a recommendation. The IDF stresses that the machine’s recommendation must match the identification performed by a person.

Destructive nature

Critics, however, argue that such a high number of targets cannot be thoroughly checked by humans, and say the use of AI appears ‘to have contributed to the destructive nature of the initial stages of Israel’s current war on the Gaza Strip’. An investigation by the Israeli +972 Magazine concludes that the use of AI encourages the complete destruction of locations that may have only a small link to Hamas, killing, for example, a large number of civilians in a block of private buildings where a junior Hamas member probably lives: “Since Israel estimates that there are approximately 30,000 Hamas members in Gaza, and they are all marked for death, the number of potential targets is enormous.”

Combat effectiveness

In an interview with Ynet, former IDF chief of staff Aviv Kochavi said about the use of AI: “Among all the technological revolutions, artificial intelligence (AI) is likely to be the most radical, for better or worse. The IDF recognized this field years ago and harnessed it to enhance combat effectiveness.”

As an example, he mentions the Targeting Directorate established three years ago, a unit comprising hundreds of officers and soldiers and powered by AI capabilities. It processes vast amounts of data faster and more effectively than any human, translating them into actionable targets. During Operation Guardian of the Walls it generated 100 new targets every day. “To put it in perspective, in the past, we would produce 50 targets in Gaza in a year. Now, this machine created 100 targets in a single day, with 50% of them being attacked.”

Risks

He admits that artificial intelligence poses risks, but notes that the genie is already out of the bottle with the release of numerous AI tools online. In his view, AI has the capability to surpass human knowledge and decision-making, potentially leading to a scenario where it subtly controls human choices and actions. This raises ethical and social concerns, especially if AI is used by those with low moral standards, and it poses the challenge of managing AI’s intelligence and of developing internal checks and balances within AI systems.

This echoes the sentiment at the United Nations, which recently warned that without adequate guardrails, artificial intelligence threatens global security in the evolution ‘from algorithms to armaments’.

The window of opportunity to enact guardrails against the perils of autonomous weapons and the military applications of artificial intelligence is rapidly closing as the world prepares for a “technological breakout”, the First Committee (Disarmament and International Security) heard during a day-long debate on how the use of science and technology can undermine security. But developments in the field, like those in Israel and Ukraine, show that military necessity trumps theoretical ethical objections.

Photo: Situational assessment conducted by the Chief of the General Staff, LTG Herzi Halevi, at the IAF’s Ops Room. Credit: IDF
