# The IDF's Kill Machine — Investigation Dossier
## BLUF
This investigation documents the Israel Defense Forces' deployment of AI-assisted targeting systems — principally **Lavender**, **Where's Daddy**, and **The Gospel** — in the Gaza War (2023–present) and assesses their implications for algorithmic warfare doctrine, international humanitarian law, and the global precedent they establish. It is grounded in investigative reporting by +972 Magazine and Local Call (April 2024), cross-referenced against IDF operational disclosures, academic literature on targeting doctrine, and the broader context of US AI targeting programs (Project Maven / Maven Smart System).
**Investigation status:** Active — new information expected as legal proceedings, FOIA requests, and investigative reporting continue.
---
## Key Findings
### 1. Scale of Algorithmic Target Generation
The Lavender system designated approximately **37,000 Palestinians** as potential Hamas military operatives, based on pattern-of-life analysis using machine learning. Sources told +972 Magazine/Local Call that IDF officers approved strikes after reviewing Lavender designations for an average of **20 seconds per target**.
**Analytical assessment:** A 20-second review for a targeting decision that may result in a lethal strike — under conditions of high operational tempo — constitutes a structural failure of meaningful human control. The human is not reviewing the target designation; the human is rubber-stamping it.
### 2. The Where's Daddy System
Where's Daddy tracked designated targets to their residential locations and generated strike timing recommendations when the target was assessed to be at home — typically surrounded by family. Use of this system produced the documented pattern of multi-generational family deaths in Gaza: the targeting logic deliberately accepted high civilian proximity.
**IDF doctrine cited by sources:** During certain operational periods, the permitted "collateral damage" ratio for a low-ranking Hamas target was up to **20 civilians per strike**. For senior targets, no limit was specified.
### 3. The Gospel (HaBsora) System
Gospel (HaBsora) is a separate AI system that generates a different class of targets: infrastructure, military sites, and buildings assessed as having a military function. Gospel reportedly generated target lists faster than they could be actioned, creating a "bombing factory" dynamic in which the constraint was not target identification but strike capacity.
### 4. Automation Bias and Human Review
Multiple IDF sources confirmed to investigators that:
- The systems were treated as highly reliable by junior officers
- Dissenting voices within the IDF who questioned specific designations faced institutional pressure
- The volume of designations was deliberately calibrated to the point where meaningful individual review was structurally impossible
This is textbook [[02 Concepts & Tactics/Automation Bias|automation bias]] operating at institutional scale: human reviewers defer to algorithmic outputs even when they have doubts, because the system has authority and challenging it has institutional cost.
---
## The Lavender System: Technical Assessment
**What Lavender does (assessed from investigative reporting):**
Lavender uses a machine learning model trained on the profiles of known Hamas military operatives to identify individuals whose phone usage, social network, location patterns, and behavioral signatures resemble those operatives. It produces a ranked list of suspected operatives with associated confidence scores.
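The class of technique described above — similarity scoring against known-operative profiles, producing a ranked list with confidence scores — can be illustrated generically. This is a minimal sketch of that technique, not the actual system: every feature name, weight, and threshold here is a hypothetical stand-in.

```python
# Generic sketch of confidence-ranked pattern matching, NOT the actual
# Lavender system. Each individual is scored for similarity against a
# reference profile of known operatives, and the output is a ranked
# candidate list. All feature names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    features: dict  # hypothetical behavioral signals, normalized to [0, 1]

# Hypothetical reference: an averaged feature vector of known operatives.
REFERENCE = {"contact_overlap": 0.9, "movement_match": 0.8, "device_churn": 0.7}

def similarity(p: Profile) -> float:
    """Crude weighted overlap with the reference profile (illustrative only)."""
    return sum(min(p.features.get(k, 0.0), v) for k, v in REFERENCE.items()) / sum(REFERENCE.values())

def rank(candidates: list[Profile], threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return (name, confidence) pairs at or above the threshold, highest first."""
    scored = [(p.name, round(similarity(p), 2)) for p in candidates]
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])
```

The structural failure mode follows directly from this shape: any civilian whose behavioral features happen to overlap with the reference profile clears the threshold, which is exactly the misidentification class the reporting describes.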
**What Lavender cannot do:**
- Determine whether a designated individual is actively engaged in military activity at the time of strike
- Verify whether the designation is current (targets may change phones, cease activity, or be misidentified)
- Account for civilians who pattern-match to Hamas operatives (family members, associates, individuals with similar lifestyles)
- Distinguish between a Hamas military commander and a low-ranking member who poses minimal threat
**The intelligence gap Lavender exploits:** In a dense urban environment with a sophisticated adversary that uses human couriers, code-words, and compartmentalized communications, it is genuinely difficult to identify Hamas operatives through traditional intelligence methods. Lavender provides a scalable approximation — but at a systematic error rate that, applied to 37,000 designations, produces thousands of misidentifications.
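The arithmetic behind that claim can be made explicit. A minimal sketch, assuming hypothetical per-designation error rates — the 37,000 figure is from the reporting, but the rates below are illustrative, not sourced:

```python
# Illustrative base-rate arithmetic: how a per-designation error rate
# scales across a large target list. The 37,000 figure is from the
# +972/Local Call reporting; the error rates are hypothetical.
designations = 37_000

for error_rate in (0.02, 0.05, 0.10):
    misidentified = round(designations * error_rate)
    # e.g. a 5% error rate yields ~1,850 misidentified individuals
    print(f"error rate {error_rate:.0%}: ~{misidentified:,} misidentified individuals")
```

Even at a low single-digit error rate, the system's scale converts a statistical residual into hundreds or thousands of wrongly designated people.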
---
## IHL Implications
International humanitarian law requires:
1. **Distinction:** Attacks must distinguish between combatants and civilians
2. **Proportionality:** Expected civilian harm must not be excessive in relation to anticipated military advantage
3. **Precaution:** Feasible precautions must be taken to minimize civilian harm
The Lavender/Where's Daddy operational doctrine raises serious questions on all three:
- **Distinction:** Algorithmic designation based on pattern-of-life analysis does not constitute individual verification of combatant status at the time of strike
- **Proportionality:** A blanket policy of accepting up to 20 civilian deaths per low-ranking target is not proportionality analysis — it is a fixed ratio applied to categories, not to specific strike assessments
- **Precaution:** Striking targets at home — deliberately — to maximize the probability of finding the target maximizes, not minimizes, civilian exposure
**Legal proceedings underway:** Multiple jurisdictions, including South Africa's ICJ genocide case, International Criminal Court proceedings, and civil suits in US and European courts, are examining these practices.
---
## The Palantir Dimension
Palantir Technologies has confirmed its Maven Smart System (MSS) is in use with the IDF. The MSS provides multi-domain intelligence fusion — synthesizing SIGINT, IMINT, HUMINT, and OSINT into a unified targeting picture — that feeds the human-decision layer above systems like Lavender and Gospel.
Palantir's role is the epistemological substrate: it determines what information is synthesized and weighted in the intelligence picture that Lavender then processes into target designations. The company's "Ontology" architecture creates structural lock-in: IDF targeting is now epistemologically dependent on Palantir's data model.
See: [[07 Current Investigations/Active Investigations/Palantir Intelligence Dossier]] for the corporate profile.
---
## The Project Maven Precedent
The IDF's algorithmic targeting architecture is not independent of US doctrine. Project Maven (2017) and the Maven Smart System (MSS, Palantir, 2020–present) established the foundational concept: AI compresses the analysis phase of the kill chain; humans approve outputs at machine tempo. The IDF has operationalized this concept more aggressively than any other military, in an active high-tempo urban conflict.
The implication: the Gaza conflict is the global test case for AI-assisted targeting. Every military currently developing algorithmic kill chain tools — the US, China, Russia, UK, France, India — is studying the IDF's operational experience and doctrinal conclusions.
See: [[03 Weapons & Systems/Cyber Capabilities & Tools/Project Maven and Kill Chain Compression]]
---
## Open Investigation Threads
- [ ] Obtain/review full +972 Magazine / Local Call source documentation (published April 2024)
- [ ] Track ICJ proceedings on South Africa genocide case for evidentiary disclosures
- [ ] Monitor US congressional inquiries into Palantir/IDF contracts
- [ ] Assess Israeli government response to +972 Magazine's additional reporting (IDF has disputed some specific claims)
- [ ] Track evolution of IDF targeting doctrine in Phase 3+ operations
- [ ] Monitor whether US military adopts similar civilian-damage ratios in CENTCOM doctrine
---
## Source Record
1. **+972 Magazine / Local Call — "Lavender: The AI machine directing Israel's bombing spree"** (April 2024) — Primary source; based on multiple IDF sources
2. **+972 Magazine / Local Call — "A mass assassination factory: Inside Israel's calculated bombing of Gaza"** (November 2023)
3. **IDF official statements** — Dispute specific characterizations; do not deny existence of AI targeting systems
4. **Human Rights Watch / Amnesty International** — Strike pattern analysis supporting documented civilian harm
5. **Bellingcat** — Geolocation analysis of specific strikes
6. **Academic literature** — Jenna Jordan on decapitation effectiveness; Paul Scharre on autonomous weapons
---
## Key Connections
- [[04 Current Crises/Active Conflicts/Gaza War]] — the operational context
- [[07 Current Investigations/Open Leads/Google, Microsoft_ Gaza Abuse Report_]] — corporate infrastructure supporting these operations
- [[07 Current Investigations/Active Investigations/Palantir Intelligence Dossier]] — the AI infrastructure layer
- [[03 Weapons & Systems/Cyber Capabilities & Tools/Project Maven and Kill Chain Compression]] — the US doctrinal predecessor
- [[01 Actors & Entities/14_Corporations_&_Tech/Lavender]] — the system profile
- [[02 Concepts & Tactics/Kill Chain]] — the targeting framework being compressed
- [[02 Concepts & Tactics/Algorithmic Warfare]] — the doctrine being operationalized
- [[06 Authors & Thinkers/Contemporary Analysts/Jenna Jordan]] — decapitation doctrine critique: why this strategy may not achieve strategic objectives
- [[Publish/Quartz/The IDF's Kill Machine — How Israel Industrialised Targeting with AI]] — the published editorial based on this investigation