# Technological Complicity: An Assessment of Google and Microsoft Services in Relation to Alleged War Crimes and Human Rights Abuses by Israeli Forces in Gaza

## Executive Summary

This report provides an intelligence assessment of the role of technologies and services provided by Google and Microsoft in potentially enabling or facilitating alleged war crimes, crimes against humanity, and other severe human rights abuses committed by Israeli Forces in the context of the conflict in Gaza. Evidence indicates that both corporations supply advanced cloud computing infrastructure and artificial intelligence (AI) services to the Israeli government and its military, including the Ministry of Defence. Google's involvement, notably through "Project Nimbus" and direct contractual agreements with the Israeli Ministry of Defence, provides capabilities such as the Google Cloud Platform, secure data hosting within Israel, and advanced AI tools like Vertex AI and Gemini. Microsoft has confirmed the provision of Azure cloud services and Azure AI to the Israeli Ministry of Defence, with reports detailing its use for intelligence processing and support for AI-driven targeting systems.

These corporate technologies are assessed to be foundational to the operation of Israeli-developed AI targeting systems, including "The Gospel" (Habsora) for automated infrastructure target generation, and "Lavender" for identifying human targets. These AI systems are reportedly characterized by high-speed, high-volume target generation with minimal human oversight, contributing to what internal sources have termed a "mass assassination factory." The operational deployment of such systems is linked to the widespread and unprecedented scale of civilian casualties, destruction of civilian infrastructure, and other grave human rights violations documented in Gaza by numerous international organizations.

Corporate denials of harm, claims of adherence to internal ethics policies, and assertions of limited visibility into the end-use of their technologies are critically examined against evidence of their direct engagement with military clients, the nature of the technologies supplied, and internal company dynamics, including employee dissent and alleged suppression thereof. This analysis raises significant questions regarding corporate due diligence, responsibility, and potential complicity under international law. The report concludes by assessing the broader implications for the technology industry and international efforts to regulate the use of advanced technologies in warfare, offering strategic recommendations for international bodies, states, the corporations themselves, and civil society.

## Introduction

The ongoing conflict in Gaza has precipitated a humanitarian crisis of profound severity, marked by extensive civilian casualties, widespread destruction of infrastructure, and allegations of serious violations of international law. Amidst this, the role of advanced technologies, particularly artificial intelligence and cloud computing, in modern warfare has come under intense scrutiny. This report focuses on the involvement of two of the world's largest technology corporations, Google (Alphabet Inc.) and Microsoft Corporation, and the ways in which their services and technologies provided to Israeli state and military entities may be implicated in the commission of alleged war crimes, crimes against humanity, and other human rights abuses by Israeli Forces in Gaza.
The objective of this analysis is to meticulously examine the nature and extent of Google's and Microsoft's technological provisions to Israeli governmental and military bodies, including under contracts such as Project Nimbus and direct agreements with the Israeli Ministry of Defence. It will analyze how these technologies, particularly cloud infrastructure and AI capabilities, potentially underpin or directly enable Israeli-developed AI-driven targeting systems like "The Gospel" and "Lavender." Furthermore, the report will connect these technological deployments to documented human rights violations and alleged international crimes in Gaza, as reported by credible human rights organizations and investigative journalism.

The methodology employed involves the systematic analysis of open-source intelligence, including corporate statements and policies, disclosures from human rights organizations (such as Amnesty International, Human Rights Watch, and UN bodies), investigative reports from media outlets, and information regarding internal corporate dynamics and employee activism. The scope of this assessment is specifically narrowed to Google and Microsoft, their relevant technological offerings, the operational impact of Israeli AI targeting systems, the documented human rights situation in Gaza during the relevant period, and the ensuing questions of corporate responsibility and accountability under international legal frameworks. This report aims to provide a detailed and nuanced understanding of the complex nexus between advanced technology, corporate actors, and alleged atrocities in a contemporary armed conflict.

## 1. Google's Technological Entanglement with Israeli State and Military Entities

Google's engagement with Israeli state and military entities is multifaceted, primarily through the large-scale "Project Nimbus" and direct contractual relationships with the Israeli Ministry of Defence (MoD). These arrangements provide Israeli authorities with access to sophisticated cloud computing and artificial intelligence capabilities.

### 1.1. Project Nimbus: A Strategic Cloud Partnership

Project Nimbus is a $1.2 billion contract awarded in April 2021 to Google and Amazon Web Services (AWS) to provide comprehensive cloud computing solutions to the Israeli government, including its "defense establishment".1 Under this project, Google is tasked with establishing local cloud sites within Israel's borders, designed to keep information secure under Israeli security guidelines.1 The stated objectives of Project Nimbus are to furnish "the government, the defense establishment, and others with an all-encompassing cloud solution," aiming to improve governmental work processes, accelerate digital transformation, and enhance command, control, and cyber defense capabilities.2

A significant and controversial aspect of the Project Nimbus contract is a clause that reportedly forbids Google and Amazon from halting services due to boycott pressure, effectively locking the companies into service provision regardless of external pressures or escalating human rights concerns.1

### 1.2. Direct Contracts with the Israeli Ministry of Defence

Beyond the broader Nimbus framework, evidence indicates direct and deepening contractual ties between Google and the Israeli MoD for advanced AI and cloud services.
A TIME report from April 2024, based on a viewed company document, revealed that Google provides cloud computing services directly to the Israeli MoD and had negotiated to deepen this partnership during the war in Gaza.3 According to this document, the Israeli MoD has its own "landing zone" into Google Cloud, a secure entry point to Google-provided computing infrastructure, allowing the ministry to store and process data and access AI services.3 Furthermore, a draft contract dated March 27, 2024, showed the MoD seeking consulting assistance from Google to expand its Google Cloud access, specifically to enable "multiple units" to access automation technologies. This consulting service alone was valued at over $1 million.3

The direct linkage to Project Nimbus is evident in comments on the contract document by a Google employee, noting that signatures would be "completed offline as it's an Israel/Nimbus deal," and that the MoD received a 15% discount on consulting fees due to the "Nimbus framework".3 This contract marked the first public confirmation of the Israeli MoD as a direct Google Cloud customer, with the work described as "phase 2" of a wider project to build out the ministry's cloud architecture.3

### 1.3. Specific Google AI/Cloud Capabilities Provided

The services offered to Israeli entities through Google Cloud are extensive. Google Cloud Platform's AI tools can provide capabilities for facial detection, automated image categorization, object tracking, and sentiment analysis.1 Specific AI platforms have also been requested by or provided to the Israeli military. Shortly after October 2023, the Israeli MoD urgently sought to expand its usage of Google's Vertex AI, a platform that applies AI algorithms to a client's own data.5 Documents as recent as November 2024 also indicated a request by the Israeli military for the use of Google's Gemini AI technology.5
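For context on what a platform of this kind exposes to a customer, the following is a minimal, hypothetical sketch of how any Google Cloud customer queries a deployed Vertex AI model endpoint through the public Python SDK. The project ID, endpoint ID, and feature names are illustrative placeholders and do not describe any actual MoD deployment.

```python
# Generic illustration of querying a deployed Vertex AI model endpoint.
# All identifiers below are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(project="example-project", location="us-central1")

# A customer first trains a model on its own data and deploys it to an
# endpoint; records are then sent to that endpoint for scoring.
endpoint = aiplatform.Endpoint(
    "projects/example-project/locations/us-central1/endpoints/1234567890"
)

response = endpoint.predict(
    instances=[{"feature_a": 0.7, "feature_b": "category_x"}]
)
print(response.predictions)  # model outputs, e.g. scores or labels
```

The salient point for this assessment is architectural rather than code-level: once a customer deploys a model behind such an endpoint, both the data being scored and the downstream use of the outputs sit inside the customer's own environment, which is consistent with the limited monitoring ability Google employees describe in Section 1.4 below.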
### 1.4. Google's Official Stance vs. Reported Realities and Leaked Internal Assessments

Google has consistently maintained that its work for the Israeli government, including Project Nimbus, is primarily for civilian workloads such as finance, healthcare, transportation, and education, and is "not directed at highly sensitive or classified military workloads relevant to weapons or intelligence services".1 However, this official stance is sharply contradicted by several factors. Firstly, the Israeli government's own description of Project Nimbus explicitly includes "the defense establishment" as a key beneficiary.1 Secondly, the direct contracts and consultations with the Israeli MoD for advanced AI and automation technologies point to a clear military application.3 Most damningly, a confidential internal Google report, obtained by The Intercept and reported in May 2025, revealed that Google knew before signing the Project Nimbus deal that it couldn't fully control or monitor how Israel and its military would use the technology.6 The report explicitly acknowledged the risk of Google's software being used to harm Palestinians and noted that the contract could even obligate Google to stonewall criminal investigations by other nations into Israel's use of its technology.

This internal assessment also highlighted that the deal would require unprecedented close collaboration with the Israeli security establishment, including joint drills and intelligence sharing.6 A third-party consultant hired by Google to vet the deal went so far as to recommend that the company withhold machine learning and AI tools from Israel precisely because of these risk factors.6 This indicates a significant disparity between Google's public assurances and its internal understanding of the risks and realities of its engagement with the Israeli military. The evolution of direct MoD contracts for sophisticated AI services, post-dating the initial Nimbus agreement, further suggests an expanding, not diminishing, military relationship. Google employees have also stated that the company possesses limited ability to monitor what customers, especially sovereign nations like Israel, are doing on its cloud infrastructure.3

The contractual obligation preventing Google from halting services under Project Nimbus due to boycott pressure 1, when combined with the company's alleged foreknowledge of its inability to conduct adequate due diligence and prevent misuse 6, creates a perilous situation. As allegations of war crimes and severe human rights abuses by Israeli forces in Gaza have escalated 7, Google remains contractually bound to provide its services. This establishes a disturbing precedent where commercial agreements with state actors in conflict zones can supersede a corporation's purported ethical responsibilities and human rights commitments, effectively locking them into partnerships that may facilitate ongoing abuses.

### 1.5. Employee Dissent (NoTechForApartheid) and Corporate Responses

Google's involvement with the Israeli military has sparked significant internal dissent. Employees have raised concerns that Project Nimbus would lead to further surveillance of Palestinians, unlawful data collection, and facilitation of Israel's illegal settlement expansion.1 This dissent coalesced into protest groups such as "Googlers against Genocide".1

The company's response to this activism has been severe. In March 2024, a Google Cloud software engineer was fired after a video of them shouting "I refuse to build technology that empowers genocide" at a company event went viral.1 In April 2024, dozens more employees were fired for participating in sit-ins at Google offices to protest Project Nimbus 1, with the total number of related staff cuts reaching 50.1 Ariel Koren, a former Google marketing manager, resigned in 2022, stating that Google "systematically silences Palestinian, Jewish, Arab and Muslim voices concerned about Google's complicity in violations of Palestinian human rights" and creates an "environment of fear".1 Adding to employee concerns, it was reported that Google has been matching employee donations to pro-Israeli charities, including Friends of the Israeli Defence Forces (FIDF), which supports Israeli soldiers.5

The pattern of employee dissent, subsequent firings, and allegations of suppressing internal voices critical of Israeli contracts at Google mirrors similar occurrences at Microsoft. This parallel response suggests a broader trend within the tech industry where lucrative government and military contracts in conflict zones are prioritized over internal ethical challenges.
Such actions risk creating a chilling effect on internal whistleblowing and accountability, weakening crucial internal oversight mechanisms that could otherwise prevent the harmful application of powerful technologies.

**Table 1: Google's Key Contracts and AI/Cloud Services for Israeli State/Military Entities**

|**Contract/Project Name**|**Key Services Provided**|**Reported Value/Scope**|**Stated Purpose (Official)**|**Key Concerns/Controversies**|
|---|---|---|---|---|
|**Project Nimbus**|Google Cloud Platform (GCP), local cloud sites in Israel, AI/ML tools (potential for facial detection, object tracking, etc.)|$1.2 billion (shared with Amazon)|"All-encompassing cloud solution for the government, the defense establishment, and others" 1|Use by "defense establishment," contractual inability to halt service due to boycott pressure, concerns over surveillance and human rights abuses, internal dissent 1|
|**MoD Direct Contract (Phase 1)**|Cloud computing services, data storage, data processing, AI services|Undisclosed (referenced as prior work to Phase 2)|Not explicitly stated, implied support for MoD operations|Direct military client, lack of transparency on specific applications 3|
|**MoD Direct Contract (Phase 2)**|Consulting for Google Cloud access expansion, enabling "multiple units" to access automation technologies, Vertex AI access|>$1 million (for consulting, March 2024 draft contract)|Expand MoD access to automation technologies via Google Cloud|Deepening military partnership during Gaza war, provision of advanced AI (Vertex AI) to military units, "Nimbus framework" discount indicates linkage 3|
|**MoD AI Tool Access**|Access to Google's Vertex AI, request for Google's Gemini AI technology|Not specified|Enhance MoD's AI capabilities|Provision of cutting-edge AI tools (Vertex, Gemini) to military during active conflict, employee warnings about losing out to competitors if access denied 5|

## 2. Microsoft's Provision of Technology to Israeli State and Military Entities

Microsoft Corporation has also confirmed its role as a significant technology provider to Israeli state and military entities, offering a suite of Azure cloud services and artificial intelligence capabilities.

### 2.1. Azure Cloud and AI Services

Microsoft has officially acknowledged providing software, professional services, Azure cloud services, and Azure AI services, including language translation capabilities, to the Israeli Ministry of Defence (IMOD).10 Reports suggest that during the war in Gaza, Microsoft supplied Israel's defense services with at least $10 million in computing and storage services.10

Investigative reports have shed further light on the application of these services. An investigation by The Guardian indicated that Microsoft's Azure cloud computing platform was utilized across Israel's air, ground, and naval forces, as well as by its intelligence department, for purposes including administrative tasks, combat support, and intelligence activities.10 An Associated Press (AP) investigation claimed that Microsoft's Azure technology is used by the Israeli military to "transcribe, translate, and process intelligence gathered through mass surveillance," which can then be cross-checked with Israel's in-house AI-enabled targeting systems.12
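To ground what "language translation capabilities" denotes in the Azure portfolio, the sketch below shows a generic call to the public Azure AI Translator REST API (version 3.0). The endpoint, key, and region values are hypothetical placeholders; the example illustrates the commodity nature of the capability, not any specific IMOD integration.

```python
# Generic illustration of the public Azure AI Translator REST API (v3.0).
# The key and region below are hypothetical placeholders.
import uuid

import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com"
KEY = "<translator-resource-key>"      # placeholder
REGION = "<translator-resource-region>"  # placeholder

def translate(text: str, to_lang: str = "en") -> str:
    """Translate a single string via the Translator v3 REST endpoint."""
    resp = requests.post(
        f"{ENDPOINT}/translate",
        params={"api-version": "3.0", "to": to_lang},
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Ocp-Apim-Subscription-Region": REGION,
            "Content-Type": "application/json",
            "X-ClientTraceId": str(uuid.uuid4()),
        },
        json=[{"text": text}],
    )
    resp.raise_for_status()
    return resp.json()[0]["translations"][0]["text"]
```

Wired into an automated ingestion pipeline, the same commodity API can process text at bulk scale, which is the kind of high-volume transcription-and-translation pattern the AP investigation describes.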
### 2.2. Specific Microsoft Technologies (including OpenAI's GPT-4) and Their Reported Applications

A particularly concerning development is the alleged provision by Microsoft of access to OpenAI's GPT-4 model to the Israeli military.10 This reportedly occurred after OpenAI, in which Microsoft is a major investor and partner, revised its policy in January 2024 to no longer explicitly ban work with military and intelligence customers.10 The AP investigation also cited claims that AI models from Microsoft and OpenAI were used by the Israeli military as part of a program to select bombing targets, with the usage of such AI models through Azure reportedly increasing nearly 200 times after October 7, 2023.14

The introduction of cutting-edge commercial AI models like GPT-4, allegedly facilitated by Microsoft, into an active and intense conflict such as the one in Gaza signifies a potentially hazardous escalation. If these powerful generative AI models are indeed being used for military purposes – for example, in processing intelligence, analyzing potential targets, or even in generating propaganda – this represents a rapid integration of advanced commercial AI into warfare scenarios. This integration appears to be outpacing the development of robust ethical guidelines, regulatory frameworks, and thorough assessments of the potential consequences for civilian populations and the overall conduct of hostilities.

### 2.3. Microsoft's Official Stance, Internal Reviews, Denials of Harm, and Acknowledged Lack of End-Use Visibility

In response to mounting concerns from its employees and the public, Microsoft released a statement on May 15, 2025, acknowledging its provision of AI and cloud services to the Israeli military.10 The company stated it had conducted an internal review and engaged an external firm for additional fact-finding. Based on these reviews, Microsoft asserted it had found "no evidence to date that Microsoft's Azure and AI technologies have been used to target or harm people in the conflict in Gaza".10 Microsoft maintains that its technology is bound by its terms of service and conditions of use, which "require customers to implement core responsible AI practices–such as human oversight and access controls—and prohibit the use of our cloud and AI services in any manner that inflicts harm".10 The company also acknowledged providing "limited emergency support" to the Israeli government in the weeks following October 7, 2023, to "help rescue hostages," stating this was done with significant oversight and that some requests were denied.10 Microsoft has also stated that it does not build or provide surveillance or combat applications to Israel's military.11

Crucially, however, Microsoft has admitted that it "does not have visibility into how customers use our software on their own servers or other devices".10 In a similar vein, the company noted it does not "have visibility to the IMOD's government cloud operations, which are supported through contracts with cloud providers other than Microsoft".12 This admission critically undermines its assertion of "no evidence of harm." If Microsoft lacks visibility into these key operational environments – where the Israeli military might deploy its own AI targeting systems (like Lavender or The Gospel) that require substantial computational resources, potentially hosted on on-premise servers running Microsoft software or Azure instances managed by third-party providers – then its claim of "no evidence of harm" is necessarily based on an incomplete picture.
Such a stance cannot definitively rule out the misuse of its technologies. This gap between assertion and visibility allows Microsoft to maintain its contractual relationships while potentially deflecting direct responsibility for harms that its technologies might enable, a posture consistent with plausible deniability.

### 2.4. Employee Activism (No Azure for Apartheid) and Corporate Retaliation

Similar to Google, Microsoft has faced considerable internal opposition to its contracts with the Israeli military. Employees have publicly protested these arrangements.10 The company has responded with disciplinary actions, including firing two employees in October 2024 for holding a vigil for Palestinian refugees 10, removing five employees from a meeting with CEO Satya Nadella in February 2025 for protesting the contracts 10, and firing software engineer Joe Lopez in May 2025 after he protested at the company's Build developer conference.15

An advocacy group named "No Azure for Apartheid," comprising current and former Microsoft employees, has called for Microsoft to terminate all Azure contracts and partnerships with the Israeli military and government.12 This group, and others like the Council on American-Islamic Relations, have also accused Microsoft of censoring internal employee communications by blocking emails containing keywords such as "Palestine," "Gaza," "apartheid," or "genocide".19 Microsoft's reported response has been that it has taken measures to reduce "unsolicited emails" sent to large numbers of employees, without specifically addressing the keyword censorship allegation.20

Microsoft's public commitment to human rights and "responsible AI practices," as detailed in its various policy statements 10, appears to be inconsistently applied or deemed insufficient when confronted with lucrative military contracts in high-risk environments like the current Gaza conflict. The actions taken against employees who voice concerns about these contracts and the potential for complicity in human rights abuses further suggest that commercial interests may, in practice, supersede stated ethical commitments and the functionality of internal accountability mechanisms. This raises serious questions about the operational effectiveness and sincerity of Microsoft's human rights framework when significant business interests are at stake.

**Table 2: Microsoft's Key Contracts and AI/Cloud Services for Israeli State/Military Entities**

|**Service/Technology Type**|**Recipient Entity**|**Reported Applications**|**Microsoft's Stated Position/Denials**|**Key Concerns/Controversies**|
|---|---|---|---|---|
|**Azure Cloud Services**|Israeli Ministry of Defence (IMOD), Israeli Government|Data storage, computing services, support for air/ground/naval forces, intelligence department administrative and operational activities 10|"Standard commercial relationship" 12; "No evidence of harm" 10; "Does not have visibility into customer use on own servers" 10|Use in active conflict with high civilian casualties; alleged use for mass surveillance data processing for targeting 12; lack of end-use visibility; employee protests and firings 10|
|**Azure AI Services (incl. language translation)**|Israeli Ministry of Defence (IMOD)|Language translation, intelligence processing, potential support for target selection 10|"No evidence of harm"; "Bound by Terms of Service" requiring responsible AI and prohibiting harm 10|Potential integration with Israeli AI targeting systems (e.g., Lavender, The Gospel); concerns about automation bias and reduced human oversight in lethal decision-making 12|
|**OpenAI GPT-4 Access (allegedly via Microsoft)**|Israeli Military|Not explicitly detailed by Microsoft; potentially for intelligence analysis, data processing, target analysis support 10|Microsoft has not directly addressed GPT-4 provision in this context in available statements. OpenAI policy changed Jan 2024 10|Introduction of advanced commercial generative AI into active conflict zone; ethical concerns about untested military applications of powerful AI models; potential for misuse in information operations or flawed analysis 10|
|**Software & Professional Services**|Israeli Ministry of Defence (IMOD)|General software support, consulting services 10|Part of "standard commercial relationship" 12|Broad categories with little public detail on specific military applications; potential to support overall military IT infrastructure involved in the conflict 10|
|**"Emergency Support" (post-Oct 7)**|Israeli Government|"Help rescue hostages" 10|"Limited basis," "significant oversight," "some requests approved, others denied," "honoring privacy and rights of civilians in Gaza" 10|Lack of transparency on specific technologies used and nature of support; potential for dual-use applications beyond stated humanitarian goals in a highly charged environment 10|

## 3. Israeli AI-Driven Targeting Systems: Architecture and Operational Impact

The Israeli Defence Forces (IDF) have reportedly integrated several sophisticated artificial intelligence systems into their targeting processes, particularly in the context of operations in Gaza. These systems, often developed by elite intelligence units like Unit 8200, are designed to rapidly process vast amounts of data and generate targets at an unprecedented scale. Key among these are "The Gospel" (Habsora), "Lavender," "Where's Daddy?," and the AI chatbot "Genie."

### 3.1. "The Gospel" (Habsora): Automated Generation of Infrastructure Targets

"The Gospel," also known as Habsora, is an AI system employed by the IDF to generate recommendations for bombing targets, with a focus on buildings and infrastructure suspected of military use.17 This system processes enormous quantities of data, including surveillance imagery and signals intelligence (SIGINT), to identify potential targets.23 Its operational use has dramatically accelerated the pace of target generation, with sources describing the output as enabling a "mass assassination factory".17 Former IDF Chief of Staff Aviv Kochavi noted that during Israel's 2021 operations, Habsora could generate 100 targets per day, a stark increase from the approximately 50 targets identified per year previously.17

Human review of targets proposed by "The Gospel" is reportedly minimal. Sources indicate that analysts may spend very little time scrutinizing each AI-generated target due to the sheer volume, effectively acting as a "rubber stamp".23 This system facilitates large-scale bombing campaigns, including strikes on private residences.
Disturbingly, the anticipated "collateral damage"—the number of civilians likely to be killed in such strikes—is reportedly calculated in advance and included in the target file.26

### 3.2. "Lavender": AI-Powered Identification of Human Targets, Error Rates, and Diminished Human Oversight

"Lavender" is an AI-powered database that uses machine learning to assign numerical scores to residents of Gaza based on their suspected affiliation with armed groups, effectively creating "kill lists".1 The system analyzes various data inputs, including social connections, membership in specific chat groups, frequency of phone or address changes, and other surveillance data to build profiles and assess individuals.16

A critical concern with "Lavender" is its reported error rate. Investigations by _+972 Magazine_ and Local Call, shared with The Guardian, suggest that the system has approximately a 10% error rate in correctly identifying an individual's affiliation with Hamas.32 Despite this significant margin of error, the IDF allegedly gave sweeping approval for officers to adopt Lavender's kill lists, treating the AI's outputs "as if it were a human decision".16

Human oversight in the "Lavender" process is described as alarmingly superficial. In many cases, it reportedly amounted to a mere "20-second" check by personnel to confirm that the AI-selected target was male, as female targets were often assumed to be errors.16 This cursory review is insufficient to catch the nuanced errors an AI system might make, especially one dealing with life-and-death decisions.

The impact of "Lavender" has been extensive. During the initial weeks of the war in Gaza, the system reportedly identified approximately 37,000 Palestinians as suspected militants and thus potential airstrike targets.27 Its outputs have been used to justify bombing individuals in their homes, often resulting in the deaths of entire families who were present at the time of the strike.27
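A first-order calculation using only the figures reported above makes the scale of this risk concrete; it assumes the reported ~10% error rate applies uniformly across the ~37,000 flagged individuals:

$$
37{,}000\ \text{flagged individuals} \times 0.10\ \text{reported error rate} \approx 3{,}700\ \text{potential misidentifications}
$$

Even before accounting for family members present at strike locations, the reported inputs thus imply that thousands of people could be marked for attack on the basis of an erroneous affiliation score alone.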
"Automation bias" – the tendency for humans to over-trust and uncritically accept information provided by automated systems – is a significant risk, potentially leading to reduced critical scrutiny of AI-generated targeting recommendations.28 The term "mass assassination factory," used by sources familiar with these systems, aptly describes the high-volume, high-speed targeting process they enable, where the emphasis appears to be on quantity rather than the quality or precision of each individual targeting decision.16 The development of many of these sophisticated AI systems is attributed to Unit 8200, the IDF's elite signals intelligence and cyber warfare unit.11 There are also reports that military reservists with expertise gained at major tech companies, including Google, Meta, and Microsoft, have assisted Unit 8200 in developing AI tools, such as a ChatGPT-like model using intercepted Palestinian data.11 The combined operation of AI systems like "Lavender," with its acknowledged error rate and perfunctory human checks often limited to gender verification, "The Gospel," which facilitates rapid infrastructure targeting including residential buildings with pre-calculated "collateral damage" scores, and "Where's Daddy?," which explicitly targets individuals in their homes, institutionalizes a targeting methodology that systematically deprioritizes meticulous individual verification and the principle of distinction. This operational design inherently leads to a high likelihood of disproportionate civilian harm. Decisions appear to be increasingly based on algorithmic probabilities rather than confirmed, real-time intelligence for each individual strike, representing a lower threshold for the application of lethal force. When an AI system makes the primary "decision" and human personnel merely "rubber stamp" it, the chain of accountability becomes dangerously blurred. It becomes exceedingly difficult to assign criminal responsibility to specific individuals for unlawful strikes, as the machine's "reasoning" is often opaque and the human role is significantly diminished. Furthermore, the development and deployment of these increasingly sophisticated AI targeting systems by entities such as Unit 8200, potentially leveraging expertise from tech company reservists and powered by commercial cloud infrastructure, creates a dangerous feedback loop. Perceived military "successes," such as the rapid generation of targets, likely fuel further investment in and reliance on AI. This risks normalizing lower standards for the application of lethal force and further marginalizing human judgment and ethical considerations under the guise of "algorithmic efficiency" and "precision," despite mounting evidence of errors and extensive civilian harm. Systems like "Lavender," which assign suspicion scores and target individuals based on behavioral traits and associations rather than direct hostile acts, represent a disturbing extension of predictive methodologies, akin to predictive policing, into the realm of armed conflict. This approach fundamentally undermines the presumption of civilian status and due process principles, which are cornerstones of international humanitarian law. 
Targeting individuals based on an algorithm's probabilistic assessment of their affiliation or potential future threat, rather than confirmed, direct participation in hostilities at the time of targeting, constitutes a grave departure from established legal principles that demand positive identification and adherence to the laws of war.

**Table 3: Overview of Israeli AI-Powered Targeting Systems and Reported Capabilities**

|**System Name**|**Primary Function**|**Key Data Inputs**|**Reported Human Oversight Level**|**Key Operational Concerns/Impacts**|
|---|---|---|---|---|
|**The Gospel (Habsora)**|Automated generation of infrastructure targets (buildings, structures)|Surveillance data (imagery, SIGINT), intelligence on movements/behaviors, structural analysis 23|Minimal; described as a "rubber stamp"; analysts spend little time per target due to volume 23|"Mass assassination factory" 25; enables large-scale bombing of private residences; collateral damage calculated in advance; rapid target generation (100/day) 25; developed by Unit 8200 27|
|**Lavender**|AI-powered identification and scoring of human targets ("kill lists")|Social connections, chat group memberships, phone/address changes, broad surveillance data 16|Minimal; often a "20-second check" to confirm target is male; outputs treated "as if it were a human decision" 16|Reported 10% error rate 32; identified ~37,000 Palestinians as targets 27; used to justify bombing individuals in their homes, often with families; automation bias; potential for systemic bias in data/algorithms 16|
|**Where's Daddy?**|Tracks targeted individuals and facilitates bombing them when at their family homes|Real-time tracking data, association of individuals with family residences 27|Human executes strike based on AI system alert and location data|Specifically targets individuals in residential settings, inevitably endangering families and non-combatants; increases civilian casualties; linked to "Lavender" targets 27|
|**Genie**|Military-grade AI chatbot for battlefield decision support|Real-time operational cloud data from across Israeli army systems 25|Interactive querying by commanders; army claims it doesn't autonomously make decisions but assists 25|Assists commanders in target selection and operational insights; used in live operations despite "trial phase" status; potential to influence life-and-death decisions with limited transparency of AI reasoning; developed by Matzpen (Unit 8200 sub-unit) 25|

## 4. The Nexus: Connecting Corporate Technologies to Alleged Atrocities in Gaza

The provision of advanced cloud computing and artificial intelligence technologies by Google and Microsoft to Israeli state and military entities occurs within a context of widespread and severe human rights violations and alleged international crimes in Gaza. Understanding the potential connections between these technologies and the documented abuses is critical.

### 4.1. Overview of Documented Human Rights Violations and Potential International Crimes in Gaza

Numerous international human rights organizations and UN bodies have documented a pattern of severe abuses in Gaza, particularly since October 7, 2023. These include:

- **Siege and Starvation:** Amnesty International and Human Rights Watch report that Israel has imposed a devastating siege on Gaza, blocking or severely restricting critical supplies such as food, water, medicine, and fuel.
This is described as the deliberate imposition of conditions of life calculated to bring about the physical destruction of Palestinians in Gaza, potentially constituting a genocidal act and the war crime of using starvation of civilians as a method of warfare.7
- **Disproportionate Civilian Casualties and Destruction of Civilian Infrastructure:** Reports indicate tens of thousands of Palestinian civilians killed, a significant percentage of whom are women and children, and vast numbers wounded.7 Human Rights Watch has described the destruction of homes, schools, hospitals, and other essential civilian infrastructure as occurring on an "unprecedented scale".8
- **Forced Displacement:** Israeli military operations and evacuation orders have led to the mass forced displacement of the majority of Gaza's population, which Human Rights Watch has characterized as a crime against humanity.7
- **Use of Human Shields:** The Associated Press has reported accounts from Palestinians and Israeli soldiers suggesting that Israeli troops have systematically forced Palestinians to act as human shields, sending them into potentially dangerous locations to check for explosives or militants.36
- **Sexual and Gender-Based Violence:** A UN OHCHR report details the systematic destruction of sexual and reproductive healthcare facilities in Gaza and alleges the use of sexual and gender-based violence as a strategy of war by Israeli forces, potentially amounting to genocidal acts.9
- **Attacks on Healthcare:** Healthcare facilities, personnel, and transport have been repeatedly attacked, and access to essential medical supplies has been severely curtailed, leading to the collapse of Gaza's healthcare system.7

**Table 4: Summary of Alleged International Crimes and Major Human Rights Abuses in Gaza (Relevant Period)**

|**Type of Abuse/Alleged Crime**|**Key Reporting Entities**|**Summary of Key Findings/Allegations from Reports**|**Potential Link to AI/Cloud Technologies**|
|---|---|---|---|
|**Unlawful Killings/Attacks on Civilians & Civilian Objects**|Amnesty International, Human Rights Watch (HRW), UN OHCHR, +972 Magazine/Local Call, AP|Tens of thousands of civilians killed, many women/children; widespread destruction of homes, schools, hospitals; "unprecedented scale" of destruction 7|AI targeting systems (Gospel, Lavender) generating targets at high speed with minimal oversight, leading to strikes on residential areas and potentially misidentified/disproportionate attacks. Cloud infrastructure enables these AI systems. 26|
|**Starvation as a Method of Warfare / Deliberate Imposition of Deadly Conditions (Siege)**|Amnesty International, HRW, UN OHCHR|Deliberate blocking of food, water, medicine, fuel; imposition of conditions calculated for physical destruction; "genocidal act" 7|Cloud services for logistics and data management could support the administration and enforcement of the siege. Surveillance tech (potentially cloud-backed) could monitor population movement and access to resources.|
|**Forced Displacement**|HRW, Amnesty International, UN OHCHR|Mass forcible displacement of Palestinians from homes; crime against humanity 7|Mass surveillance capabilities, potentially processed and stored on cloud platforms, can facilitate tracking and control of populations, aiding displacement efforts. AI analysis of population data.|
|**Use of Human Shields by Israeli Forces**|Associated Press, Israeli soldier testimonies to Breaking the Silence|Systematic forcing of Palestinians by Israeli troops to enter dangerous areas 36|Command and control systems, potentially utilizing cloud infrastructure for communication and data sharing, could disseminate orders or enable practices related to the use of human shields.|
|**Sexual and Gender-Based Violence (SGBV)**|UN OHCHR|Systematic destruction of sexual/reproductive healthcare; SGBV as a strategy of war; genocidal acts 9|Destruction of healthcare infrastructure can be targeted via AI systems. Data systems (cloud-hosted) for managing detention facilities where SGBV might occur.|
|**Attacks on Healthcare**|Amnesty International, UN OHCHR, WHO|Attacks on hospitals, clinics, ambulances; denial of medical supplies; healthcare system collapse 7|AI targeting systems may misidentify or disproportionately target medical facilities or surrounding areas. Cloud infrastructure supports the overall military campaign that impacts healthcare.|

### 4.2. Assessment of How Google and Microsoft Technologies Potentially Enable, Facilitate, or Exacerbate These Abuses

The advanced technologies provided by Google and Microsoft are not merely passive tools; they possess capabilities that can directly or indirectly contribute to the commission of the alleged atrocities in Gaza:

- **Cloud Infrastructure for AI Targeting Systems:** The massive computational power, data storage capacity, and AI service frameworks offered by Google Cloud (under Project Nimbus and direct MoD contracts) and Microsoft Azure are fundamental prerequisites for the Israeli military's AI targeting systems like "The Gospel" and "Lavender" to operate at the scale and speed reported.1 Without such robust and scalable cloud infrastructure, the processing of vast datasets necessary for generating thousands of targets would be severely constrained, if not impossible. This technological backbone is thus a critical enabler of the AI-driven targeting campaigns.
- **AI Tools for Intelligence Analysis and Target Nomination:** Google's AI platforms like Vertex AI and Gemini, alongside Microsoft's Azure AI services (which reportedly include access to OpenAI's GPT-4), can be employed to analyze mass surveillance data, transcribe and translate communications, identify patterns of life, and nominate potential targets.3 These analytical capabilities feed directly into systems like "Lavender" and "The Gospel." The speed and scale afforded by these AI tools contribute to the "mass assassination factory" dynamic, potentially overwhelming human oversight mechanisms and thereby increasing the likelihood of targeting errors, disproportionate attacks, and civilian casualties.16
- **Facilitating Mass Surveillance:** The cloud platforms provided by Google and Microsoft offer the capacity to store and process enormous volumes of surveillance data collected on the Palestinian population. This data, which includes communications intercepts, visual surveillance, and other forms of intelligence, forms the primary input for the AI targeting systems and can also support other repressive measures that contribute to human rights violations.1
- **Enabling "Where's Daddy?" Home Targeting:** Real-time data processing, location tracking, and rapid information dissemination capabilities, all of which can be significantly enhanced by cloud infrastructure and AI analytics, are essential for the functioning of systems like "Where's Daddy?" This system, which specifically targets individuals when they are in their homes, directly leads to strikes on residential buildings and the deaths of civilians, including family members.27
- **Supporting Command and Control (e.g., "Genie"):** AI-powered chatbots like "Genie," which operate on military cloud infrastructure (potentially involving Google or Microsoft services), assist IDF commanders in making battlefield decisions, including those related to target selection.25 This directly implicates the underlying technology providers in the operational outcomes that result from these AI-assisted decisions.
- **Lack of Visibility as an Enabler:** The claims by both Google (through leaked documents suggesting limited control) and Microsoft (through public admissions of "no visibility" into end-use on client servers or IMOD cloud operations with other providers) are particularly pertinent.6 In a high-risk conflict environment characterized by extensive allegations of international law violations, this lack of oversight or control means the companies cannot guarantee their powerful technologies are _not_ being used in the AI systems implicated in abuses or to directly commit such abuses. This effective abdication of oversight, when providing potent dual-use technologies, can be interpreted as a form of enablement by omission.

The combination of scalable cloud infrastructure from Google and Microsoft with AI-driven targeting algorithms developed and deployed by the Israeli military creates a potent synergy. This technological fusion amplifies the speed, scale, and potential indiscriminateness of military operations far beyond what traditional military capabilities would allow. This amplification is a key factor enabling the "unprecedented" levels of destruction and civilian casualties documented in Gaza. The removal of the "human bottleneck" in targeting, as described by proponents of these AI systems 27, directly correlates with an increased capacity for strikes and, in the context of Gaza, a tragically increased toll on civilian lives and infrastructure.

Moreover, the use of sophisticated AI, underpinned by corporate cloud services, in targeting operations may foster an "algorithmic alibi." Military actors could attempt to deflect responsibility for civilian harm by attributing lethal decisions to opaque algorithms, while the technology companies simultaneously deny visibility or control over the specific end-use of their products. This dual opacity severely complicates efforts to establish accountability for potential war crimes. The AI systems themselves, being non-sentient tools, cannot be held legally responsible 37, leaving a dangerous accountability vacuum.

Beyond direct targeting, the provision of comprehensive cloud solutions, such as those envisioned under Project Nimbus for the entire Israeli "defense establishment" 1, can indirectly support a wide spectrum of activities that contribute to human rights abuses.
These could include data management systems for enforcing the siege of Gaza, surveillance networks that enable forced displacement and control of the population, or administrative systems for managing detention facilities where ill-treatment and torture have been reported.8 An exclusive focus on AI in kinetic strikes may obscure these broader, yet crucial, enabling roles of general-purpose cloud infrastructure provided to the entire military and security apparatus.

## 5. Corporate Responsibility, Due Diligence, and Complicity under International Law

The provision of advanced technologies by Google and Microsoft to Israeli military and governmental entities, amidst ongoing conflict and widespread allegations of international crimes in Gaza, raises profound questions about corporate responsibility, the adequacy of human rights due diligence, and the potential for legal complicity.

### 5.1. Examination of Google's and Microsoft's Stated Human Rights Policies Against Their Actions

Both Google and Microsoft have publicly articulated commitments to respecting human rights. Google's Human Rights Policy states its commitment to the Universal Declaration of Human Rights, the UN Guiding Principles on Business and Human Rights (UNGPs), and the Global Network Initiative (GNI) Principles. The company highlights its Human Rights Program, which is tasked with conducting due diligence, including human rights impact assessments.38 Similarly, Microsoft's Global Human Rights Statement and its Supply Chain Human Rights Policy affirm its commitment to the UNGPs and the OECD Due Diligence Guidance for Responsible Business Conduct. Microsoft details processes for risk assessment (which have identified priority risks like forced labor and health/safety), mitigation measures, and grievance mechanisms.21

However, the continued provision of powerful cloud and AI services to a military engaged in a conflict characterized by extensive allegations of war crimes and crimes against humanity appears to be in tension with these stated commitments. Specifically:

- Google's alleged foreknowledge of the risks associated with Project Nimbus and its reported inability to prevent misuse of its technology by the Israeli military, despite recommendations from its own third-party consultant to withhold AI/ML tools 6, directly challenges its commitment to effective due diligence.
- Microsoft's admission of "no visibility" into how the IMOD uses its software on its own servers or in government cloud operations supported by other providers 10 represents a fundamental gap in due diligence. This is particularly concerning when providing advanced AI tools with clear potential for military application in a high-risk conflict zone.
- The lack of publicly disclosed, specific, and heightened due diligence measures that are commensurate with the extreme risks of providing these technologies in the current context further calls into question the operationalization of their human rights policies.

The discrepancy between the elaborate human rights policies published by these corporations and their actions concerning Israeli military contracts suggests that these policies may, in practice, function more as instruments for managing reputational risk rather than as effective, binding constraints on business decisions, especially in high-stakes, high-profit scenarios. The lack of transparency regarding the specific due diligence undertaken for these contentious contracts reinforces this concern.
If these policies were being substantively applied, one would expect either a refusal to provide such potent technologies in such a high-risk context or, at a minimum, the imposition of exceptionally stringent, verifiable safeguards and transparency measures, none of which have been publicly evidenced.

### 5.2. Assessment of Due Diligence Failures in High-Risk Conflict Environments

The UN Guiding Principles on Business and Human Rights establish a clear responsibility for companies to prevent or mitigate adverse human rights impacts that are directly linked to their operations, products, or services through their business relationships, even if they themselves have not directly contributed to those impacts. This requires robust human rights due diligence. In the case of Google and Microsoft, several factors point to potential due diligence failures:

- **Foreseeable Risk:** The risk that advanced cloud and AI technologies provided to a military actively engaged in a conflict like the one in Gaza could be used in ways that violate international humanitarian law (IHL) and human rights law is eminently foreseeable. This is not a generic "dual-use" problem concerning neutral tools; it involves providing powerful capabilities to a specific actor in a specific, highly problematic context where AI-driven targeting systems with known flaws are reportedly in operation.
- **Insufficient Mitigation:** Standard contractual terms of service, cited by the companies as safeguards, are insufficient to mitigate the severe risks associated with military use of AI in armed conflict. Meaningful due diligence in such contexts demands more proactive and verifiable measures to prevent harm.
- **"No Visibility" as a Due Diligence Deficit:** Claiming "no visibility" into end-use, as Microsoft does 10, is not a defense but rather an admission of a due diligence deficit. Under the UNGPs, if a company cannot prevent or mitigate risks associated with a business relationship, it may need to consider ending that relationship.

While cloud and AI technologies are inherently dual-use, the argument that companies are merely providing neutral tools becomes untenable when these technologies are supplied to a military client actively involved in a conflict marked by extensive human rights allegations, and where there is knowledge or strong suspicion of specific AI targeting systems being developed or used. The specific context elevates the due diligence obligation beyond standard commercial practice. It necessitates a heightened scrutiny of potential misuse and a more forceful approach to risk mitigation, potentially including the refusal of contracts if risks cannot be adequately managed.

### 5.3. Discussion of Potential Legal Liability for Corporate Complicity in War Crimes and Crimes Against Humanity

The provision of technology that facilitates the commission of international crimes can lead to legal liability for corporate complicity. International criminal law, including precedents from ad hoc tribunals and the Rome Statute of the International Criminal Court, recognizes various modes of liability, such as aiding and abetting.18 Key elements for establishing such complicity generally include:

- **Actus Reus (Conduct):** The company's actions (e.g., providing technology, services, or expertise) made a substantial contribution to the commission of the crime by the principal perpetrator (e.g., the military forces).
The argument here is that sophisticated AI targeting systems, implicated in alleged war crimes, rely on the kind of cloud infrastructure and AI tools provided by Google and Microsoft.
- **Mens Rea (Mental Element):** The company knew, or should have known (constructive knowledge), that its contributions would assist, or were assisting, in the commission of the crimes. Leaked documents suggesting Google's foreknowledge of risks with Project Nimbus 6 are highly relevant here. For Microsoft, while it claims "no visibility," the general knowledge of the conflict's nature, the capabilities of the technologies provided, and public reports on Israeli AI targeting systems could be argued to constitute sufficient basis for constructive knowledge.

Experts have increasingly warned about the potential complicity of tech firms in war crimes due to their provision of AI and cloud services to militaries in conflict zones.18 The "algorithmic alibi" – where responsibility for unlawful actions is diffused between the human operator, the opaque AI, and the technology provider – further complicates accountability but does not negate the potential for corporate liability if the elements of complicity are met.

The internal dissent within Google and Microsoft, the public condemnations, and the emerging legal analyses of potential corporate complicity signal a shift in expectations. Tech companies are increasingly viewed not as neutral vendors but as powerful actors with significant human rights responsibilities. This evolving landscape suggests that traditional corporate defenses, such as claiming to merely provide tools or asserting lack of visibility into end-use, are becoming less tenable. Companies may face mounting legal, reputational, and financial risks if they fail to demonstrate proactive and verifiable protection of human rights in their dealings with all clients, particularly state actors with the capacity to inflict widespread harm.

## Conclusions

The evidence assessed in this report indicates that Google and Microsoft provide critical artificial intelligence and cloud computing infrastructure to the Israeli government and its military. These technologies are foundational to, or highly likely to underpin, the operation of Israeli-developed AI-driven targeting systems, such as "The Gospel" and "Lavender." These systems, characterized by high-speed, high-volume target generation with reportedly minimal and flawed human oversight, are implicated in military operations in Gaza that have resulted in extensive civilian casualties, the destruction of civilian infrastructure, and a humanitarian crisis of catastrophic proportions. These outcomes have led to widespread allegations of war crimes and crimes against humanity.

Corporate assertions of "no evidence of harm," adherence to internal ethics policies, and claims of limited visibility into the end-use of their technologies are insufficient to counter the weight of evidence detailing their deep entanglement with the Israeli military apparatus during this period. The documented internal awareness of risks, coupled with the suppression of employee dissent, points to a disturbing prioritization of commercial and strategic interests over fundamental human rights considerations.

### Broader Implications

The situation detailed in this report carries profound implications that extend far beyond the specific context of Gaza:
1. **Transparency and Accountability for the Tech Industry:** There is an urgent need for significantly greater transparency from technology companies regarding their contracts and collaborations with state military, security, and intelligence agencies, particularly those operating in or concerning conflict zones. Accountability mechanisms, both internal and external, must be strengthened to ensure that human rights are not subordinated to profit motives.
2. **Inadequacy of Self-Regulation:** The current model of corporate self-regulation, reliant on internal ethics policies and voluntary commitments, appears inadequate to prevent the misuse of powerful technologies in warfare. This suggests a need for binding national and international standards and regulations governing the provision of AI, cloud computing, and surveillance technologies to state security apparatuses, especially those with problematic human rights records.
3. **Ethical Challenges of AI in Warfare:** The deployment of AI in targeting and lethal decision-making processes, as seen in Gaza, highlights acute ethical challenges. These include the risks of automation bias, the diminution of meaningful human control over the use of force, the opacity of algorithmic decision-making (the "black box" effect), and the diffusion of accountability for unlawful actions.
4. **Weaponization of Commercial Technology:** The extensive use of commercially developed and provided AI and cloud technologies by military forces blurs the line between civilian technology companies and traditional defense contractors. This "weaponization" of commercial technology requires a re-evaluation of the responsibilities and obligations of tech companies when their products become integral to the conduct of war.

The deep entanglement of major Western technology companies like Google and Microsoft with military operations under intense scrutiny for alleged severe violations of international law creates significant strategic vulnerabilities. For the companies themselves, these include legal jeopardy, severe reputational damage, loss of investor confidence, and difficulties in employee recruitment and retention. For their host governments, particularly the United States, there is a risk of diplomatic blowback and an erosion of international standing if they are perceived as enabling, or failing to regulate, corporate activities that contribute to atrocities. This creates a complex geopolitical dynamic in which the actions of private corporations have direct and substantial foreign policy implications.

Crucially, the current situation in Gaza, involving the deployment of advanced AI and cloud technologies from major global corporations in a highly controversial armed conflict, serves as a critical test case. The manner in which these questions of corporate responsibility, technological enablement, and accountability are addressed, or left unaddressed, by international bodies, states, the companies themselves, and civil society will set enduring precedents for the role of technology and technology companies in future armed conflicts worldwide. A failure to establish clear lines of accountability and to restrict the harmful uses of these powerful technologies could embolden both tech companies and state militaries to pursue technologically enabled warfare with fewer ethical and legal constraints.
Conversely, robust accountability measures and a commitment to upholding human rights could foster greater corporate responsibility and a more cautious approach to the militarization of advanced technology.

## References

1. Project Nimbus - Wikipedia, accessed May 27, 2025, [https://en.wikipedia.org/wiki/Project_Nimbus](https://en.wikipedia.org/wiki/Project_Nimbus)
2. About Nimbus Cloud Me In - Gov.il, accessed May 27, 2025, [https://www.gov.il/en/pages/aboutnimbus](https://www.gov.il/en/pages/aboutnimbus)
3. Report reveals Google's contract with Israel Defense Ministry amid Israel-OPT conflict, accessed May 27, 2025, [https://www.business-humanrights.org/my/%E1%80%9E%E1%80%90%E1%80%84/report-reveals-googles-contract-with-israel-defense-ministry-amid-israel-opt-conflict/](https://www.business-humanrights.org/my/%E1%80%9E%E1%80%90%E1%80%84/report-reveals-googles-contract-with-israel-defense-ministry-amid-israel-opt-conflict/)
4. Exclusive: Google Workers Revolt Over $1.2 Billion Contract With Israel - Time, accessed May 27, 2025, [https://time.com/6964364/exclusive-no-tech-for-apartheid-google-workers-protest-project-nimbus-1-2-billion-contract-with-israel/](https://time.com/6964364/exclusive-no-tech-for-apartheid-google-workers-protest-project-nimbus-1-2-billion-contract-with-israel/)
5. Google facilitated AI tools for Israeli military during war on Gaza, says report, accessed May 27, 2025, [https://www.middleeasteye.net/news/google-facilitated-ai-tools-israel-military-war-gaza-report](https://www.middleeasteye.net/news/google-facilitated-ai-tools-israel-military-war-gaza-report)
6. Leaked documents reveal Google's alleged awareness of human ..., accessed May 27, 2025, [https://www.business-humanrights.org/en/latest-news/leaked-documents-reveal-googles-alleged-awareness-of-human-rights-risks-in-israels-project-nimbus-deal/](https://www.business-humanrights.org/en/latest-news/leaked-documents-reveal-googles-alleged-awareness-of-human-rights-risks-in-israels-project-nimbus-deal/)
7. Two months of cruel siege are further evidence of Israel's genocidal ..., accessed May 27, 2025, [https://www.amnesty.org/en/latest/news/2025/05/israel-opt-two-months-of-cruel-and-inhumane-siege-are-further-evidence-of-israels-genocidal-intent-in-gaza/](https://www.amnesty.org/en/latest/news/2025/05/israel-opt-two-months-of-cruel-and-inhumane-siege-are-further-evidence-of-israels-genocidal-intent-in-gaza/)
8. Israel/Palestine: An Abyss of Human Suffering in Gaza | Human ..., accessed May 27, 2025, [https://www.hrw.org/news/2025/01/16/israel/palestine-abyss-human-suffering-gaza](https://www.hrw.org/news/2025/01/16/israel/palestine-abyss-human-suffering-gaza)
9. "More than a human can bear": Israel's systematic use of sexual ..., accessed May 27, 2025, [https://www.ohchr.org/en/press-releases/2025/03/more-human-can-bear-israels-systematic-use-sexual-reproductive-and-other](https://www.ohchr.org/en/press-releases/2025/03/more-human-can-bear-israels-systematic-use-sexual-reproductive-and-other)
10. Microsoft confirms it's providing AI and cloud services to Israeli military for war in Gaza, accessed May 27, 2025, [https://www.datacenterdynamics.com/en/news/microsoft-confirms-its-providing-ai-and-cloud-services-to-israeli-military-for-war-in-gaza/](https://www.datacenterdynamics.com/en/news/microsoft-confirms-its-providing-ai-and-cloud-services-to-israeli-military-for-war-in-gaza/)
11. Israel/OPT: Microsoft confirms it provides cloud & artificial services to Israeli Defence Ministry amid ongoing war on Gaza - Business & Human Rights Resource Centre, accessed May 27, 2025, [https://www.business-humanrights.org/en/latest-news/israelopt-microsoft-confirms-it-provides-cloud-artificial-services-to-israeli-defence-ministry-amid-ongoing-war-on-gaza/](https://www.business-humanrights.org/en/latest-news/israelopt-microsoft-confirms-it-provides-cloud-artificial-services-to-israeli-defence-ministry-amid-ongoing-war-on-gaza/)
12. Microsoft acknowledges "standard commercial relationship" with Israel Ministry of Defence, conducts internal review of AI services | GamesIndustry.biz, accessed May 27, 2025, [https://www.gamesindustry.biz/microsoft-acknowledges-standard-commercial-relationship-with-israel-ministry-of-defence-conducts-internal-review-of-ai-services](https://www.gamesindustry.biz/microsoft-acknowledges-standard-commercial-relationship-with-israel-ministry-of-defence-conducts-internal-review-of-ai-services)
13. How Microsoft's AI Helped Israeli Military In Its War Against Gaza, accessed May 27, 2025, [https://www.ndtv.com/world-news/how-microsofts-ai-helped-israeli-military-in-its-war-against-gaza-8439414](https://www.ndtv.com/world-news/how-microsofts-ai-helped-israeli-military-in-its-war-against-gaza-8439414)
14. Microsoft denies claim its AI tech was used by IDF during war to target Gazans, accessed May 27, 2025, [https://www.timesofisrael.com/microsoft-denies-claim-its-ai-tech-was-used-by-idf-during-war-to-target-gazans/](https://www.timesofisrael.com/microsoft-denies-claim-its-ai-tech-was-used-by-idf-during-war-to-target-gazans/)
15. Microsoft fires staffer who protested AI tech sold to IDF - JNS.org, accessed May 27, 2025, [https://www.jns.org/microsoft-fires-staffer-who-protested-ai-tech-sold-to-idf/](https://www.jns.org/microsoft-fires-staffer-who-protested-ai-tech-sold-to-idf/)
16. Israel developing ChatGPT-like tool that weaponizes surveillance of Palestinians, accessed May 27, 2025, [https://www.972mag.com/israeli-intelligence-chatgpt-8200-surveillance-ai/](https://www.972mag.com/israeli-intelligence-chatgpt-8200-surveillance-ai/)
17. Probes Reveal Depth of Big Tech Complicity in Israel's AI-Driven ..., accessed May 27, 2025, [https://www.commondreams.org/news/big-tech-gaza-genocide](https://www.commondreams.org/news/big-tech-gaza-genocide)
18. Big Tech companies face allegations of war crimes complicity amid Israel's war in Gaza, accessed May 27, 2025, [https://www.business-humanrights.org/en/latest-news/big-tech-companies-allegedly-complicit-in-war-crimes-amid-israels-war-in-gaza-incl-company-responses/](https://www.business-humanrights.org/en/latest-news/big-tech-companies-allegedly-complicit-in-war-crimes-amid-israels-war-in-gaza-incl-company-responses/)
19. Microsoft fires employee who interrupted CEO's speech to protest AI ..., accessed May 27, 2025, [https://apnews.com/article/microsoft-build-israel-gaza-protest-worker-fired-a395ac137b74002886b2ad727b5ae5c2](https://apnews.com/article/microsoft-build-israel-gaza-protest-worker-fired-a395ac137b74002886b2ad727b5ae5c2)
20. Microsoft faces backlash over Gaza censorship claims | king5.com, accessed May 27, 2025, [https://www.king5.com/article/news/local/microsoft-rebukes-claim-employee-censorship/281-f5bec02c-2f32-4e43-888d-7a4210d2589c](https://www.king5.com/article/news/local/microsoft-rebukes-claim-employee-censorship/281-f5bec02c-2f32-4e43-888d-7a4210d2589c)
21. Supply Chain Human Rights Policy Statement (PDF) - Microsoft, accessed May 27, 2025, [https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Supply-Chain-Human-Rights-Policy-Statement.pdf](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Supply-Chain-Human-Rights-Policy-Statement.pdf)
22. Modern Slavery and Human Trafficking Statement (PDF) - Microsoft, accessed May 27, 2025, [https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Modern-Slavery-Human-Trafficking-Statement-FY17.pdf](https://cdn-dynmedia-1.microsoft.com/is/content/microsoftcorp/microsoft/msc/documents/presentations/CSR/Modern-Slavery-Human-Trafficking-Statement-FY17.pdf)
23. Israel – Hamas 2024 Symposium - The Gospel, Lavender, and the Law of Armed Conflict, accessed May 27, 2025, [https://lieber.westpoint.edu/gospel-lavender-law-armed-conflict/](https://lieber.westpoint.edu/gospel-lavender-law-armed-conflict/)
24. The Gospel: Israel turns to a new AI system in the Gaza war | Israel ..., accessed May 27, 2025, [https://www.aljazeera.com/program/the-listening-post/2023/12/9/the-gospel-israel-turns-to-a-new-ai-system-in-the-gaza-war](https://www.aljazeera.com/program/the-listening-post/2023/12/9/the-gospel-israel-turns-to-a-new-ai-system-in-the-gaza-war)
25. Israel uses AI chatbot 'Genie' to identify targets in Gaza war, accessed May 27, 2025, [https://www.newarab.com/news/israel-uses-ai-chatbot-genie-identify-targets-gaza-war](https://www.newarab.com/news/israel-uses-ai-chatbot-genie-identify-targets-gaza-war)
26. AI-assisted targeting in the Gaza Strip - Wikipedia, accessed May 27, 2025, [https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip](https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip)
27. Israel's AI Deployment in Gaza and Lebanon Wars - مركز المستقبل, accessed May 27, 2025, [https://futureuae.com/var.zip/Mainpage/Item/9735/blind-technology-israels-ai-deployment-in-gaza-and-lebanon-wars](https://futureuae.com/var.zip/Mainpage/Item/9735/blind-technology-israels-ai-deployment-in-gaza-and-lebanon-wars)
28. AI targeting in Gaza and beyond - Project Ploughshares, accessed May 27, 2025, [https://ploughshares.ca/ai-targeting-in-gaza-and-beyond/](https://ploughshares.ca/ai-targeting-in-gaza-and-beyond/)
29. Algorithmic targeting: the role of artificial intelligence in Israeli strikes in Gaza and its ethical implications, accessed May 27, 2025, [https://www.grip.org/algorithmic-targeting-the-role-of-artificial-intelligence-in-israeli-strikes-in-gaza-and-its-ethical-implications/](https://www.grip.org/algorithmic-targeting-the-role-of-artificial-intelligence-in-israeli-strikes-in-gaza-and-its-ethical-implications/)
30. 'A mass assassination factory': Inside Israel's calculated bombing of ..., accessed May 27, 2025, [https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/](https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/)
31. The Dehumanization of ISR: Israel's Use of Artificial Intelligence In Warfare, accessed May 27, 2025, [https://georgetownsecuritystudiesreview.org/2025/01/09/the-dehumanization-of-isr-israels-use-of-artificial-intelligence-in-warfare/](https://georgetownsecuritystudiesreview.org/2025/01/09/the-dehumanization-of-isr-israels-use-of-artificial-intelligence-in-warfare/)
32. AI in Israel's war on Gaza - Access Now, accessed May 27, 2025, [https://www.accessnow.org/publication/artificial-genocidal-intelligence-israel-gaza/](https://www.accessnow.org/publication/artificial-genocidal-intelligence-israel-gaza/)
33. Questions and Answers: Israeli Military's Use of Digital Tools in ..., accessed May 27, 2025, [https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza](https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza)
34. Examining the Malign Use of AI: A Case Study Report - DigitalCommons@UNO, accessed May 27, 2025, [https://digitalcommons.unomaha.edu/cgi/viewcontent.cgi?article=1125&context=ncitereportsresearch](https://digitalcommons.unomaha.edu/cgi/viewcontent.cgi?article=1125&context=ncitereportsresearch)
35. Live updates: Israel lets food into Gaza even as its forces attack a hospital, Palestinians say - AP News, accessed May 27, 2025, [https://apnews.com/article/mideast-israel-gaza-latest-819b2841d03136907357ca52c91ee859](https://apnews.com/article/mideast-israel-gaza-latest-819b2841d03136907357ca52c91ee859)
36. Israel's use of human shields in Gaza is widespread, sources say ..., accessed May 27, 2025, [https://apnews.com/article/israel-palestinians-hamas-war-army-human-shields-80f358dd2c87a1123f26ffada159701c](https://apnews.com/article/israel-palestinians-hamas-war-army-human-shields-80f358dd2c87a1123f26ffada159701c)
37. The Accountability of Software Developers for War Crimes Involving Autonomous Weapons: The Role of the Joint Criminal Enterprise - University of Pittsburgh Law Review, accessed May 27, 2025, [https://lawreview.law.pitt.edu/ojs/lawreview/article/download/822/510/1754](https://lawreview.law.pitt.edu/ojs/lawreview/article/download/822/510/1754)
38. About Human Rights at Google - Google - About Google, accessed May 27, 2025, [https://about.google/company-info/human-rights/](https://about.google/company-info/human-rights/)
39. Google Code of Conduct - Alphabet Investor Relations, accessed May 27, 2025, [http://abc.xyz/investor/google-code-of-conduct](http://abc.xyz/investor/google-code-of-conduct)
40. Big Tech companies face allegations of war crimes complicity amid Israel's war in Gaza, accessed May 27, 2025, [https://www.business-humanrights.org/my/latest-news/big-tech-companies-allegedly-complicit-in-war-crimes-amid-israels-war-in-gaza-incl-company-responses/](https://www.business-humanrights.org/my/latest-news/big-tech-companies-allegedly-complicit-in-war-crimes-amid-israels-war-in-gaza-incl-company-responses/)
41. Understanding Corporate Complicity: Extending the Notion beyond Existing Laws - Amnesty International, accessed May 27, 2025, [https://www.amnesty.org/ar/wp-content/uploads/2021/08/pol340012006en.pdf](https://www.amnesty.org/ar/wp-content/uploads/2021/08/pol340012006en.pdf)