Intelligentised Warfare

Core Definition (BLUF)

Intelligentised Warfare (智能化战争) is the doctrinal paradigm officially adopted by the People’s Liberation Army (PLA) to describe the future of conflict, characterised by the pervasive integration of artificial intelligence (AI), machine learning, quantum computing, and big data across all military operations. Its primary strategic purpose is decision superiority: outpacing adversary decision cycles (the observe–orient–decide–act, or OODA, loop) through algorithmic command and control, human-machine teaming, and the deployment of autonomous weapons systems.

Conceptual & Historical Origins

The concept originated in the military academic circles of the People’s Republic of China, notably the Academy of Military Sciences (AMS), as a dialectical evolution of the PLA’s previous doctrine of “Local War under Informationised Conditions” (Informationised Warfare). It gained formal codification in China’s 2019 National Defence White Paper and reflects close observation of US military initiatives (such as the Third Offset Strategy and Project Maven) as well as the rapid commercial advancement of civilian AI. Foundational theorists argue that just as mechanised warfare superseded attrition warfare, and informationisation superseded mechanisation, “intelligentisation” will become the decisive factor in victory, fundamentally shifting the centre of gravity from physical destruction to cognitive and algorithmic dominance.

Operational Mechanics (How it Works)

The execution of Intelligentised Warfare relies on transforming raw data into automated, high-velocity military action through several key pillars:

  • Algorithmic Decision Support: Processing vast streams of multi-source intelligence to generate automated, optimised courses of action for commanders, radically compressing the temporal gap between sensor and shooter.
  • Human-Machine Teaming: Integrating human cognitive flexibility and strategic intent with machine processing speed, encompassing both physical platforms (e.g., manned-unmanned teaming, or MUM-T, in which manned aircraft command “loyal wingman” drones) and cognitive systems (AI-assisted staff planning).
  • Autonomous & Swarm Systems: Deploying massed, self-organising clusters of Unmanned Aerial Vehicles (UAVs), Unmanned Surface Vessels (USVs), and ground robotics designed to saturate, adapt, and overwhelm adversary defences through distributed lethality.
  • Cloud-Based Command and Control (C2): Establishing an omnipresent “combat cloud” that seamlessly networks dispersed sensors, processing nodes, and strike assets across all domains into a unified, resilient kill web.
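
The swarm pillar above can be caricatured with a toy target-allocation routine: each drone greedily claims its nearest unassigned target, and re-running the routine after a drone drops out crudely mimics a “self-healing” reassignment. This is a minimal sketch over invented coordinates, not a description of any fielded system.

```python
def allocate_targets(drones, targets):
    """Greedy nearest-target assignment for a drone swarm (toy model).

    drones, targets: lists of (x, y) positions. Returns a dict mapping
    drone index -> target index. Re-running after removing a lost drone
    gives a crude 'self-healing' reassignment.
    """
    assignment = {}
    unclaimed = set(range(len(targets)))
    for d, (dx, dy) in enumerate(drones):
        if not unclaimed:
            break  # more drones than targets
        nearest = min(
            unclaimed,
            key=lambda t: (targets[t][0] - dx) ** 2 + (targets[t][1] - dy) ** 2,
        )
        assignment[d] = nearest
        unclaimed.discard(nearest)
    return assignment

# Two drones, two targets: each claims the closer one.
plan = allocate_targets([(0, 0), (10, 0)], [(1, 1), (9, 1)])
```

A real allocator would solve this as a constrained optimisation (e.g., the Hungarian algorithm) while accounting for fuel, loadout, and communications loss; the greedy loop only conveys the shape of distributed target allocation.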

Modern Application & Multi-Domain Use

Kinetic/Military: Focuses on the deployment of hypersonic glide vehicles, autonomous loitering munitions, and robotic combat vehicles. AI is utilised to dynamically predict adversary logistics, optimise supply chains, and coordinate complex multi-axis joint fires without the bottleneck of human micromanagement.
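
As a loose illustration of the supply-chain-optimisation idea (not any actual PLA system), an automated scheduler might rank units by demand and issue limited stock greedily; every unit name and figure below is invented.

```python
def allocate_supplies(stock, demands):
    """Greedily issue stock to units in descending order of demand (toy model).

    stock: total quantity available. demands: dict of unit -> quantity needed.
    Returns a dict of unit -> quantity issued; largest demands are served first.
    """
    plan = {}
    for unit, need in sorted(demands.items(), key=lambda kv: -kv[1]):
        issued = min(need, stock)
        plan[unit] = issued
        stock -= issued
    return plan

# 100 units of stock against 130 units of demand: the largest demands win.
plan = allocate_supplies(100, {"alpha": 60, "bravo": 30, "charlie": 40})
```

A production system would replace the greedy pass with demand forecasting and constrained optimisation; the sketch only shows how allocation decisions can be removed from the human loop.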

Cyber/Signals: The execution of machine-speed Cyber Warfare and Cognitive Electronic Warfare (CEW). AI algorithms autonomously scan adversary networks for zero-day vulnerabilities, generate polymorphic malware, and dynamically adjust radar frequencies to spoof or jam sensors in milliseconds, creating a self-healing electromagnetic posture.
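
The frequency-agility element of cognitive EW can be sketched as a simple channel selector: smooth each channel’s observed interference with an exponential moving average, then hop to the quietest one. This is a hypothetical toy model; real CEW systems learn waveform features and adversary behaviour, not just power levels.

```python
def update_interference(history, channel, power, alpha=0.3):
    """Exponentially smoothed interference estimate per channel (toy model)."""
    history[channel] = alpha * power + (1 - alpha) * history.get(channel, 0.0)
    return history

def pick_channel(history, channels):
    """Hop to the channel with the lowest smoothed interference estimate."""
    return min(channels, key=lambda c: history.get(c, 0.0))

# The 9.1 GHz channel is being jammed hard; the selector hops away from it.
obs = {}
for channel, power in [(9.1, 1.0), (9.4, 0.2), (9.7, 0.05), (9.1, 1.0)]:
    update_interference(obs, channel, power)
```

The moving average keeps the selector from overreacting to a single noisy reading, which is the basic trade-off any adaptive spectrum-management loop has to make.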

Cognitive/Information: The weaponisation of algorithms for Cognitive Warfare. This entails using deep learning to execute micro-targeted psychological operations (PsyOps), mass-produce hyper-realistic synthetic media (deepfakes) to degrade an adversary’s societal cohesion, and manipulate the algorithmic curation of global social media platforms to achieve narrative control.

Historical & Contemporary Case Studies

Case Study 1: PLA Unmanned Swarm Testing (2020–Present) - To operationalise the doctrine, the People’s Liberation Army has conducted extensive, publicised tests of fixed-wing UAV swarms and autonomous USV flotillas. These exercises demonstrate self-healing network topologies, autonomous target allocation, and coordinated saturation strikes, and are aimed directly at testing asymmetric responses to the United States Navy’s carrier strike groups in the Indo-Pacific.

Case Study 2: Israel Defense Forces (IDF) “Algorithmic Warfare” - While Intelligentised Warfare is formally a PLA doctrine, its practical application is globally observable. The IDF’s use of AI targeting systems (such as the reported “Gospel” and “Lavender” platforms) during conflicts in Gaza serves as a critical real-world proxy. These systems reportedly employ machine learning to synthesise SIGINT, IMINT, and OSINT into target banks at a velocity vastly exceeding human capacity, giving observing militaries empirical data on both the operational efficacy and the friction of algorithmic targeting.

Case Study 3: PLA Joint Exercises in the Western Pacific (2023–2025) — Large-scale drills simulating A2/AD and blockade operations around Taiwan integrated intelligentised C2 systems fusing satellite constellations, maritime militia vessels, and conventional naval assets. AI automatically generated firing solutions for long-range anti-ship ballistic missiles against simulated carrier strike groups, marking the doctrine’s transition from theoretical research to operational field-testing.

Case Study 4: Cognitive Operations in Regional Elections (Indo-Pacific, 2024–2025) — State-aligned actors reportedly utilised algorithmic distribution networks and large language models (LLMs) to micro-target demographic vulnerabilities in Taiwanese and Philippine electoral environments. These campaigns illustrate the operationalisation of “intelligentised,” sub-threshold Information Confrontation: pursuing strategic geopolitical realignment without kinetic escalation.

Intersecting Concepts & Synergies

Enables: System Destruction Warfare, Decision Superiority, OODA Loop compression, Multi-Domain Operations (MDO), Distributed Lethality, Cognitive Warfare, Algorithmic Warfare.

Counters/Mitigates: Information overload (by filtering noise for commanders), human cognitive fatigue, adversary command and control (C2) nodes, traditional massed force structures (via swarm saturation).

Vulnerabilities: Highly susceptible to data poisoning and other adversarial machine learning attacks, algorithmic bias, and rigid adherence to flawed machine logic. The doctrine’s heavy reliance on robust, uninterrupted data links and cloud architecture makes it critically vulnerable to severe electronic warfare (EW) and to kinetic or non-kinetic anti-satellite (ASAT) attacks. Furthermore, the doctrine risks “flash combat” scenarios in which interacting autonomous systems escalate conflicts at machine speed, bypassing human diplomatic off-ramps.
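
The data-poisoning vulnerability is easy to demonstrate on a toy classifier: a handful of mislabelled training samples injected into a nearest-centroid identification model drags the “friendly” centroid toward hostile territory, flipping classifications. All data below is synthetic.

```python
def centroid(points):
    """Mean position of a list of (x, y) samples."""
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    """Nearest-centroid classifier; data maps label -> list of (x, y) samples."""
    return {label: centroid(pts) for label, pts in data.items()}

def predict(model, point):
    """Classify a contact by its nearest class centroid."""
    px, py = point
    return min(model, key=lambda lbl: (model[lbl][0] - px) ** 2 + (model[lbl][1] - py) ** 2)

# Clean training set: two well-separated clusters.
clean = {"friend": [(0, 0), (0, 1), (1, 0)],
         "foe":    [(5, 5), (5, 6), (6, 5)]}

# Poisoned set: two hostile samples mislabelled as friendly.
poisoned = {"friend": [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6)],
            "foe":    [(6, 5)]}
```

A contact at (3.5, 3.5) is classified as “foe” by the clean model but as “friend” by the poisoned one, even though only two labels were corrupted: the kind of silent failure that adversarial-ML red-teaming is meant to surface.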