The Cognitive-Warfare Research-Policy Gap: A 6–12 Month Window
Strategic Intelligence Assessment | intelligencenotes.com
Bottom Line Up Front
The cognitive-warfare research community has converged on a stable conceptual vocabulary. National regulatory frameworks are catching up. Operational defenders, in many institutions, are still working from 2022-era playbooks. The asymmetry between what is known and what is operationally deployed defines the actionable opportunity for any institution willing to commit to evidence-first defensive programs in the next 6–12 months. Brazil enters its 2026 electoral cycle inside this gap, with intelligence-community alerts public and regulatory cover in place — but with operational-defense maturity uneven across the three contested surfaces of electoral infrastructure, election-worker pipelines, and platform algorithmic layers.1
The gap is not primarily an investment problem. It is a translation problem: research has produced typologies, attribution methodologies, and counter-narrative frameworks that have not yet been refactored into deployable defensive procedures inside Brazilian and broader Latin American institutions. The strategic implication is that the institutions that close this gap first — by treating cognitive-warfare countermeasures as operational infrastructure rather than communications strategy — gain a structural defensive advantage that compounds across multiple election cycles.
The Vocabulary Has Stabilized — Under FIMI
The European External Action Service formally adopted Foreign Information Manipulation and Interference (FIMI) as its canonical analytical category through a body of work conducted across 2022 and published as the 1st EEAS Report on FIMI Threats on 7 February 2023.2 The framework explicitly replaced the earlier “disinformation” terminology because disinformation centered on the content (true vs. false), while FIMI centers on the behavior (manipulation, coordination, intent, attribution). The shift matters operationally: it allows defenders to act on behavioral indicators — coordinated inauthentic behavior, persona-network amplification, cross-platform laundering — without first having to litigate the truth-status of any specific claim.
Subsequent reporting cycles (2nd Report January 2024, 3rd Report March 2025) have stabilized the FIMI methodology around four pillars: incident-data collection (DISARM-framework-compatible), tactical-technical-operational analysis, attribution to threat actors, and exposure of the underlying behavioral pattern. The NATO Strategic Communications Centre of Excellence in Riga, the Atlantic Council’s Digital Forensic Research Lab, the RAND Corporation, and the Carnegie Endowment’s Digital Democracy program operate on broadly compatible analytical scaffolds. The research community is, on the substance, aligned — and the 2025 hybrid-threat publication cycle from GLOBSEC, IISS, and RUSI converges on the same operational vocabulary.3
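The first pillar is easiest to make concrete. A minimal sketch of a DISARM-compatible incident record follows; the field names, the `FimiIncident` class, and the sample values are illustrative assumptions, not the official DISARM or STIX interchange schema, but they show the key design choice: the record tracks behavior and attribution confidence, not the truth-status of any claim.

```python
from dataclasses import dataclass, field

# Illustrative sketch only. Field names are hypothetical, not the official
# DISARM/STIX schema; real deployments exchange incidents as STIX objects.
@dataclass
class FimiIncident:
    incident_id: str
    narrative: str                                    # the behavioral pattern observed
    techniques: list = field(default_factory=list)    # DISARM technique IDs, e.g. "T0049"
    platforms: list = field(default_factory=list)     # surfaces where the behavior appeared
    attributed_actor: str = "unattributed"
    confidence: str = "low"                           # low / medium / high

    def is_attributable(self) -> bool:
        """Exposure requires a named actor and at least medium confidence."""
        return self.attributed_actor != "unattributed" and self.confidence in ("medium", "high")

# Hypothetical incident, for illustration only.
incident = FimiIncident(
    incident_id="BR-2026-0001",
    narrative="coordinated amplification of election-worker coercion content",
    techniques=["T0049"],
    platforms=["platform-a", "platform-b"],
)
print(incident.is_attributable())  # prints False: no actor, low confidence
```

Note that nothing in the record asserts whether the amplified content was true or false; the incident is actionable on behavioral grounds alone, which is the operational point of the FIMI framing.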
What Defenders Are Actually Running
The implementation gap shows up in three layers: the 2022-era playbook baseline, the FIMI-aligned defensive architecture that should replace it, and the regulatory layer sitting above both.
The 2022-era playbook baseline — what most operational defenders are still running — was built around the Strengthened EU Code of Practice on Disinformation, presented to the European Commission on 16 June 2022 by 34 signatories with 44 commitments and 128 specific measures,4 together with the late-stage Stanford Internet Observatory election-integrity frameworks and DFRLab incident-response playbooks. These frameworks were content-centric, platform-mediated, and assumed durable cooperation between platforms and researchers. Two structural shifts have invalidated their core assumptions: first, the dissolution of major platform-trust-and-safety teams across 2023–2024 broke the cooperation channel; second, generative-AI-enabled persona networks have collapsed the unit cost of synthetic content from specialist-only to commodity, breaking the volume assumption underneath manual-review pipelines.
The FIMI / DISARM-aligned defensive architecture that has emerged in research and is now being adopted in selected European institutions involves: behavioral-indicator monitoring rather than content-truth assessment, attribution graphs rather than single-incident debunking, prebunking infrastructure rather than reactive correction, and red-team / blue-team exercises rather than after-action reporting. These four shifts together are the substantive content of the research-policy gap. Each is implementable with existing tooling. Each requires institutional commitments — staff training, procedural redesign, cross-organization coordination — that most LATAM and Brazilian institutions have not yet made.
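The first of those four shifts, behavioral-indicator monitoring, can be sketched in a few lines. The example below flags candidate coordinated clusters purely from posting behavior: accounts that push identical content within a short time window get linked, and connected components of that link graph become attribution-graph candidates. All account names, content hashes, timestamps, and the 120-second window are hypothetical; a real pipeline would tune the window and add persona-level features.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical observation stream: (account, content_hash, timestamp_seconds).
posts = [
    ("acct1", "h1", 0), ("acct2", "h1", 30), ("acct3", "h1", 55),
    ("acct4", "h2", 0), ("acct5", "h3", 10_000),
]

def coordination_edges(posts, window=120):
    """Link two accounts that post the same content hash within `window` seconds."""
    by_hash = defaultdict(list)
    for account, content_hash, ts in posts:
        by_hash[content_hash].append((account, ts))
    edges = set()
    for items in by_hash.values():
        for (a1, t1), (a2, t2) in combinations(items, 2):
            if a1 != a2 and abs(t1 - t2) <= window:
                edges.add(tuple(sorted((a1, a2))))
    return edges

def clusters(edges):
    """Connected components of the co-amplification graph: candidate coordinated clusters."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, components = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            component.add(n)
            stack.extend(adj[n] - seen)
        components.append(component)
    return components

# acct1-3 co-amplified "h1" within the window; acct4 and acct5 posted alone.
print(clusters(coordination_edges(posts)))
```

The detector never inspects what "h1" says, only who pushed it and when; that content-agnosticism is exactly what lets defenders act without first litigating the truth-status of a claim.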
The Brazilian regulatory layer has moved faster than operational maturity. The Tribunal Superior Eleitoral codified election-cycle prohibitions on AI-generated deepfakes, candidate-simulating chatbots, and synthetic content via Resolution 23.610/2019 (the canonical electoral-propaganda framework), Resolution 23.732/2024 (which added the deepfake prohibition and AI-disclosure labeling requirements for the 2024 cycle), and Resolution 23.755/2026 (the operative instrument for the 2026 cycle, which adds a 72-hour pre-election / 24-hour post-election AI-content blackout window and prescribes administrative fines under Article 57-D of Lei 9.504/1997).5 The legal cover exists. The enforcement capacity is the binding constraint, as the Edition 002 Lead Story noted explicitly.
Where Brazil Sits in the Gap
Brazilian cognitive-defense maturity is asymmetric across the three contested surfaces.
Electoral-court infrastructure (cyber). Most mature. The TSE has operated under sustained cyber-pressure since 2018 and has hardened its core voting-system surface. The cyber-defense doctrine is institutionalized.
Election-worker pipelines (social engineering). Less mature. Municipal-level training, vetting, and operational-security procedures vary widely across Brazil’s 5,570+ municipalities. This is the surface where adversaries with FIMI-aligned playbooks gain leverage: coordinated coercion, social-engineering-based persona infiltration, micro-targeted disinformation framed as operational instructions.
Platform algorithmic surfaces (coordinated inauthentic behavior). Least mature. Brazilian institutions have limited direct visibility into platform algorithmic dynamics, depend on platform self-disclosure (which has degraded since 2023), and lack the cross-organization-attribution graphs that European institutions like EEAS StratComm have built. This is also the surface where the Aquatic Panda threat-actor cluster — a China-nexus group attributed by CrowdStrike to a Chinese contractor and assessed as having likely targeted South America-based entities including Brazilian organizations across the 2022–2024 window — has documented baseline intent and capability.6
Russian-aligned narrative operations on Brazilian Western-alignment topics remain present in open monitoring, calibrated to amplify pre-existing social fractures rather than manufacture new ones.
What to Watch (the 6–12 Month Window)
Three indicators will signal whether Brazil — and the broader LATAM region — closes the gap before the 2026 electoral cycle runs hot.
First, TSE enforcement against deepfake / chatbot / synthetic-content violations. The first cases under the existing regulatory framework will set precedent. If enforcement is reactive and slow, the regulatory layer will not deter; if it is fast and visible, it will partially compensate for operational-defense gaps elsewhere.
Second, FIMI-aligned operational adoption in any Brazilian institution. Watch for the first Brazilian institution — academic, governmental, or private — that publishes a FIMI / DISARM-aligned incident report. That publication will mark the moment the research-vocabulary penetrates operational defense in the region.
Third, cross-institutional attribution coordination. Whether ABIN, the Federal Police, the TSE, and major Brazilian universities establish an attribution-coordination mechanism on the model of the EU Rapid Alert System will determine whether Brazil’s 2026 cognitive battlespace is contested by an aligned defensive coalition or by uncoordinated single-institution responses.
The window is 6–12 months. The institutions that move first establish the operational baseline that the rest of the region copies; the institutions that move late inherit a defense posture that was already invalidated when they adopted it.
Footnotes

1. Edition 002 — Brazil 2026: The Cognitive Battlespace, Lead Story; ABIN public posture on “malicious actions to delegitimize the electoral model”; Control Risks Brazil electoral-protection desk monitoring, April 2026. (Confidence: High — multiple independent sources cited in the source brief.) ↩
2. European External Action Service, 1st EEAS Report on Foreign Information Manipulation and Interference Threats, published 7 February 2023 — https://www.eeas.europa.eu/eeas/1st-eeas-report-foreign-information-manipulation-and-interference-threats_en. The report formalized FIMI as the EU’s canonical analytical category, distinguishing it from the earlier content-centric “disinformation” framing. The methodology has stabilized across two subsequent annual reports: 2nd EEAS Report on FIMI Threats (23 January 2024) — https://www.eeas.europa.eu/eeas/2nd-eeas-report-foreign-information-manipulation-and-interference-threats_en — and 3rd EEAS Report on FIMI Threats (19 March 2025) — https://www.eeas.europa.eu/eeas/3rd-eeas-report-foreign-information-manipulation-and-interference-threats-0_en. (Confidence: High — three independent primary publications at the primary domain.) ↩
3. Three convergent 2025 think-tank publications anchor the analytical-vocabulary alignment claim: GLOBSEC, GLOBSEC Trends 2025: Ready for a New Era? (May 2025) — https://www.globsec.org/what-we-do/publications/globsec-trends-2025-ready-new-era; IISS, Russia’s Information Confrontation Doctrine in Practice (2014–Present): Intent, Evolution and Implications, by Voo & Singh (June 2025) — https://www.iiss.org/research-paper/2025/06/russias-information-confrontation-doctrine-in-practice-2014present-intent-evolution-and-implications/; RUSI, Russia, AI and the Future of Disinformation Warfare, by Wallner, Copeland & Giustozzi (30 June 2025) — https://www.rusi.org/explore-our-research/publications/emerging-insights/russia-ai-and-future-disinformation-warfare. (Confidence: High — three independent primary publications from editorially distinct organizations.) ↩
4. European Commission, 2022 Strengthened Code of Practice on Disinformation, presented to the Commission on 16 June 2022 by 34 signatories. The strengthened version contains 44 commitments and 128 specific measures and was subsequently integrated into the Digital Services Act enforcement framework on 13 February 2025 — https://digital-strategy.ec.europa.eu/en/library/2022-strengthened-code-practice-disinformation. (Confidence: High — primary document at the EU Commission digital-strategy domain.) ↩
5. Tribunal Superior Eleitoral (Brasil): (a) Resolução nº 23.610/2019 (18 December 2019), the canonical electoral-propaganda framework — https://www.tse.jus.br/legislacao/compilada/res/2019/resolucao-no-23-610-de-18-de-dezembro-de-2019; (b) Resolução nº 23.732/2024 (27 February 2024), which amended 23.610 to add the Art. 9º-C deepfake prohibition and the Art. 9º-B AI-disclosure labeling and candidate-simulating-chatbot ban for the 2024 cycle — https://www.tse.jus.br/legislacao/compilada/res/2024/resolucao-no-23-732-de-27-de-fevereiro-de-2024; (c) Resolução nº 23.755/2026 (2 March 2026), the operative instrument for the 2026 cycle, prescribing the 72-hour pre-election / 24-hour post-election AI-content blackout and administrative fines under Article 57-D of Lei 9.504/1997 — content confirmed via the TSE communications page https://www.tse.jus.br/comunicacao/noticias/2026/Abril/por-dentro-das-eleicoes-conheca-as-regras-sobre-uso-de-ia-na-campanha-eleitoral-de-2026. (Confidence: High for 23.610 and 23.732; High for 23.755 content, with a Low gap on the canonical compiled-text URL pending direct retrieval from the TSE compiled-resolutions index.) ↩
6. CrowdStrike, 2025 Latin America Threat Landscape Report, 19 May 2025 — https://www.crowdstrike.com/en-us/blog/2025-latam-threat-landscape-report-deep-dive/. The source attributes the Aquatic Panda cluster to a Chinese contractor and assesses that the cluster “likely conducted reconnaissance against entities in Brazil” in 2023, with the broader assessment that the actor “has likely targeted South America-based entities from 2022 to 2024.” Hedge language (“likely”) preserved per source. CrowdStrike’s separate adversary-profile page for Aquatic Panda (https://www.crowdstrike.com/en-us/adversaries/aquatic-panda/) lists Asia as the primary focus and does not mention LATAM at page level; the LATAM attribution is anchored in the regional-survey product. (Confidence: Medium — single-source attribution at a primary-grade vendor; one supplementary Microsoft / Mandiant corroboration would raise to High.) ↩