# Source Verification Framework
## BLUF
**Source verification** is the disciplined assessment of a source's *reliability* (can this source be trusted?) and the *credibility* of each specific piece of reported information (is this specific claim accurate?). The two are independent: a highly reliable source can report incorrect information; an unreliable source can report correct information. Both must be assessed for every analytical use of open-source material. The NATO Admiralty Code — a two-dimensional grading system dating from World War II — remains the clearest operational standard. Combined with triangulation, provenance tracing, and explicit confidence calibration, source verification is the methodology that distinguishes OSINT intelligence production from information aggregation.
---
## The Admiralty Code
Developed by the British Admiralty for wartime intelligence evaluation and adopted by NATO, the Admiralty Code evaluates every intelligence item along two dimensions:
### Source Reliability (Alphabetical)
| Grade | Meaning | Criteria |
|---|---|---|
| **A** | Completely reliable | Established track record; authoritative institution; direct access |
| **B** | Usually reliable | Proven reliability on most occasions; consistent pattern of accurate reporting |
| **C** | Fairly reliable | Some reliability; specific domain competence; mixed track record |
| **D** | Not usually reliable | Pattern of inaccuracy; questionable access; biased reporting |
| **E** | Unreliable | Known falsification or bias; hostile disinformation source |
| **F** | Reliability cannot be judged | New source; insufficient track record to evaluate |
### Information Credibility (Numerical)
| Grade | Meaning | Criteria |
|---|---|---|
| **1** | Confirmed by other sources | Independent corroboration from sources of established reliability |
| **2** | Probably true | Consistent with other reporting; logical context; no contradictory evidence |
| **3** | Possibly true | Plausible; not confirmed; some consistency with other reporting |
| **4** | Doubtful | Inconsistent with other reporting; implausible in context |
| **5** | Improbable | Contradicted by reliable reporting; highly implausible |
| **6** | Truth cannot be judged | Insufficient context or collateral for evaluation |
**Notation:** Every intelligence item is tagged with a letter-number pair (e.g., **B-2**). This immediately tells the consumer how much weight to give the report.
**Example:**
- **A-1:** Confirmed by UN special rapporteur AND Bellingcat (multiple authoritative independent sources)
- **B-2:** Consistent reporting from two reliable journalists with track records in the region
- **C-3:** Plausible claim by a local media outlet with domain knowledge but mixed accuracy record
- **F-6:** Anonymous Telegram channel, new account, claim cannot be evaluated without additional sourcing
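In collection tooling, this two-axis tag is easy to carry alongside each item. A minimal sketch of one way to represent and validate the pair (an illustrative data model, not part of any NATO standard):

```python
from dataclasses import dataclass

RELIABILITY = set("ABCDEF")   # source reliability grades
CREDIBILITY = set("123456")   # information credibility grades

@dataclass(frozen=True)
class AdmiraltyTag:
    """Two-dimensional Admiralty Code grade attached to a single report."""
    reliability: str   # A (completely reliable) .. F (cannot be judged)
    credibility: str   # 1 (confirmed) .. 6 (cannot be judged)

    def __post_init__(self):
        if self.reliability not in RELIABILITY:
            raise ValueError(f"invalid reliability grade: {self.reliability!r}")
        if self.credibility not in CREDIBILITY:
            raise ValueError(f"invalid credibility grade: {self.credibility!r}")

    def __str__(self) -> str:
        return f"{self.reliability}-{self.credibility}"

print(AdmiraltyTag("B", "2"))   # -> B-2
```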
---
## Source Reliability Assessment
### Track Record Analysis
For any source — individual, outlet, or platform:
- **Past accuracy:** Has this source's prior reporting held up under scrutiny? How often have claims been retracted or corrected?
- **Transparency:** Does the source document its evidentiary base? Does it issue public corrections?
- **Domain competence:** Is this a general news source or a specialist with established expertise in the specific domain?
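The past-accuracy bullet above lends itself to a crude quantitative first pass. A minimal sketch, assuming you log each of a source's past claims together with a verified outcome; the thresholds are arbitrary placeholders for analyst judgment, not an Admiralty standard:

```python
def provisional_reliability(verified_true: int, verified_false: int) -> str:
    """Map a source's verified claim history onto a provisional A-F grade.

    Illustrative only: a real workflow would weight recency, domain,
    and the severity of errors rather than raw counts.
    """
    total = verified_true + verified_false
    if total < 5:
        return "F"   # insufficient track record to evaluate
    accuracy = verified_true / total
    if accuracy >= 0.95:
        return "A"
    if accuracy >= 0.85:
        return "B"
    if accuracy >= 0.70:
        return "C"
    if accuracy >= 0.50:
        return "D"
    return "E"

print(provisional_reliability(17, 3))   # -> "B"
```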
### Access Evaluation
- **Direct vs. inferred:** Does the source have direct access (presence at the event; firsthand documentation) or is reporting derived from others?
- **Geographic presence:** Does the source's reporting infrastructure reach the target location, or is it filing from a distance based on secondary information?
- **Language capability:** For conflict zone reporting, does the outlet have journalists who speak local languages and have local networks?
### Incentive Analysis
Every source has incentives that shape what they report and how:
- **Political alignment:** State-aligned media (RT, CGTN, Press TV) systematically skew toward state narratives; this does not mean everything they publish is false, but it shapes what they choose to report and how they frame it
- **Commercial pressure:** Outlets dependent on engagement may over-dramatize; outlets dependent on access may underreport to preserve source relationships
- **Ideological alignment:** Outlets with strong ideological positions systematically select facts that support their narrative
**Critical discipline:** Incentive analysis is not an excuse to dismiss sources wholesale. It is a tool for identifying *which types of claims* a source is likely to report accurately and which it is likely to distort.
### Institutional Context
- **Editorial standards:** Does the outlet have published editorial policies? Fact-checking process? Independent editorial structure from ownership?
- **Ownership and funding:** Who owns the outlet? Who funds operations? Undisclosed state funding is a significant red flag for reliability
- **Legal jurisdiction:** Outlets operating under authoritarian legal systems cannot publish content that contradicts state narrative; their absence of contradictory reporting is not evidence of accuracy
---
## Information Credibility: Triangulation
No single-source claim should be treated as confirmed intelligence, regardless of source reliability.
### Independent Source Requirement
For any factual claim:
- **Minimum two independent sources**
- **Independence means genuinely independent origin** — not the same original report republished by multiple outlets
- **Cross-linguistic corroboration** is stronger than same-language corroboration (an English-language outlet and a Russian-language outlet reporting the same fact based on different sources is stronger evidence than two English outlets citing the same Russian original)
### Provenance Tracing
Before citing a piece of evidence, trace it back to its original source:
- Who first reported this?
- What was their evidentiary basis?
- Have subsequent reports added corroboration or just repeated the original?
**Common failure mode:** "Reported by multiple outlets" where all outlets are citing the same single source. This is amplified single-sourcing, not corroboration.
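This failure mode is mechanical enough to check for. A minimal sketch, assuming each collected report records the ultimate origin it traces back to (the `Report` fields are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Report:
    outlet: str   # who published this version of the claim
    origin: str   # the ultimate source the claim traces back to

def corroboration_status(reports: list[Report]) -> str:
    """Classify a claim's sourcing by counting independent origins, not republications."""
    origins = {r.origin for r in reports}
    if len(origins) >= 2:
        return "corroborated"                 # genuinely independent origins
    if len(reports) >= 2:
        return "amplified single-sourcing"    # many outlets, one original report
    return "single-sourced"

# Three outlets all tracing back to the same Telegram post still count as one source:
claim = [
    Report("Outlet A", "telegram:channel_x/123"),
    Report("Outlet B", "telegram:channel_x/123"),
    Report("Outlet C", "telegram:channel_x/123"),
]
print(corroboration_status(claim))   # -> "amplified single-sourcing"
```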
### Cross-Category Corroboration
Stronger than multiple sources in the same category:
- Visual (photo/video/satellite) + testimonial (direct interview) + documentary (leaked documents)
- Different types of evidence independently converging on the same conclusion provide higher confidence than multiple sources of the same type
---
## Metadata Analysis
### EXIF and File Metadata
Images and videos carry metadata by default:
- Capture device (camera model, often specific unit)
- Capture timestamp
- GPS coordinates (if enabled)
- Editing history (software used, modifications)
**Verification use:**
- Consistency check: does the metadata match the claimed capture context?
- Platform stripping: most social platforms strip EXIF; raw files uploaded elsewhere preserve it
- Forgery detection: manipulated metadata often shows internal inconsistencies (e.g., a capture timestamp earlier than the camera model's release date); see the sketch below
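A minimal sketch of the consistency check using Pillow to read EXIF; the claimed-date comparison is illustrative, and most platform-downloaded files will return nothing because the EXIF has already been stripped:

```python
from datetime import datetime
from PIL import Image, ExifTags

def exif_summary(path: str) -> dict:
    """Extract the EXIF fields most useful for verification, where present."""
    exif = Image.open(path).getexif()
    named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {key: named.get(key) for key in ("Make", "Model", "DateTime", "Software")}

def timestamp_consistent(summary: dict, claimed_date: datetime) -> bool:
    """Compare the embedded capture timestamp against the claimed capture date."""
    raw = summary.get("DateTime")            # EXIF format: "YYYY:MM:DD HH:MM:SS"
    if raw is None:
        return True                          # absence of EXIF is not evidence of forgery
    captured = datetime.strptime(raw, "%Y:%m:%d %H:%M:%S")
    return captured.date() == claimed_date.date()

# summary = exif_summary("raw_upload.jpg")
# print(summary, timestamp_consistent(summary, datetime(2024, 3, 14)))
```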
### Digital Forensics for Doctored Content
- **Reverse image search** (TinEye, Yandex Images, Google) — detect if the image appeared earlier in a different context
- **Error level analysis (ELA)** — differences in JPEG recompression artifacts can indicate edited regions (a minimal sketch follows this list)
- **InVID/WeVerify** — video-specific verification including keyframe extraction
- **AI detection tools** (FakeCatcher, various research tools) — detect generative AI content, though rapidly outpaced by generation technology
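Error level analysis is straightforward to prototype. A minimal sketch, assuming Pillow: re-save the image at a fixed JPEG quality and amplify the per-pixel difference; regions that recompress very differently from their surroundings warrant closer inspection, though interpreting the result still takes practice:

```python
import io
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an amplified difference image between the original and a re-saved JPEG.

    Edited regions often show a different error level than the rest of the frame
    because they have been through a different compression history.
    """
    original = Image.open(path).convert("RGB")

    # Recompress at a fixed quality and reload from memory.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Absolute per-pixel difference, scaled so faint differences become visible.
    diff = ImageChops.difference(original, recompressed)
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: int(value * scale))

# error_level_analysis("suspect.jpg").save("suspect_ela.png")
```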
---
## Specific Source Categories
### Social Media
**Evaluation criteria:**
- Account age and posting history (new account = unreliable until track record established)
- Posting pattern consistency (sudden bursts of activity often indicate a coordinated operation rather than organic use; see the sketch after the red-flags list)
- Network analysis (who follows, who this account follows, coordination patterns)
- Cross-platform consistency (is this person/account reliable across different platforms?)
**Red flags:**
- Accounts created recently but claiming to speak authoritatively about ongoing events
- Accounts that primarily repost without adding analysis or original reporting
- Accounts in coordinated networks ([[02 Concepts & Tactics/Bot Networks]])
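Several of these criteria reduce to simple heuristics over account metadata. A minimal sketch, with arbitrary thresholds standing in for analyst judgment (the inputs are illustrative, not any platform's API):

```python
from datetime import datetime, timedelta

def account_red_flags(created_at: datetime, post_times: list[datetime],
                      original_posts: int, reposts: int) -> list[str]:
    """Return crude red-flag labels for a social media account."""
    now = datetime.utcnow()
    flags = []

    # New account claiming authority on ongoing events: no track record (grade F).
    if now - created_at < timedelta(days=90):
        flags.append("new account: reliability cannot be judged")

    # Burst posting: a large share of all posts packed into a single day.
    if post_times:
        days = {t.date() for t in post_times}
        busiest = max(sum(1 for t in post_times if t.date() == day) for day in days)
        if busiest / len(post_times) > 0.5:
            flags.append("burst posting pattern: possible coordinated activity")

    # Mostly amplification, little original reporting or analysis.
    if reposts > 5 * max(original_posts, 1):
        flags.append("primarily reposts: little original reporting")

    return flags
```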
### State Media
Not automatically unreliable, but require explicit incentive analysis:
- Reporting about domestic politics: unreliable on regime-critical topics
- Reporting about foreign conflicts: often reliable on operational facts, unreliable on attribution and framing
- Reporting about third parties: frequently reliable (and sometimes the only source)
**Example:** Chinese state media reports on Indian troop movements may be operationally accurate even if framed to support CCP messaging. Russian state media reports on Western political dynamics may be accurate on facts while biased in framing.
### Leaked Documents
Leaked intelligence, diplomatic, or corporate documents (Snowden, Manning, Panama Papers, Vault 7) are high-value but require specific verification:
- **Authentication:** Are the documents genuine? (Authentication has been done publicly for major leaks)
- **Selection bias:** What was NOT leaked? Leaks are typically incomplete; the leaker's selection criteria bias what's available
- **Staleness:** Documents from years ago may not reflect current operations
- **Publisher framing:** Leak publishers (journalists, activists) interpret and contextualize; the interpretation is separate from the document itself
### Anonymous Sources
The weakest category; evaluate with extreme caution:
- **Is access plausible?** Does the claimed position give the source the claimed access?
- **Is motivation assessable?** Why is this source sharing information anonymously? What do they gain?
- **Can the claim be tested?** Can corroboration be sought through non-anonymous sources or physical evidence?
---
## Operational Checklist
Before citing any open-source claim as intelligence:
- [ ] Identified the original source (not just the re-posting outlet)
- [ ] Assessed source reliability (Admiralty Code A–F)
- [ ] Assessed information credibility (Admiralty Code 1–6)
- [ ] Identified at least one independent corroborating source (or explicitly flagged single-sourcing)
- [ ] Checked provenance (who saw it first; has it been superseded?)
- [ ] Considered adversarial manipulation possibility
- [ ] Documented the verification chain (URLs, screenshots, timestamps)
- [ ] Calibrated confidence level per [[08 Guides & Manuals/Analytical Frameworks/Intelligence Confidence Levels|IC standards]]
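One way to make the checklist hard to skip is to encode it as the record every item must carry before it can be cited. A minimal sketch (field names are illustrative, not an IC standard):

```python
from dataclasses import dataclass, field

@dataclass
class VerificationRecord:
    """What an analyst should capture before citing an open-source claim."""
    claim: str
    original_source: str                                           # the origin, not the re-posting outlet
    reliability: str                                               # Admiralty A-F
    credibility: str                                               # Admiralty 1-6
    corroborating_sources: list[str] = field(default_factory=list)
    manipulation_considered: bool = False
    verification_chain: list[str] = field(default_factory=list)   # URLs, screenshots, timestamps
    confidence: str = "low"                                        # low / moderate / high

    def ready_to_cite(self) -> bool:
        """Single-sourced or unexamined claims must be flagged, not cited as confirmed."""
        return (
            bool(self.corroborating_sources)
            and self.manipulation_considered
            and bool(self.verification_chain)
        )
```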
---
## Key Connections
- [[02 Concepts & Tactics/OSINT]] — the discipline this methodology enables
- [[08 Guides & Manuals/OSINT Methodologies/Geolocation Methodology]] — verification of geographic claims specifically
- [[08 Guides & Manuals/Analytical Frameworks/Intelligence Confidence Levels]] — how verified source quality translates to reporting confidence
- [[08 Guides & Manuals/Analytical Frameworks/Analysis of Competing Hypotheses]] — the analytical method verification feeds into
- [[08 Guides & Manuals/Operational Manuals/Open-Source Intelligence Manual]] — the parent operational document
- [[02 Concepts & Tactics/Disinformation Campaign]] — the threat verification defends against
- [[02 Concepts & Tactics/Active Measures]] — adversarial operations specifically designed to defeat verification