Abstract
Quantitative intelligence analysis often leans on pattern recognition, but in adversary‑shaped environments correlations can be engineered. Building on Judea Pearl’s structural causal models, this article makes identification, not estimation, the gate to credible claims. It shows when effects are recoverable from observational data and when additional leverage—mediators, instruments, modest interventions, or transport adjustments—is required. The framework unifies association, intervention, and counterfactuals and extends to sequential and multi‑agent settings (Elias Bareinboim’s causal reinforcement learning and causal game theory), where strategies are modelled as interventions and evaluated under adaptation. Across operational cases, analysts specify a causal graph, test its implications, determine identifiability, and only then estimate. This identification‑first discipline separates artefacts from effects, hardens AI/ML pipelines against manipulation, and ties analysis directly to ‘decision advantage’: identify → estimate → decide → iterate. The result is an explicit, testable mapping from action to outcome that turns uncertain signals into reliable policy support. Echoing Sherman Kent, intelligence analysis earns trust when it distinguishes observation from inference and states what is known—and unknown—up front.
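The identification‑first discipline summarised above can be sketched in a few lines. The toy model below is an illustration of my own, not from the article: a confounder Z shapes both a treatment X and an outcome Y (with invented coefficients), so the naive X–Y association is an engineered‑looking artefact, while adjusting for Z (a valid backdoor adjustment in this graph) identifies the true effect before any estimation is trusted.

```python
# Hypothetical linear SCM: Z -> X, Z -> Y, X -> Y (true effect of X on Y is 1.5).
# The backdoor path X <- Z -> Y biases the naive association; conditioning
# on Z closes it, so the adjusted regression identifies the causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
Z = rng.normal(size=n)                       # confounder
X = 0.8 * Z + rng.normal(size=n)             # treatment, influenced by Z
Y = 1.5 * X + 1.2 * Z + rng.normal(size=n)   # outcome

# Naive association: regress Y on X alone (backdoor path left open)
naive = np.linalg.lstsq(np.c_[X, np.ones(n)], Y, rcond=None)[0][0]

# Identified estimate: adjust for Z, closing the backdoor path
adjusted = np.linalg.lstsq(np.c_[X, Z, np.ones(n)], Y, rcond=None)[0][0]

print(f"naive={naive:.2f}  adjusted={adjusted:.2f}  truth=1.50")
```

The naive coefficient lands well above 1.5 while the adjusted one recovers it, mirroring the abstract's point: check identifiability against the graph first, and only then estimate.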