Abstract
The term "autonomous artificial intelligence" pervades industry, policy, and scholarship, yet no formal argument has established that any existing or near-term system satisfies the minimal philosophical conditions for autonomy developed in the philosophy of action (Kant, 1785; Frankfurt, 1971; Raz, 1986; List & Pettit, 2011). This article demonstrates that current systems, including large language models, reinforcement-learning agents, self-driving vehicles, and multi-agent frameworks, are heteronomous instruments rather than autonomous agents. We introduce the Autonomy Threshold Theorem (ATT): a system is autonomous if and only if its terminal evaluation function is endogenously generated and architecturally closed, i.e., no external entity retains override authority over what counts as success. The paper establishes precise criteria for drawing system boundaries, addresses the objection that human autonomy likewise rests on externally specified (evolutionary) criteria, and examines whether genuine machine autonomy is achievable in principle. Because every deployed system depends on exogenously imposed loss functions, reward signals, or human oversight for terminal evaluation, all such systems fail the threshold. We further show that operational independence scales continuously while evaluative independence is binary, rendering graded "levels of autonomy" frameworks (SAE J3016, DoD taxonomies) conceptually incoherent. We refute thirteen common objections. Three implications follow: legal liability remains with human principals; regulatory language requires reform; and AI safety efforts should target instrument reliability rather than hypothetical agent alignment.
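One minimal symbolic rendering of the ATT, offered as a sketch rather than the paper's own notation (the predicates $\mathrm{Endo}$, for endogenous generation, and $\mathrm{Ov}$, for override authority over the terminal evaluation function $E_S$, are assumed shorthand for the conditions stated above):

\[
\mathrm{Autonomous}(S) \iff \mathrm{Endo}(E_S) \;\wedge\; \neg\,\exists\, x \notin S : \mathrm{Ov}(x, E_S)
\]

The second conjunct expresses architectural closure: no entity outside the system boundary retains the authority to redefine what counts as success for $S$.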