Abstract
Artificial General Intelligence (AGI) is often conceived as a self-contained system whose generality derives from increasingly powerful internal architectures. This paper challenges that assumption by developing an alternative, extended conception of AGI inspired by theories of active externalism and the extended mind. On this view, general intelligence is grounded not primarily in internal computation, but in a system’s capacity to construct and exploit extended mechanisms that incorporate environmental resources into its cognitive routines. We illustrate this idea with reference to the capabilities of large language models (LLMs), focusing on linguistic self-stimulation, tool use, and the creation of ad hoc computational tools. Across these cases, language plays a distinctively important role in enabling LLMs to leverage extended mechanisms that augment their cognitive and computational performance.