Coding for the Agentic World

The future of software development is here, and it’s agentic.

We’re moving beyond simple chatbots to AI systems that can plan, execute, and collaborate—transforming how we build software. Join us for an intensive exploration of the tools, workflows, and architectures defining this next era of programming.

Building on the success of our first AI Codecon event, we’re planning even more content, and even more participants, for September. The primary conference track will be arranged much like the May event: a curated collection of fireside chats with senior technical executives, brilliant engineers, and entrepreneurs; practical talks on the new tools, workflows, and hacks shaping the emerging discipline of agentic AI; and demos of how experienced developers are using these tools to supercharge their productivity, build innovative applications, and rethink user interfaces. In addition, we’ll offer a suite of in-depth tutorials on separate days both before and after the main conference so that you can go deeper if you want. We’re also planning a separate event, O’Reilly Demo Day, to demonstrate through compelling, fast-paced demos how MCP is creating the architecture of participation for AI systems. We’d love for you to attend.

In the September 9 AI Codecon, our primary focus will be on four critical frontiers of agentic development:

  • Agentic interfaces: Moving beyond chat UX to sophisticated agent interactions
  • Tool-to-tool workflows: How agents chain across environments to complete complex tasks
  • Background coding agents: Asynchronous, autonomous code generation in production
  • MCP and agent protocols: The infrastructure enabling the agentic web

We guarantee you’ll walk away with practical knowledge drawn from real-world case studies and hands-on demos of cutting-edge tools. Register now to save your seat.

Schedule

We’re still working on finalizing the schedule for this event. Please check back closer to the event date for more information.

  1. Introduction 10 min

    Tim welcomes you to Coding for the Agentic World.

  2. Part 1: Agentic Interfaces
  3. Why Centralized AI Is Not Our Inevitable Future 15 min

    Tim O’Reilly has been writing and speaking about the need for “an architecture of participation” for AI, and how the endgame is not a single dominant platform. Alex Komoroske, CEO of Common Tools, proposes a path to that future, which includes data sovereignty, open ecosystems, and personalized AI that works exclusively for the user. He’ll explore the structural dangers of a centralized AI future and present a compelling case for a distributed, pluralistic AI ecosystem where human agency and privacy are paramount. His goal is to encourage developers to build tools that empower individuals rather than a few large companies.

  4. Fireside Chat 20 min

    NotebookLM was one of the first breakout successes beyond simple chatbots and coding tools. Besides the originality of some of the things it does, it’s a great illustration of how to rethink existing apps and services to give them a truly useful AI-native interface. Josh Woodward, who leads Google Labs and the Gemini app, shares the origin story of NotebookLM and the lessons developers might take from it. And of course, Josh and Tim are going to talk about what’s next.

  5. Context Engineering 15 min

    In Andrej Karpathy’s words, context engineering is “one small piece of an emerging thick layer of non-trivial software” that powers real LLM apps. It’s the evolution of prompt engineering, reflecting a broader, more system-level approach. Addy Osmani, engineering leader at Google, explores this practical, systems-based approach to structuring all the information an LLM needs to perform reliably and usefully. You’ll hear how you can dynamically assemble task-specific context using data, instructions, tools, and memory, and how good input design translates to better, more trustworthy output.
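To make the idea concrete, here is a minimal sketch of dynamic context assembly. It is not from Addy’s talk, and every name in it is hypothetical; it simply illustrates combining instructions, retrieved data, tool specs, and memory into a single, budget-limited input:

```python
# Minimal sketch of context engineering (all names hypothetical):
# build the LLM's input from instructions, tool specs, recent memory,
# and retrieved data, trimmed to a character budget in priority order.

def assemble_context(task, documents, tools, memory, max_chars=4000):
    """Concatenate context sections in priority order, truncating to budget."""
    sections = [
        ("INSTRUCTIONS", f"You are a coding assistant. Task: {task}"),
        ("TOOLS", "\n".join(f"- {name}: {desc}" for name, desc in tools.items())),
        ("MEMORY", "\n".join(memory[-3:])),   # keep only the most recent turns
        ("DATA", "\n---\n".join(documents)),  # retrieved snippets, lowest priority
    ]
    out, remaining = [], max_chars
    for title, body in sections:
        if remaining <= 0:
            break
        block = f"## {title}\n{body}\n"
        out.append(block[:remaining])         # hard-truncate at the budget
        remaining -= len(block)
    return "".join(out)

ctx = assemble_context(
    task="fix the failing unit test",
    documents=["def add(a, b): return a - b  # bug"],
    tools={"run_tests": "execute the test suite"},
    memory=["user: the CI build is red"],
)
```

Real systems replace the static priority list with retrieval, scoring, and token-level (not character-level) budgeting, but the assembly step itself looks much like this.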

  6. Death of the Browser 15 min

    The internet has turned into a maze of walled gardens, ad-powered content, and algorithmic manipulation. AI agents are the key to putting users back in the driver’s seat. React team alum and web standards advocate Rachel-Lee Nabors explores how agents can create interfaces that adapt to a user’s needs and preferences while providing privacy and accessibility features that traditional browsers can’t match. Rachel-Lee also discusses the technical implications of this shift and explores emerging and existing technologies and methods for direct content distribution and access.

  7. Break 5 min
  8. Part 2: Tool-to-Tool Workflows
  9. Session to Come 20 min

    Please check back for more information.

  10. The Hitchhiker’s Guide to AI Coding Agents (Don’t Panic!) 15 min

    In the occasionally bewildering universe of AI-powered coding tools, a new wave of agentic assistants is charting fresh territory for developers. From Claude Code and Codex CLI to Junie and Gemini CLI, these agents promise to navigate codebases, automate tasks, and even refactor your work. Ken Kousen, author of several O’Reilly books and courses, introduces the key players, shows quick demos of what they can (and can’t) do, and shares tips on making them work for you, not the other way around.

  11. Beyond Code Generation: Getting Real Work Done with AI Agents 15 min

    AI tools can write code now. That’s cool. But what about everything else? Triaging issues, writing tests, opening PRs, updating docs: This is the stuff we should be delegating to AI. Angie Jones, global VP of developer relations at Block Inc., shows you what happens when agents step out of the IDE and actually start doing work. No flashy prompts—just practical, repeatable workflows where agents take real action across environments and tools, and collaborate with other agents and models to get complex tasks done faster. If you’re using AI to help you write code, great. Now let it help you ship.

  12. Your AI Agents Don’t Need You Anymore: Building Self-Organizing, Decentralized AI Agent Infrastructure 5 min

    AI and big data technologist Amit Rustagi presents a cutting-edge architectural framework for decentralized, self-organizing AI agent systems, leveraging WebAssembly (Wasm) and IPFS to enable adaptive autonomy beyond traditional orchestration. By combining Wasm’s portable, sandboxed execution with IPFS’s decentralized storage and content addressing, the framework facilitates peer-to-peer agent coordination, evolutionary specialization, and trustless collaboration without centralized oversight. Amit also critically examines the trade-offs: Wasm’s limited hardware acceleration, IPFS latency in real-time scenarios, and the ethical implications of fully autonomous credit economies. Designed for the postcloud era, this framework offers a modular, evolution-ready infrastructure—combining the latest in decentralized compute, storage, and cryptography—while prompting urgent dialogue on safety, accountability, and the limits of emergent machine autonomy in production systems.

  13. Part 3: Background Coding Agents
  14. Fireside Chat 20 min

    Join Kathy Korevec and Addy Osmani for an inside look at how the AIDA team at Google Labs is building at the intersection of now and next. They’ll explore the making of Jules—an AI-powered developer tool designed not just to add intelligence, but to remove friction and unlock entirely new creative workflows. Learn how the team balances rapid AI evolution with thoughtful, high-integrity product design, crafting tools that anticipate developer needs without getting in the way. It’s a messy, nonlinear process—but one rooted in vision, precision, and a deep respect for developers.

  15. Break 5 min
  16. Agentic Web: MCP, A2A, MIT Project NANDA, and Beyond 10 min

    The Model Context Protocol (MCP), Agent2Agent (A2A), NLWeb, and MIT’s Project Nanda have sparked a new wave of excitement around the agentic web, setting the stage for rapid innovation and collaboration. Open protocols like MCP and A2A are enabling AI agents to interoperate seamlessly, making it easier for developers and enterprises to build powerful, agent-driven applications. But what are the opportunities in the next phases? How will agentic commerce evolve? What will be the emergent intelligence in agentic societies? Ramesh Raskar, an associate professor at MIT Media Lab, considers these questions in relation to Project Nanda, which is architecting the building blocks for the internet of AI agents in three phases—foundations, commerce, and societies.

  17. The Team of Tomorrow: Agents, Alignment, and Team Ritual 5 min

    Please check back for more information.

  18. From Proof to Platform: Scaling Agentic AI in a Fortune 300 Software Enterprise 5 min

    As agentic AI systems mature, the challenge is no longer proving they can work, but proving they can scale. Leidos is working to embed agentic capabilities into the day-to-day flow of software development across a global workforce of thousands. Brennon Bortz, chief engineer for corporate R&D at Leidos, describes how the company identified high-leverage opportunities for agentic automation within the SDLC, structured pilots to compare agentic and nonagentic performance, integrated tools into secure, developer-friendly platforms, managed adoption across a risk-sensitive enterprise, and defined new success metrics and evolving roles to support long-term change.

  19. Part 4: MCP and Agent Protocols
  20. Designing for AI Agents: MCP 15 min

    These days, software isn’t easy for people to use until it’s easy for their AI agents to use. That means we need a great MCP. Model Context Protocol servers give AI agents fingers to feel around and act in the world, one tool at a time. Jessica Kerr, engineering manager of developer relations at Honeycomb, shares design considerations from the Honeycomb MCP, contrasts MCP design with human UX and software APIs, and discusses ways to tell whether the MCP is effective in production.
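For readers new to MCP, the protocol’s core shapes are plain JSON: a server advertises tools via `tools/list`, and an agent invokes one via a `tools/call` JSON-RPC request. The sketch below shows those shapes with a hypothetical observability tool; the `query_traces` name is illustrative and not Honeycomb’s actual MCP:

```python
# Shape of an MCP tool definition, per the Model Context Protocol spec.
# The "query_traces" tool is hypothetical, for illustration only.

tool = {
    "name": "query_traces",
    "description": "Run a query against recent traces and return matching spans.",
    "inputSchema": {  # JSON Schema describing the tool's arguments
        "type": "object",
        "properties": {
            "service": {"type": "string"},
            "since_minutes": {"type": "integer", "default": 60},
        },
        "required": ["service"],
    },
}

# JSON-RPC request an agent sends to invoke the tool:
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": tool["name"],
        "arguments": {"service": "checkout", "since_minutes": 30},
    },
}
```

Much of the design work Jessica describes lives in these few fields: a tool’s name, description, and schema are the entire interface an agent “feels” with, so they have to carry far more intent than a human-facing API would.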

  21. Blender MCP 15 min

    Blender MCP is a tool that allows AI language models like Claude to control Blender through the Model Context Protocol, reducing the complexity of 3D modeling from hours to minutes by letting users create scenes, assets, and animations through natural language prompts. The project gained significant traction with 12.8K GitHub stars and over 280K downloads. Siddharth Ahuja, creator of Blender MCP and Ableton MCP, explains how Blender MCP was made and how MCPs for creative tools enable anyone to be a creator. Examples include generating terrain, creating animated scenes, and even combining 3D modeling with music creation through an Ableton MCP integration.

  22. Fireside Chat 20 min

    Please check back for more information.

  23. Closing Remarks 5 min

    Tim and Addy close out today’s event.
