Install
1. Gateway: Any Model, One API
Stop juggling API keys. Point any OpenAI-compatible client at inference.hud.ai and use Claude, GPT, Gemini, or Grok.
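Because the gateway speaks the OpenAI chat-completions wire format, any OpenAI-compatible client can talk to it. A minimal standard-library sketch of the request shape, assuming the usual /v1/chat/completions path; the API key and model id are placeholders, so check your hud.ai dashboard for real values:

```python
import json
import urllib.request

# Assumed endpoint path, based on the OpenAI-compatible API shape.
GATEWAY = "https://inference.hud.ai/v1/chat/completions"

payload = {
    "model": "gpt-4o",  # swap in any supported Claude / Gemini / Grok id
    "messages": [{"role": "user", "content": "Hello from the gateway"}],
}

req = urllib.request.Request(
    GATEWAY,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_HUD_API_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
)
# resp = urllib.request.urlopen(req)  # uncomment once you have a real key
print(req.full_url)
```

In practice you would point an existing OpenAI SDK client at the same base URL rather than build requests by hand; the point is that no provider-specific client is needed.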
2. Environments: Your Code, Agent-Ready
A production API is a single live instance with shared state: you can't run 1,000 parallel tests without them stepping on each other. Environments spin up fresh for every evaluation: isolated, deterministic, reproducible. Each one generates training data. Turn your code into tools agents can call, and define scenarios that evaluate what agents do.
3. Evals: Test and Improve
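To make the scenario-and-eval loop concrete, here is a toy sketch. None of these names come from the HUD SDK; the `Scenario` class, `run_agent` stub, and canned outputs are hypothetical stand-ins that only illustrate the shape of running one scenario against several models:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str
    check: Callable[[str], bool]  # did the agent's output pass?

# A toy scenario: the agent must mention a refund.
scenario = Scenario(
    prompt="A customer asks about returns. Reply with our policy.",
    check=lambda out: "refund" in out.lower(),
)

def run_agent(model: str, prompt: str) -> str:
    # Stand-in for a real gateway call; swap in your client here.
    canned = {
        "model-a": "We offer a full refund within 30 days.",
        "model-b": "Please contact support.",
    }
    return canned[model]

# Run the same scenario against each model and compare pass/fail.
results = {
    m: scenario.check(run_agent(m, scenario.prompt))
    for m in ["model-a", "model-b"]
}
print(results)
```

The design point is that the scenario, not the model, owns the pass/fail check, so swapping models changes nothing but the `run_agent` call.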
Run your scenario with different models and compare the results.
4. Deploy and Scale
Push your environment to GitHub, connect it on hud.ai, and run thousands of evals in parallel. Every run generates training data.
Next Steps
Gateway
One endpoint for every model. Full observability.
Environments
Tools, Scenarios, and local testing.
A/B Evals
Variants, groups, and finding what works.
Deploy
Run at scale. Generate training data.