Together AI is a research-driven AI cloud infrastructure provider. Our purpose-built GPU cloud platform empowers AI engineers and researchers to train, fine-tune, and run frontier-class AI models. Our customers include leading SaaS companies such as Salesforce, Zoom, and Zomato, as well as pioneering AI startups like ElevenLabs, Hedra, and Cartesia. We advocate for open-source AI and believe that transparent AI systems will drive innovation and create the best outcomes for society.
251 Rhode Island St
Suite 205
San Francisco, California 94103, US
The energy in NYC was electric! 🏁 Thank you to everyone who joined us for our Executive AI Cocktail Hour, hosted by Together AI in collaboration with Runware and Bria AI. Industry leaders discussed effective approaches to deploying AI solutions, and our SVP of Strategic Partnerships, Arielle Fidel, highlighted Together’s track record powering pioneers on the AI frontier! We appreciate the conversations, and thank you for attending! If you couldn’t make it, keep an eye out for where we’ll be next: https://bit.ly/3J3UXOw
At Together, we know startups are at the forefront of building transformative, AI-native apps, which is why we’re thrilled to announce the Together AI Startup Accelerator, offering full support to help AI-native startups confidently build and scale their apps! The program offers:
📈 GTM support
💳 Platform credits
🧠 Engineering expertise
👥 Peer community access
Our inaugural cohort is well underway, so if you’re ready to take your AI startup to the next level, learn why our accelerator is the perfect resource for you.
Together AI has been recognized as a LinkedIn Top Startup! We’re honored to receive this recognition and want to thank the people behind the technology: our world-class employees. Their unwavering dedication to our customers, commitment to cutting-edge AI research, and passion for what they do have enabled us to deliver a platform purpose-built for AI-native companies. As one team member said, “I joined Together AI because infrastructure companies define eras. Look at AWS, look at Stripe - the platforms that enable builders end up shaping entire industries. Together is doing that for AI: world-class models, infrastructure that scales, and tools that let developers actually ship.” As we look ahead, we remain committed to pushing the boundaries of innovation, and we appreciate the continued support from our community of employees, partners, and customers. #LinkedInTopStartup
Imagine your LLM inference automatically getting faster in production (by up to 400%!). 🆕 Enter ATLAS: a not-so-traditional speculator that adapts to your workload as it evolves. The more you use it, the better it performs. The results speak for themselves:
⚪ 500 TPS on DeepSeek-V3.1
⚪ Up to 4x faster inference vs. baseline
⚪ Up to nearly 2x faster than our Turbo speculator
Thanks to our Turbo research team, ATLAS offers a fundamental reassessment of the way inference platforms are designed to work and evolve. Swipe for more ➡️➡️➡️
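For readers wondering what a "speculator" actually does, here is a minimal, self-contained Python sketch of draft-and-verify speculative decoding. It is a conceptual illustration only, not Together's ATLAS implementation (ATLAS additionally adapts the speculator to the live workload); the toy "models", function names, and acceptance rule are all assumptions made for the example.

```python
# Conceptual sketch of draft-and-verify speculative decoding (NOT Together's
# ATLAS implementation). The toy "models" below are deterministic stand-ins.

def target_next_token(prefix):
    """Expensive 'target model': a deterministic toy rule for the next token."""
    return (prefix[-1] * 7 + len(prefix)) % 100

def draft_tokens(prefix, k):
    """Cheap 'speculator': same rule, but it never updates the length term,
    so its guesses are only sometimes right (like a small draft model)."""
    out, last = [], prefix[-1]
    for _ in range(k):
        last = (last * 7 + len(prefix)) % 100
        out.append(last)
    return out

def speculative_step(prefix, k=4):
    """Propose k cheap draft tokens, keep the verified ones, then append one
    token from the target model so every step makes progress."""
    accepted = []
    for tok in draft_tokens(prefix, k):
        if tok == target_next_token(prefix + accepted):
            accepted.append(tok)      # draft matched the target: a "free" token
        else:
            break                     # first mismatch: stop accepting drafts
    accepted.append(target_next_token(prefix + accepted))
    return prefix + accepted          # 1 to k+1 new tokens per step

if __name__ == "__main__":
    seq = [1, 2, 3]
    for _ in range(5):
        seq = speculative_step(seq)
    print(seq)
```

In a real serving stack the target model verifies all k drafts in one batched forward pass (the per-token loop above is only for readability), so the better the speculator predicts the workload, the more tokens get accepted per pass and the faster generation runs.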
Bay Area builders, are you free Oct. 16th? 👀 We’re kicking down the doors and revealing the latest on LLMs during two can’t-miss talks. Speakers Zain Hasan and Federico Bianchi will cover the basics of building LLM-powered agents and offer a few tips and tricks to help you cross the finish line. Building, researching, or just curious about the next wave of AI? This is your chance. 🧠 Join us for an exclusive deep dive into the cutting edge of LLMs and autonomous agents.
AI builder & teacher | AI/ML @ Together AI | ℕΨ Engineering @ UofT | Lecturer | ex-Vector DBs, Data Scientist, Health Tech Founder
We’re hosting a special deep dive into AI agents for AI builders, researchers, and anyone obsessed with the next frontier of intelligent systems. Join us for an in-depth session on LLM-powered AI agents: not just theory, but the real mechanics behind how these systems reason, plan, and act in the world. You’ll hear from the Together AI team, including James Zou, Federico Bianchi, and myself, who will break down:
🧩 The Anatomy of an LLM Agent: understanding how agents translate natural language into action through memory, tool use, and goal pursuit.
⚙️ Building AI Agents from Scratch: designing autonomous systems that reason, reflect, and improve across complex workflows.
What to expect:
🔹 Practical frameworks for building and scaling LLM-powered agents
🔹 A chance to connect with developers and researchers pushing AI forward
🔹 Insights from the Together AI Frontier Agents team
Date: Oct 16th | Time: 5–8 PM
Food, drinks, and great conversations included!
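To make "reason, plan, and act" concrete, here is a tiny, self-contained Python sketch of the loop most LLM agents run: the model reads a scratchpad memory, picks a tool call, observes the result, and repeats until it can answer. The stub llm function, the tool registry, and the stopping convention are illustrative assumptions, not the frameworks covered in the session.

```python
# Toy sketch of the reason -> act -> observe loop behind an LLM agent.
# The "llm" below is a stub standing in for a real model call; tool names
# and the ACTION/FINAL convention are illustrative assumptions.

def llm(prompt: str) -> str:
    """Stub policy: a real agent would call a hosted model here."""
    if "Observation: 408" in prompt:
        return "FINAL: 17 * 24 = 408"
    return "ACTION: multiply 17 24"

def multiply(a: str, b: str) -> str:
    return str(int(a) * int(b))

TOOLS = {"multiply": multiply}

def run_agent(task: str, max_steps: int = 5) -> str:
    memory = [f"Task: {task}"]                        # scratchpad memory
    for _ in range(max_steps):
        decision = llm("\n".join(memory))             # reason over the scratchpad
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        _, tool, *args = decision.split()             # e.g. "ACTION: multiply 17 24"
        observation = TOOLS[tool](*args)              # act with a tool
        memory.append(f"Observation: {observation}")  # remember the result
    return "gave up"

print(run_agent("What is 17 * 24?"))
```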
What a morning at #SFTechWeek! 🏁 Huge thanks to everyone who joined our AI Builders & Innovators Brunch with Runware and Kling AI. It was great connecting in person and diving into how AI-native companies are shaping the future. Our VP of Customer Success, Sarung Tripathi, shared insights on how:
☁️ Together is built as the AI-native cloud for multimodal workloads
🧠 Companies like Cursor use Together to solve some of AI’s biggest challenges
⚒️ Teams can train, fine-tune, and run inference all on Together’s AI-native cloud
The energy and conversations in the room were incredible. Thanks for being part of it! If you couldn’t make it, keep an eye out for where we’ll be next: https://bit.ly/3J3UXOw
We're excited to host Apriel-1.5-15b-Thinker from ServiceNow's SLAM Lab on Together AI. 🚀 This 15B multimodal reasoning model delivers performance competitive with models 10x its size, including DeepSeek-R1 and Gemini-2.5-Flash. Key highlights:
🗣️ Scores 68 on Tau2 Bench telecom
⬆️ 62 on IFBench
🧮 87 on AIME 2025
⚡ 131K context window
👁️ Text and image reasoning
💪 Single-GPU deployment
Srinivas Sunkara, Sathwik Tejaswi Madhusudan, and ServiceNow AI Research built Apriel-1.5-15b-Thinker through innovative mid-training techniques without reinforcement learning. This demonstrates that thoughtful training methodology can deliver frontier-level performance without massive computational resources. Developers can access Apriel-1.5-15b-Thinker on Together AI's highly scalable, reliable, and cost-efficient platform. Access it through the link in the comments 👇
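For developers who want to try a hosted model like this, here is a minimal sketch using the Together Python SDK (pip install together). The model string is a placeholder assumption; check the model page for the exact identifier, and set TOGETHER_API_KEY in your environment first.

```python
# Hypothetical sketch of querying a hosted model via the Together Python SDK.
# The model identifier below is a placeholder; use the one on the model page.
from together import Together

client = Together()  # reads TOGETHER_API_KEY from the environment

response = client.chat.completions.create(
    model="ServiceNow-AI/Apriel-1.5-15b-Thinker",  # assumed identifier
    messages=[
        {"role": "user", "content": "Walk through 17 * 24 step by step."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```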
It's exciting to see SemiAnalysis launch the new InferenceMAX benchmark, and congratulations to NVIDIA on the outstanding results achieved with the NVIDIA Blackwell Platform. At Together AI, our customers are already seeing exceptional performance running large-scale inference and training workloads on NVIDIA Blackwell GPUs. Our solutions architects work closely with AI-native companies to help them unlock even more inference performance from their hardware through applied research and optimized kernels, balancing throughput and interactivity to maximize token generation while maintaining an outstanding user experience. Contact us to request a GPU cluster accelerated by the NVIDIA Blackwell platform: https://bit.ly/3IRAg8D
📣 NVIDIA Blackwell sets the standard for AI inference on SemiAnalysis InferenceMAX. Our most recent results on these independent benchmarks show NVIDIA's Blackwell platform leading in AI factory ROI. See how the NVIDIA Blackwell GB200 NVL72 can yield $75 million in token revenue over three years for DeepSeek R1. Learn more: https://nvda.ws/43aEpv2
ATLAS delivers up to 400% faster inference by learning in real time. 🚀 From today's coverage on VentureBeat: "The shift from static to adaptive optimization represents a fundamental rethinking of how inference platforms should work." We couldn't agree more! "The software and algorithmic improvement is able to close the gap with really specialized hardware," says Tri Dao, Founder and Chief Scientist at Together AI. "We were seeing 500 tokens per second on these huge models that are even faster than some of the customized chips." Read more (link in comments 👇)