The 7th Amplify Monthly, covering everything from how Temporal Technologies became Temporal to portfolio news and open technical roles you won't want to miss.
Amplify Partners
Venture Capital and Private Equity Principals
Menlo Park, California 6,853 followers
The first investor for technical founders.
About us
Amplify Partners is an early-stage investor for technical founders building the future of developer tools, AI and ML, data, and cybersecurity.
- Website
- www.amplifypartners.com
- Industry
- Venture Capital and Private Equity Principals
- Company size
- 11-50 employees
- Headquarters
- Menlo Park, California
- Type
- Partnership
- Founded
- 2012
- Specialties
- Venture Capital, Entrepreneurship, Cloud Computing, Data and Analytics, Dev Ops, Technical Founders, Startups, Seed Funding, Machine Learning, Information Security, Machine Intelligence, Cyber Security, Infrastructure, and Developer Tools
Locations
- Primary: 800 Menlo Ave, Menlo Park, California 94025, US
Updates
-
Amplify Partners reposted this
Some (many?, most?) AI tools and platforms seem to disregard or misunderstand what drives creative people to create things. Thanks to Sarah Catanzaro and Justin Gage for articulating that point in this excellent article. https://lnkd.in/gZUrfAhp
-
How Vanta deeply integrates domain experts into their evals process: https://lnkd.in/gmcjA6tJ
-
Amplify Partners reposted this
Dev work complete for Amplify Partners 📣 Amplify is a VC firm backing technical founders to scale titans like Datadog, Fastly, and Temporal Technologies. Building on a stunning identity from Fuzzco, we implemented their brand and design across the site. Key features included a dynamic portfolio experience showcasing their companies and founders, reimagined blog and writing sections, and a Startup Legal Hub, all realized in Amplify’s striking but minimalist new brand. On dev-only projects we often add the most value in the least obvious ways. Implementation matters: a great design means nothing if it doesn’t perform. Thanks to Justin Gage and the Amplify team for trusting us to work Webflow magic 🪄
-
Amplify Partners reposted this
How do you build AI *into* tech products? This week's article on Refactoring draws from dozens of interviews with founders and AI operators! Barr Yaron wrote a fantastic piece that puts together everything she learned on her podcast + her work with founders at Amplify Partners. Here are a few takeaways: ↳ 🎯 𝗗𝗼𝗺𝗮𝗶𝗻 𝗲𝘅𝗽𝗲𝗿𝘁𝗶𝘀𝗲 𝗶𝘀 𝘁𝗵𝗲 𝘀𝗲𝗰𝗿𝗲𝘁 𝘁𝗼 𝗴𝗿𝗲𝗮𝘁 𝗲𝘃𝗮𝗹𝘀 — the best AI products integrate subject matter experts directly into engineering workflows, not as occasional reviewers. ↳ 🔄 𝗥𝗲𝗮𝗹 𝘂𝘀𝗲𝗿𝘀 𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝘂𝗹𝘁𝗶𝗺𝗮𝘁𝗲 𝘁𝗲𝘀𝘁 — ship early and learn from actual usage patterns, implicit feedback, and A/B tests; public benchmarks rarely capture how customers actually use your product. ↳ ⚖️ 𝗕𝘂𝗶𝗹𝗱 𝘃𝘀 𝗯𝘂𝘆 𝗶𝘀 𝗮𝗯𝗼𝘂𝘁 𝗰𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀, 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗺𝗼𝗱𝗲𝗹𝘀 — focus on understanding what today's models can do, what tomorrow's will likely solve, and where your unique advantage lives. ↳ 🧠 𝗛𝗶𝗿𝗲 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵𝗲𝗿𝘀 𝘄𝗵𝗼 𝗰𝗮𝗻 𝘀𝗵𝗶𝗽 — if you need research capabilities, look for people who blend algorithmic depth with engineering pragmatism and can adapt quickly. ↳ 🚀 𝗧𝗵𝗲 𝗳𝗮𝘀𝘁𝗲𝘀𝘁 𝗹𝗲𝗮𝗿𝗻𝗲𝗿𝘀 𝘄𝗶𝗻 — AI advantage comes from building organizations that adapt as quickly as the models themselves, with clear strategic bets about where lasting differentiation truly lives. I put the link in the comments!
After dozens of interviews for Barrchives, I wanted to know: what separates AI teams that ship from those that stall? Three themes came up again and again: 🔍 Getting evals right — by integrating domain expertise ⚖️ Build vs buy — when owning the model actually matters 🏗️ Team structure — do you need researchers, engineers, or both? Everyone talks about shipping AI products. These are the people actually doing it. I wrote more about what I learned as a guest on Refactoring — link in the comments.
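The "domain experts directly in the evals loop" takeaway can be sketched as a tiny harness where an SME-curated golden set grades model outputs on every run. This is a minimal illustration, not any specific company's setup; the questions, answers, and function names are all hypothetical:

```python
# Minimal sketch of an eval harness that bakes subject-matter-expert
# judgments directly into the test loop. All names and data below are
# hypothetical illustrations.

def model_answer(question: str) -> str:
    """Stand-in for a real model call."""
    canned = {
        "Does SOC 2 require encryption at rest?": "yes",
        "Is MFA mandatory under SOC 2?": "no",
    }
    return canned.get(question, "unknown")

# SME-curated golden set: questions paired with expert-approved answers.
# Experts edit this file the same way engineers edit tests.
GOLDEN_SET = [
    {"question": "Does SOC 2 require encryption at rest?", "expected": "yes"},
    {"question": "Is MFA mandatory under SOC 2?", "expected": "no"},
]

def run_evals(golden_set) -> float:
    """Fraction of expert-labeled cases the model gets right."""
    passed = sum(
        1 for case in golden_set
        if model_answer(case["question"]) == case["expected"]
    )
    return passed / len(golden_set)

if __name__ == "__main__":
    print(f"pass rate: {run_evals(GOLDEN_SET):.0%}")
```

The design point is that the golden set is a first-class artifact owned by the experts, gating releases like any other test suite, rather than an occasional spot-check.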
-
The acclaimed Barr Yaron has interviewed 20+ leaders on how to successfully build AI into your products, nicely wrapped up into a few lessons. Give it a read.
-
Amplify Partners reposted this
Today we’re introducing Tahoe-x1 (Tx1), a 3-billion-parameter single-cell foundation model that learns unified representations of genes, cells, and drugs, open-sourced on Hugging Face.

The same 15-person team (10 until a couple of weeks ago) that built the Tahoe-100M dataset to address the data challenge in scaling AI models in cell biology has now built Tx1, the largest and first compute-efficient model at this scale trained on perturbation-rich data. And true to the original spirit, we are releasing it open source with open weights (see comments).

Built on our Tahoe-100M dataset, Tx1 is over 10× more efficient to train than most other cell-state models. We’re releasing Tx1 together with new benchmarks we designed to assess performance on cancer-relevant and drug-discovery tasks, where Tx1 achieves state-of-the-art results.

Tx1 makes it possible, for the first time, to systematically search for better architectures at the billion-parameter scale and to explore whether the scaling laws that transformed language and protein modeling can now do the same for cell biology.
-
An inside look into the insane PB-scale data curation pipelines DatologyAI has built:
Deduplicate your petabyte-scale multimodal dataset in an air-gapped cluster using this one weird trick* (Researchers h̶a̶t̶e̶ love him.) *may require 6 or 7 weird tricks The folks at Amplify sat down with the engineering team at DatologyAI to talk about how we curate a non-trivial fraction of the internet for training foundation models and you should read it if that is the kind of thing you dine out on-- link is in the comments!
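One-weird-trick jokes aside, the core primitive behind most large-scale dedup pipelines is sketch-based near-duplicate detection. A stdlib-only MinHash toy (an illustration of the general technique, not DatologyAI's actual pipeline, which the article says layers several more tricks on top) looks roughly like:

```python
# Toy near-duplicate detection via MinHash signatures: two documents'
# signatures agree in roughly as many slots as their shingle-set Jaccard
# similarity. Real PB-scale systems add LSH banding, sharding, and
# distributed execution; this sketch is stdlib-only.
import hashlib

NUM_HASHES = 64  # signature length; more hashes = tighter estimate

def shingles(text: str, k: int = 5) -> set[str]:
    """Character k-grams of a document."""
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def minhash(text: str) -> list[int]:
    """For each seeded hash function, keep the minimum shingle hash."""
    sig = []
    for seed in range(NUM_HASHES):
        best = min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingles(text)
        )
        sig.append(best)
    return sig

def similarity(a: str, b: str) -> float:
    """Fraction of matching signature slots, approximating Jaccard."""
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / NUM_HASHES

doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox jumped over the lazy dog"  # near-duplicate
doc3 = "completely unrelated text about file formats"

if __name__ == "__main__":
    print(similarity(doc1, doc2), similarity(doc1, doc3))
```

Because signatures are small and fixed-size, candidate pairs can be found without ever comparing raw documents, which is what makes the approach viable at petabyte scale.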
-
Amplify Partners reposted this
Vortex is the fastest open-source columnar file format, full stop, by every metric. Out of the box, Vortex is 5x faster to write, 20x faster to scan, and offers 100x faster random access than the best Parquet implementations while maintaining the same size; 2x faster than the MDS format for GPU data loading; faster than Lance for both random access and scans while being 5x smaller & faster to write. For ClickBench on NVMe, Vortex is even faster than DuckDB's native format when queried from DuckDB. And now, Vortex has experimental support for direct GPU on-device decompression, promising further dramatic speedups for AI & GPGPU workloads. Best of all, it's Apache-2.0 licensed, neutrally governed at the Linux Foundation (LF AI & Data Foundation), and FOSS Forever. But what few people understand is how extensible & customizable Vortex is. Last week, I gave a talk to CMU Database Group's Future Data Systems seminar series on how Vortex is really the "LLVM of File Formats", obviating the need for custom or specialized file formats entirely. Check out the recording here: https://lnkd.in/eQzjbwfV
Vortex: LLVM for File Formats (Will Manning)