Discover the new learning path from NVIDIA Training on deep learning. Gain hands-on experience with the latest tools and frameworks to build, optimize, and deploy real-world AI solutions. ➡️ https://nvda.ws/4o6neDl Advance your career and drive impactful projects with practical deep learning skills.
About us
Explore the latest breakthroughs made possible with AI. From deep learning model training and large-scale inference to enhancing operational efficiencies and customer experience, discover how AI is driving innovation and redefining the way organizations operate across industries.
- Website
- http://nvda.ws/2nfcPK3
- Industry
- Computer Hardware Manufacturing
- Company size
- 10,001+ employees
- Headquarters
- Santa Clara, CA
Updates
-
🎓 How are college students and educators embracing AI to unlock new ways of learning? On this week’s NVIDIA AI Podcast, Dr. Seher Awan, Ed.D., MBA, MPA, President of Mission College in Silicon Valley, shares how AI is helping educators:
🧠 Streamline student support with agentic AI
📚 Create accessible AI learning opportunities for all students
🤝 Prepare the workforce of tomorrow through strategic partnerships
🎧 Listen now: https://nvda.ws/4hKAIlP
-
🎉 Congrats to Alán Aspuru-Guzik on this prestigious recognition.
Congratulations to Alán Aspuru-Guzik (AC director, #UofT professor, and NVIDIA senior director), who has been selected as a 2025 #AI2050 Senior Fellow by Schmidt Sciences. The prestigious fellowship honours researchers advancing responsible innovation and the development of AI that benefits humanity. This year’s cohort includes 28 researchers from eight countries who will receive more than $18 million to advance their research in beneficial AI. Learn more: https://lnkd.in/gNntpSFU Department of Computer Science, University of Toronto, Department of Chemistry at the University of Toronto, University of Toronto Faculty of Arts & Science, Vector Institute, CIFAR.
-
ML/AI research engineer. Author of Build a Large Language Model From Scratch (amzn.to/4fqvn0D) and Ahead of AI (magazine.sebastianraschka.com), on how LLMs work and the latest developments in the field.
The DGX Spark for local LLM inference and fine-tuning has been a popular topic lately. Having just returned from the PyTorch conference, I finally had a chance to collect some first impressions, numbers, and takeaways.

Most people use it for running local LLMs through tools like Ollama, which is something I already did on my Mac Mini. The DGX feels similar in that setup, but with its 128 GB of unified memory it can handle larger models beyond the gpt-oss-20B I usually run. Both the DGX Spark and Mac Mini M4 Pro reach around 45 tok/sec in Ollama with mxfp4 precision for gpt-oss-20B.

What interests me more is using it as a local development machine for PyTorch projects. Below are a few results comparing it to my Mac Mini, H100 (6 times as expensive), and A100 (2.5 times as expensive) setups. By the way, if you want the best performance out of the Mac, you probably want to use Apple's MLX library; but I am a PyTorch user after all.

1) Inference with a 0.6B model
I ran a 0.6B model that I implemented in pure PyTorch with an optional KV-cache. The DGX Spark easily outperformed the Mac Mini. I could not run compiled versions on the Mac GPU due to MPS issues, which still limit PyTorch performance and usability there.

2) MATH-500, base vs. reasoning model
For 500 sequential math prompts, the DGX Spark even outperformed the 6 times more expensive H100. I skipped this test on the Mac Mini since I was worried about running it at 100°C for more than 3 hours. The DGX gets hot too, but I assume it's built for workloads like this. The DGX was also kindly given to me by NVIDIA and is not my main machine with important data on it.

2b) Batched inference (batch size 128)
With batching, the H100 clearly pulls ahead, likely due to the DGX's memory-bandwidth limits. The DGX is great for single-sequence generation but not for large batch runs.

3) Training and fine-tuning (355M model)
During short pre-training and SFT/DPO fine-tuning runs, the DGX and A100 were both much faster than my Mac Mini, with the A100 maintaining its lead.

Conclusion
Overall, it is a small, quiet machine that fits nicely next to my Mac Mini. It is far more pleasant to keep in an office than the loud Lambda workstation I once had to move into a server room at UW-Madison. I also like that I can connect to the DGX directly from my Mac without setting up SSH tunneling or attaching a separate monitor, mouse, and keyboard. Sure, the DGX Spark cannot match a 6 times more expensive H100, but it's really nice for local prototyping and development before larger training runs. It effectively replaces my Mac Mini for heavier experiments where I previously had heat concerns, and it provides full CUDA support, which is a big plus for PyTorch since MPS on the Mac still feels experimental. In short, as long as you’re not expecting miracles, like replacing A100 or H100 cloud GPUs for full-scale LLM training, it’s a great option for inference and small fine-tuning runs at home.
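The post mentions a 0.6B model "implemented in pure PyTorch with an optional KV-cache." As a rough illustration of why a KV-cache speeds up single-sequence decoding, here is a minimal, framework-free sketch (toy single-head attention over plain Python lists, not the author's actual model): without a cache, keys and values for the entire prefix are recomputed at every step, so total work grows quadratically, while a cache appends one K/V entry per token.

```python
import math

def attention(q, K, V):
    # Scaled dot-product attention for a single query vector q
    # over lists of key vectors K and value vectors V.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w = [e / z for e in exps]            # softmax weights
    return [sum(wi * v[j] for wi, v in zip(w, V)) for j in range(d)]

def decode_no_cache(tokens, embed, proj):
    # Recompute K and V for the whole prefix at every step: O(n^2) total work.
    outs = []
    for t in range(1, len(tokens) + 1):
        prefix = tokens[:t]
        K = [proj(embed(tok)) for tok in prefix]
        V = [embed(tok) for tok in prefix]
        outs.append(attention(proj(embed(prefix[-1])), K, V))
    return outs

def decode_with_cache(tokens, embed, proj):
    # Append one cached K/V entry per step: O(n) attention work per new token.
    K, V, outs = [], [], []
    for tok in tokens:
        K.append(proj(embed(tok)))
        V.append(embed(tok))
        outs.append(attention(K[-1], K, V))
    return outs
```

Both paths produce identical outputs; real implementations cache per-layer tensors on the GPU, but the asymptotics are the point: cached decoding avoids reprocessing the whole prefix for every generated token, which is exactly what makes single-sequence generation memory-bandwidth-bound rather than compute-bound.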
-
We’re excited to sponsor Anyscale's #RaySummit ⭐ Today, Dr. Jim Fan took part in the keynote speech, and we connected with so many talented AI/ML builders to share hands-on insights. Explore how the NVIDIA Developer Program is driving open-source innovation and powering the future of AI ➡️ https://nvda.ws/47ngN9g
-
NVIDIA AI reposted this
NVIDIA CEO Jensen Huang has been awarded the Professor Stephen Hawking Fellowship by the Cambridge Union Society and the Hawking family—recognizing his work in advancing science and inspiring the next generation of technologists and researchers. 🔗 https://nvda.ws/43bYNfv
-
🎓 Ready to earn your NVIDIA Generative AI Certification? Watch the on‑demand webinar to explore exam prep strategies, sample questions, and expert insights to help you ace the new professional‑level Agentic AI and Generative AI LLM certifications. ➡️ Watch now: https://nvda.ws/4nJK7M1
-
The Kempner Institute at Harvard University is using NVIDIA DGX Spark to unravel the genetic mysteries behind epilepsy—right at the lab bench. 🔬 Discover how AI-powered, real-time analysis is accelerating neuroscience research and driving breakthroughs in brain health 👉 https://nvda.ws/47Gjr8Q
-
NVIDIA AI reposted this
We've had an incredible couple of days at #NVIDIAGTC in Washington, D.C. Our President and CEO, Justin Hotard, shares more on how we're shifting from connecting people to connecting intelligence through our new partnership with NVIDIA. Read more: https://nokia.ly/3WwAxRv #Nokia
-
Join NVIDIA Nemotron experts live as we dive into what’s new for developers following GTC DC. We’ll explore how the latest advancements in agentic AI are making it easier to build intelligent, multimodal systems that see, reason, and act with greater precision and safety. In this livestream, we’ll:
- Unpack the Nemotron open models, open datasets, and techniques that help you create domain-specialized AI agents ready for production.
- Discover how advanced reasoning, multimodal perception, and retrieval systems can drive the next wave of AI innovation.
We’d love to hear your thoughts and experiences as you experiment with the Nemotron family of models; your feedback helps us improve performance, usability, and developer workflows across the ecosystem. Tune in for a live walk-through and discussion designed to help developers push the boundaries of what AI agents can achieve: efficiently, safely, and at scale.
Build Specialized AI Agents: Post-GTC Developer Deep Dive
www.linkedin.com