🌎 The future of robotics is Physical AI and we’re thrilled to be at the forefront.
RobCo has been named to the first-ever Physical AI Fellowship, developed by Amazon Web Services (AWS), NVIDIA and MassRobotics. This program gives us direct access to cloud + compute, advanced simulation platforms, and a global robotics ecosystem to help us bring intelligent machines into the real world.
This is a launchpad for the next wave of AI-powered robotics - more to come soon!
https://lnkd.in/eDnPDK-x
The Physical AI Fellowship will accelerate our effort to build our next-generation AI-native robot.
Thanks to AWS, NVIDIA, and MassRobotics for making it happen!
Congrats, RobCo! Now go build a world where software and hardware work in unison: a true Physical AI future where intelligence moves seamlessly from code to machines.
Whoa! NVIDIA AI just unlocked spatial memory for robots with their release of mindmap: Spatial Memory in Deep Feature Maps for 3D Action Policies.
Here is why this is interesting...
Unlike a Roomba, which builds only a 2D map or a lightweight point cloud of its environment, mindmap creates a dense point cloud AND semantically labels the features in it. The robot stays aware of surroundings it is not currently looking at.
For example, in this image it is semantically labeling the workbench, the drill, and the storage boxes. This will improve how efficiently a robot interacts with its environment.
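To make the idea concrete, here is a minimal sketch of what a semantically labeled point map could look like. This is a hypothetical illustration of the concept only, not NVIDIA's mindmap API; all class and method names are invented:

```python
# Hypothetical sketch: a point map where each 3D point carries a semantic label,
# so the robot can recall objects even when they are out of its field of view.
class SemanticPointMap:
    def __init__(self):
        self.points = []   # list of (x, y, z) tuples
        self.labels = []   # parallel list of semantic class names

    def add(self, xyz, label):
        self.points.append(tuple(xyz))
        self.labels.append(label)

    def query(self, label):
        """Return all stored points with the given semantic label,
        regardless of whether the robot is currently observing them."""
        return [p for p, l in zip(self.points, self.labels) if l == label]

# Build a toy map of the scene described above.
m = SemanticPointMap()
m.add((1.0, 0.2, 0.8), "workbench")
m.add((1.1, 0.3, 0.9), "drill")
m.add((2.5, -0.4, 0.1), "storage_box")

print(m.query("drill"))  # the robot can recall where the drill is
```

A real system like mindmap stores dense learned feature maps rather than a flat list, but the payoff is the same: queries against remembered structure, not just the current camera frame.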
NVIDIA is really coming out swinging at CoRL with so many announcements, including two new Cosmos models. What was your favorite announcement?
#physicalai #robotics #computervision
Huge opportunity! $100,000 in GPU credits!
Applications for the Robotics & Physical AI Awards are now open!
You can apply in any of the following categories:
1. Foundation models, robot brains, and runtime
2. Data engines, synthetic data, and simulation
3. Industrial robotics deployment and transformation
4. Vision AI and streaming video analytics
5. Platforms for benchmarking, visualization, and evaluation
The grand prize winners will get $100,000 in high-performance AI compute on the Nebius AI Cloud, accelerated by NVIDIA AI infrastructure.
You can find more information here: https://lnkd.in/e_kvVCA4
At WOW Summit, we showcased a breakthrough in robotics and spatial computing:
Our Unitree G1 humanoid, Terri, autonomously navigated the venue by connecting to an Auki Domain on the real world web.
A step toward scalable, collaborative AI in the physical world.
🤖 NVIDIA released Jetson Thor, a new edge-compute platform built to unlock real-time reasoning for physical AI.
What does this mean in practice? Robots will now be able to process multi-sensor data at lightning speed - making split-second decisions in dynamic, real-world environments. From humanoid robots like Agility Robotics’ Digit and Boston Dynamics’ Atlas, to surgical assistants, smart tractors 🚜, and even search-and-rescue bots 🚑 - Jetson Thor provides the kind of computing power that was previously reserved for server rooms, now available directly on-device.
Read more: https://lnkd.in/dxYvqN2s
#Robotics #AI #Computing #PhysicalAI #Innovation
VIDEO: https://lnkd.in/dXjHp6BG
AI/ML & GenAI Leader | AI Data Center SME for Google Cloud, Azure, AWS | Data Science | DevOps | Cybersecurity | Robotics | Embedded Systems | NLP & Computer Vision | ADAS, SDV
From Weekend Flying to Cloud-Scale Thinking — Fixed-Wing Autonomy Meets GPU Simulation
Last weekend’s “fun at the RC airfield” turned into something much bigger — a live experiment in AI-driven autonomy and GPU cloud simulation.
I built and flight-tested a fixed-wing autonomous drone that flies without GPS, powered entirely by Visual SLAM and onboard AI inference running on a Jetson Orin. The aircraft perceives, maps, and self-corrects in real time: no satellite fix, no remote controller (I still have one just in case!), just edge compute and algorithms doing their thing.
But here's the twist: all of the AI models, mapping logic, and flight profiles were trained, simulated, and stress-tested on Azure NC-series virtual GPUs (NVIDIA A10 v5) before deployment. (If it were not a personal project, I could do that on Parry's fully managed virtual workbench with GPU, which handles most of the provisioning automatically.) That means I could run Isaac Sim and SLAM pipelines in a cloud GPU environment, validate flight scenarios, and push containerized models back down to the Jetson: a complete edge-to-cloud feedback loop from my home lab.
💡 Weekend Project Highlights:
🧠 Jetson Orin + TensorRT for onboard inference
☁️ Azure vGPU (A10 v5) for AI model training + simulation
🧭 Visual SLAM & AI perception stack (ROS 2 + Isaac SDK)
🚀 Fixed-wing platform tuned for GPS-denied flight autonomy
🔄 Real-time synchronization between cloud and physical flight tests
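The feedback loop above can be sketched in a few lines. This is a hedged, toy illustration of the pattern (train in the cloud, validate in simulation, deploy to the edge); every function name and threshold here is invented, and the stand-in "training" is just arithmetic, not the real Isaac Sim / TensorRT pipeline:

```python
# Toy sketch of the edge-to-cloud loop: telemetry goes up, a validated model comes down.
# All names are illustrative stand-ins, not a real API.

def train_in_cloud(flight_logs):
    """Stand-in for training on cloud GPUs (e.g. Azure A10 v5 instances)."""
    # Pretend "training" just averages a correction gain from the logs.
    return {"correction_gain": sum(flight_logs) / len(flight_logs)}

def validate_in_sim(model, scenarios):
    """Stand-in for stress-testing flight profiles in simulation."""
    # Accept the model only if its gain covers every scenario's demand.
    return all(model["correction_gain"] >= s for s in scenarios)

def deploy_to_edge(model):
    """Stand-in for pushing a containerized model down to the Jetson."""
    return f"deployed model with gain {model['correction_gain']:.2f}"

# One iteration of the loop: logs up, model down.
logs = [0.8, 1.0, 1.2]            # telemetry from real flights
model = train_in_cloud(logs)
if validate_in_sim(model, scenarios=[0.5, 0.9]):
    print(deploy_to_edge(model))
```

The gate in the middle is the important part: nothing reaches the aircraft until the simulated scenarios pass.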
This is my kind of R&D, where edge devices meet GPU clouds and a "weekend build" doubles as a live demo of scalable AI architecture.
Projects like this remind me why I love working at the intersection of AI infrastructure, GPU systems, and autonomy. Thank you, NVIDIA, for building the Jetson family of edge AI devices. Now we can build at global scale.
#AIInfrastructure #EdgeAI #JetsonOrin #VisualSLAM #VirtualGPU #Azure #IsaacSim #NVIDIA #AutonomousSystems #WeekendEngineering #GPUComputing #MLOps #CloudArchitecture #PassionProject
1.1 GB -> 172 MB 🔥
Davide Faconti's Cloudini was released to the Foxglove extension registry, which means you can install it in one click now.
Attached is the EuRoC MAV dataset, where compressing the bag using Cloudini shrunk the point cloud size from 1.1 GB to 172 MB(!!), all replaying in Foxglove with no noticeable hit in performance.
If your workflows are PointCloud-heavy, you should start looking into this; links in the comments.
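For context, a quick back-of-the-envelope check on the reported figures (assuming 1 GB = 1024 MB) puts this at roughly a 6.5x reduction:

```python
# Back-of-the-envelope check on the reported compression figures.
original_mb = 1.1 * 1024   # 1.1 GB expressed in MB
compressed_mb = 172

ratio = original_mb / compressed_mb
savings = 1 - compressed_mb / original_mb

print(f"compression ratio: {ratio:.1f}x")   # about 6.5x
print(f"space saved: {savings:.0%}")        # about 85%
```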
#robotics #visualization #sensors
No GPU? No Problem.
I wanted to explore robotics simulation but didn't have a powerful GPU. Instead of giving up, I did some research, launched an AWS cloud instance, installed Isaac Sim, integrated it with ROS 2, and got a working digital twin, all running remotely.
It’s amazing how far cloud tools have come. You can now test, learn, and prototype robotics systems from anywhere without needing expensive hardware.
The STL model and Stage file were created by Nikodem Bartnik on his YouTube channel: https://lnkd.in/eN76yEaX. Huge thanks for sharing great resources with the community.
Next step: 3D-print the parts and have a Jetson Nano running ROS 2 control the robot.
Always learning, always building.
#Robotics #IsaacSim #ROS2 #AWS #DigitalTwin #Simulation #CloudComputing #Engineering #KeepLearning #Nvidia
Music: Light
Musician: Jeff Kaale