Nvidia unleashes physical AI models and Jetson hardware as robot makers go all-in at CES
Models branded Nvidia Cosmos and Isaac GR00T are designed to let robots see, reason and act with far less bespoke coding, the AI GPU giant says.

Nvidia is using this year’s CES to argue that artificial intelligence is finally ready to leave the chat window and inhabit the physical world. The chipmaker has unveiled a stack of “physical AI” models, simulation tools and edge-computing hardware, flanked by a parade of humanoids, surgical systems and mining trucks from partners betting that robots are about to go mainstream.
At the heart of the announcement is a family of open models branded Nvidia Cosmos and Isaac GR00T, designed to let robots see, reason and act with far less bespoke coding. Cosmos Transfer 2.5 provides customisable world models for synthetic data generation and policy evaluation in simulation, while Cosmos Predict 2.5 and Cosmos Reason 2 target vision-language and vision‑language‑action tasks, especially for humanoids that must coordinate full‑body movement in messy real-world settings.
Nvidia says partners such as Franka Robotics, NEURA Robotics, Humanoid and Salesforce are already using GR00T-enabled workflows to train and validate new robot behaviours, from industrial manipulation to video search and summarisation.
The push reflects a broader scramble among technology firms and manufacturers to extend generative AI from tokens to torque, fusing large models with sensors and actuators in factories, homes and hospitals. Nvidia’s proposition is that a unified stack—spanning open foundation models, cloud-native orchestration and edge modules built on its Blackwell architecture—can lower the cost of developing “generalist-specialist” robots that learn many tasks, rather than the single‑purpose machines that dominate shop floors today.
With robotics now the fastest-growing category on Hugging Face, and with open frameworks like LeRobot integrating Nvidia’s Isaac and GR00T tools, the company is positioning itself as the default infrastructure provider for an emerging ecosystem of AI-powered machines.
“The ChatGPT moment for robotics is here,” declared Jensen Huang, Nvidia’s founder and chief executive. “Breakthroughs in physical AI — models that understand the real world, reason and plan actions — are unlocking entirely new applications,” he said, pitching Nvidia’s combination of Jetson processors, CUDA software, Omniverse simulators and open models as the catalyst for partners “to transform industries with AI-driven robotics.”
To anchor those ambitions in silicon, Nvidia has launched the Jetson T4000, a module priced at $1,999 in 1,000‑unit volumes that brings its Blackwell GPU architecture to autonomous machines, delivering up to 1,200 FP4 teraflops of AI compute and 64GB of memory in a 70‑watt envelope and promising roughly four times the performance of the previous generation.
Jetson Thor and IGX Thor, aimed at humanoids and industrial edge systems respectively, are being adopted by developers such as Boston Dynamics, NEURA Robotics, LG Electronics, Caterpillar and Archer to boost navigation, manipulation and safety‑critical autonomy.
On the software side, Nvidia is releasing Isaac Lab‑Arena, an open-source framework for large‑scale robot policy evaluation that links into benchmarks such as LIBERO and RoboCasa, and OSMO, a cloud‑native orchestration tool that coordinates synthetic data generation, model training and software‑in‑the‑loop testing across on‑premises and cloud hardware.
The company’s collaboration with Hugging Face will see GR00T N models and Isaac Lab‑Arena folded into the LeRobot library, with humanoids such as Reachy 2 and the tabletop Reachy Mini certified to run Nvidia’s vision‑language‑action and large language models on Jetson Thor and DGX Spark. From LEM Surgical’s Dynamis system, which uses Cosmos Transfer and Jetson AGX Thor for autonomous surgical arms, to Caterpillar’s mining and construction fleets, the message from Las Vegas is that, for Nvidia, the next big AI market is not another chatbot but the robots rolling, walking and flying out into the world.
