CES 2025: NVIDIA Reveals New Generative AI Models for Omniverse
Company aims to advance AI applications in robotics, autonomous vehicles, and digital twins
January 20, 2025
At CES 2025 in Las Vegas, NVIDIA CEO Jensen Huang kicked off the conference with a keynote delivered alongside his digital avatar. “In the future, these AI agents are essentially digital workforces that will be working alongside your employees,” he predicted.
If you don’t know how to create an AI-powered agent or where to start, NVIDIA offers some foundational technology pieces: its agentic AI building blocks, delivered as pretrained NIM (NVIDIA Inference Microservices), are a good starting point.
In design and simulation software, such agents could become a new way to deliver tech support or tutorials, as seen in AnsysGPT and other natural language-capable chatbots.
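For orientation, hosted NIM microservices expose an OpenAI-compatible API, so a support-style agent of the kind described above can be sketched with the standard openai Python client. This is a minimal sketch, assuming the hosted endpoint at integrate.api.nvidia.com and an example model name; neither is specified in NVIDIA's announcement, and a self-hosted NIM container would use its own URL.

```python
# Minimal sketch of a tech-support agent backed by a hosted NIM microservice.
# The base URL and model name are illustrative assumptions, not details from
# NVIDIA's CES announcement; replace them with the endpoint you actually use.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM catalog endpoint (assumption)
    api_key=os.environ["NVIDIA_API_KEY"],            # API key from NVIDIA's developer portal (assumption)
)

def ask_support_agent(question: str) -> str:
    """Send a user question to the model with a simulation-support persona."""
    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # example NIM-served model name (assumption)
        messages=[
            {"role": "system", "content": "You are a support assistant for a simulation tool."},
            {"role": "user", "content": question},
        ],
        temperature=0.2,
        max_tokens=300,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_support_agent("How do I set up a mesh refinement study?"))
```

In practice, the same chat-completions call would sit behind a retrieval layer over product documentation, which is the pattern tools like AnsysGPT follow for tech support and tutorials.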
Get Ready for Physical AI
In Huang’s vision, AI’s performance is inseparably linked to its ability to digest and interpret real-world data representing the physical world it interacts with. He said, “The next frontier of AI is physical AI. Model performance is directly related to data availability, but data of the real world is difficult to capture and label. NVIDIA Cosmos is the platform for developing physical AI.”
Cosmos is a platform with state-of-the-art generative world foundation models (WFMs), advanced tokenizers, guardrails, and an accelerated video processing pipeline, NVIDIA revealed.
The company explained, “Cosmos WFMs are purpose-built for physical AI research and development, and can generate physics-based videos from a combination of inputs, like text, image and video, as well as robot sensor or motion data. The models are built for physically based interactions, object permanence, and high-quality generation of simulated industrial environments — like warehouses or factories — and of driving environments, including various road conditions.”
These foundation models are physics-aware, enabling them to mimic how objects behave in the physical world. Physical AI development will likely accelerate digital twin development and adoption in manufacturing, where owners and operators seek to understand the implications of different configurations and layouts of industrial machinery and equipment.
“Physical AI will revolutionize the $50 trillion manufacturing and logistics industries. Everything that moves — from cars and trucks to factories and warehouses — will be robotic and embodied by AI,” said Huang. “NVIDIA’s Omniverse digital twin operating system and Cosmos physical AI serve as the foundational libraries for digitalizing the world’s physical industries.”
Generative AI Models for Omniverse
NVIDIA expects the new generative AI models to speed up world building, labeling the world with physical attributes, and making it photorealistic: tasks that used to be manual, tedious, and time-consuming. This capability could make Omniverse an engine for generating realistic training data on demand.
“NVIDIA Omniverse, paired with new NVIDIA Cosmos world foundation models, creates a synthetic data-multiplication engine — letting developers easily generate massive amounts of controllable, photoreal synthetic data. Developers can compose 3D scenarios in Omniverse and render images or videos as outputs. These can then be used with text prompts to condition Cosmos models to generate countless synthetic virtual environments for physical AI training,” explained NVIDIA.
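At a high level, the workflow reads as: compose a 3D scenario in Omniverse, render seed clips, then condition a Cosmos world foundation model on each clip plus a text prompt to fan out many synthetic variants. The sketch below only illustrates that structure; render_scenario and CosmosWFM are hypothetical placeholders, since the announcement does not detail the actual Omniverse or Cosmos programmatic interfaces.

```python
# High-level sketch of the Omniverse-to-Cosmos data-multiplication workflow
# described above. render_scenario() and CosmosWFM are HYPOTHETICAL stand-ins;
# they are not real Omniverse or Cosmos API calls.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class CosmosWFM:
    """Placeholder for a Cosmos world foundation model checkpoint."""
    checkpoint: str

    def generate(self, seed_clip: Path, prompt: str, out_dir: Path) -> Path:
        # A real implementation would run physics-aware video generation
        # conditioned on the rendered clip and the text prompt.
        raise NotImplementedError("stand-in for Cosmos inference")

def render_scenario(usd_scene: Path, variation: int) -> Path:
    # Placeholder for an Omniverse render job that varies lighting, layout, etc.
    raise NotImplementedError("stand-in for an Omniverse render job")

def multiply_dataset(usd_scene: Path, prompts: list[str], n_variations: int) -> list[Path]:
    """Turn one composed 3D scenario into many conditioned synthetic clips."""
    model = CosmosWFM(checkpoint="cosmos-wfm-checkpoint")  # illustrative name
    outputs: list[Path] = []
    for i in range(n_variations):
        seed_clip = render_scenario(usd_scene, variation=i)
        for prompt in prompts:
            outputs.append(model.generate(seed_clip, prompt, Path("synthetic_out")))
    return outputs
```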
During his CES keynote, Huang also announced four new blueprints, or reference workflows, aimed at the growing digital twins market:
- Mega, powered by Omniverse Sensor RTX APIs, for developing and testing robot fleets at scale;
- Autonomous vehicle simulation, powered by Omniverse Sensor RTX APIs, for autonomous car developers to replay driving data, generate new ground-truth data, and perform closed-loop testing;
- Omniverse spatial streaming to Apple Vision Pro for immersive digital visualization;
- Real-time digital twins for Computer Aided Engineering (CAE), a reference workflow built on NVIDIA CUDA-X acceleration, physics AI, and Omniverse libraries.
According to NVIDIA's announcement, Accenture, Altair, Ansys, Cadence, Foretellix, Microsoft, and Neural Concept are among the early integrators of Omniverse. Siemens, a leader in industrial automation, also announced at CES the availability of the Teamcenter Digital Reality Viewer, the first Siemens Xcelerator application powered by NVIDIA Omniverse libraries.
The AI models are also expected to aid autonomous vehicle development. With Omniverse and Cosmos, NVIDIA’s AI data factory can scale hundreds of drives into billions of effective miles, Huang said. This lets car developers multiply their existing datasets to generate the volume required to train and test their vehicles.
Huang revealed that Toyota will build its next-generation vehicles on the NVIDIA DRIVE AGX software and hardware platform, including the DRIVE AGX Orin SoC (system-on-a-chip), running the safety-certified NVIDIA DriveOS operating system.
A Grace Blackwell on Every Desk
Huang also revealed Project DIGITS, which advances his ambitious goal of putting a personal AI supercomputer on every desk. Huang wants to give researchers, data scientists, and students worldwide access to the power of NVIDIA Grace Blackwell superchips.
“With Project DIGITS, the Grace Blackwell Superchip comes to millions of developers,” said Huang. “Placing an AI supercomputer on the desks of every data scientist, AI researcher and student empowers them to engage and shape the age of AI.”
According to NVIDIA, the GB10 Superchip includes the NVIDIA Blackwell GPU with latest-generation CUDA cores and fifth-generation Tensor Cores, connected via NVLink chip-to-chip interconnect to a high-performance NVIDIA Grace CPU.
(NVIDIA also announced new Blackwell-based GeForce RTX 50 Series Desktop and Laptop GPUs.)
You can read more about NVIDIA at CES 2025 on NVIDIA’s blog.