
NVIDIA CEO Jensen Huang delivers the keynote address at GTC 2025 at the SAP Center in San Jose, California. Image courtesy of NVIDIA.
March 24, 2025
At NVIDIA GTC in San Jose, CA, the preshow talk onsite featured notable faces, including former Intel CEO Pat Gelsinger and Dell founder and CEO Michael Dell, paying homage to NVIDIA for paving the way in graphics and forging a new path in AI.
The keynote took place at the SAP Center, a sports and concert venue that seats 17,000, a 14-minute drive from the Denny's on Berryessa Road where the idea for the graphics company that would become NVIDIA was first hatched. In the three decades since its founding, the company has evolved from a graphics hardware maker into an AI powerhouse. This year, Huang's keynote focused on the rise of agentic AI.
“It all started with computer vision, or perception AI, then generative AI. For the last five years we focused primarily on generative AI ... generative AI fundamentally changed how computing is done, from a retrieval computing model to a generative model. In the past, it was about creating something in advance to retrieve it on demand. Now, AI understands the context, the request, and if necessary, it gets the information and generates what it knows.”
But Huang believes the industry is ready to move to the next two phases: agentic AI and physical AI. “The ability to understand its surroundings is going to lead to a new era—what we call physical AI, and it's going to enable robotics. Each of these waves opens up new opportunities for us,” he said.
There was plenty of evidence of the future Huang envisioned on the show floor, in the form of semi- and fully autonomous robots.
Closing out the keynote, Huang appeared alongside Blue, a small robot, to announce, “GR00T N1 is now open source.” NVIDIA Isaac GR00T N1 is the company's foundation model for humanoid robots. The outcome of a collaboration among NVIDIA, Google DeepMind, and Disney Research, the model is described as a “fully customizable foundation model for generalized humanoid reasoning and skills.”
Blackwell Ultra on the Horizon
During his keynote, Huang revealed that Blackwell GPUs are in full production. The RTX PRO GPUs built on the Blackwell architecture are designed for professional users. Huang also teased the audience with the next item on the roadmap: Blackwell Ultra GPUs.
“Blackwell Ultra includes the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA HGX B300 NVL16 system. The GB300 NVL72 delivers 1.5x more AI performance than the NVIDIA GB200 NVL72, as well as increases Blackwell’s revenue opportunity by 50x for AI factories, compared with those built with NVIDIA Hopper,” said NVIDIA.
The company suggested Blackwell Ultra GPUs are ideal for agentic AI, which relies on “sophisticated reasoning and iterative planning to autonomously solve complex, multistep problems. AI agent systems go beyond instruction-following. They can reason, plan and take actions to achieve specific goals,” and for physical AI, which enables “companies to generate synthetic, photorealistic videos in real time for the training of applications such as robots and autonomous vehicles at scale.”
The RTX PRO 6000 GPUs, meant for professional users, “will double the memory, from 48GB to 96GB,” said Himanshu Iyer, manufacturing industry manager, NVIDIA.
The desktop and laptop versions of the Blackwell GPUs have their own built-in cooling mechanisms, whereas the HPC- and server-targeted Blackwell 6000 will be “passively cooled,” said Iyer.
DGX Spark and DGX Station
At the show, NVIDIA also introduced new editions of NVIDIA DGX, its personal AI supercomputer line. DGX Spark, formerly known as Project DIGITS, is a small form-factor personal supercomputer for AI workloads. DGX Station is a new high-performance desktop supercomputer, to be powered by the upcoming Blackwell Ultra GPUs.
“DGX Stations are powered by ARM-based CPUs and the upcoming Blackwell Ultra GPUs, with a combined memory of 784GB. So it can accelerate both your CPU-based workflows as well as the GPU-based ones,” explained Iyer. “That will enable creators, developers, and engineers to work with much larger AI models locally.”
Since DGX Stations run on the NVIDIA GB300 superchip, which combines the Grace CPU and the Blackwell Ultra GPU, they also accelerate processing by reducing data transfer between the CPU and GPU over the chip-to-chip interconnect.
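To illustrate the idea of a shared memory space (this is not NVIDIA's or DGX Station's specific programming model, which the article does not cover), the short Python sketch below uses CUDA managed memory via Numba, so host code and a GPU kernel operate on one allocation with no explicit copy in user code; the kernel, array size, and library choice are assumptions for illustration.

import numpy as np
from numba import cuda

@cuda.jit
def scale(arr, factor):
    # Each thread scales one element in place.
    i = cuda.grid(1)
    if i < arr.size:
        arr[i] *= factor

# Managed (unified) memory: the same allocation is visible to the CPU and the GPU.
data = cuda.managed_array(1_000_000, dtype=np.float32)
data[:] = np.random.rand(data.size).astype(np.float32)  # filled on the CPU

threads = 256
blocks = (data.size + threads - 1) // threads
scale[blocks, threads](data, 2.0)  # GPU kernel updates the same allocation
cuda.synchronize()

print(float(data[:5].sum()))  # CPU reads the result without an explicit copy

With explicitly managed device memory, the same workflow would need host-to-device and device-to-host copies around the kernel launch; coherent CPU/GPU memory removes those transfers from the user's code path.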
AI-Powered Copilot
SoftServe, an exhibitor at the show, demonstrated its SoftServe QA Agent, an agentic AI designed to speed up quality assurance (QA) processes. “The SoftServe QA Agent will boost developers’ productivity by automating repetitive code and testing tasks. It was built with a custom reasoning model to transform manual test creation, execution, and validation for dramatically reduced overhead, enhanced inference, and increased coverage,” the company announced.
SoftServe used NVIDIA Omniverse to train its copilot. “NVIDIA Omniverse is the visualization engine behind it,” said Iyer.
Blackwell to Accelerate CAE
During the show, NVIDIA announced that around 18 leading CAE (computer-aided engineering) software vendors, including Ansys, Altair, Cadence, Siemens and Synopsys, are adding GPU-based acceleration using NVIDIA's Blackwell products.
“CUDA-accelerated physical simulation on NVIDIA Blackwell has enhanced real-time digital twins and is reimagining the entire engineering process,” said Huang. “The day is coming when virtually all products will be created and brought to life as a digital twin long before it is realized physically.”
As a demonstration, Cadence used NVIDIA Grace Blackwell-accelerated systems to simulate an entire aircraft's takeoff and landing operations. “Using the Cadence Fidelity CFD solver, Cadence successfully ran multibillion-cell simulations on a single NVIDIA GB200 NVL72 server in under 24 hours, which would have previously required a CPU cluster with hundreds of thousands of cores and several days to complete,” NVIDIA pointed out.
Tim Costa, senior director of CAE, NVIDIA, said, “That means, if you're a CAE software user and you have access to a Blackwell GPU, whether in your local machine or through the cloud, you would be able to reap the benefit of these accelerations.”
On-demand CAE infrastructure provider Rescale also launched CAE Hub, designed to let users acquire and use NVIDIA GPU-accelerated CAE packages. According to NVIDIA, “Boom Supersonic, the company building the world’s fastest airliner, will use the NVIDIA Omniverse Blueprint for real-time digital twins and Blackwell-accelerated CFD solvers on Rescale CAE Hub to design and optimize its new supersonic passenger jet.”
Lenovo Hybrid AI Advantage
Hardware maker Lenovo unveiled its new Lenovo Hybrid AI Advantage product line, featuring NVIDIA GPUs. The new Lenovo lineup will support the latest NVIDIA Blackwell Ultra platform and NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs. According to Lenovo, it is “designed to accelerate AI adoption and boost business productivity by fast-tracking agentic AI that can reason, plan, and take action to reach goals faster. The validated, full-stack AI solutions enable enterprises to quickly build and deploy AI agents for a broad range of high-demand use cases, increasing productivity, agility and trust while accelerating the next wave of AI reasoning for the new era of agentic AI.”
At the conference, Lenovo debuted the Lenovo AI Knowledge Assistant, a virtual human assistant that engaged in real-time conversations with attendees.
Ansys Partnership
During the conference, simulation software maker Ansys and NVIDIA announced plans to advance physical AI and robotics as the next generation of AI technology. Ansys wrote, “PyAnsys is a collection of open-source Python libraries that bridge Ansys tools and the Python scripting language, making it easier to run simulations, modify geometries, and process results automatically. NVIDIA NIM—a set of inference microservices for developers to easily deploy AI models—enables Ansys users to connect with large language models (LLMs), in this case via a chatbot.”
At the show, Ansys demonstrated the framework's potential to offer tailored treatments and outcome predictions for patients with cardiovascular disease. From within the PyAnsys-Heart library, a clinician can ask the chatbot, “What does my patient’s heart look like?” PyAnsys-Heart is expected to generate the code for the patient's heart, enabling a partial or full anatomical simulation model in LS-DYNA and a full visualization in an Omniverse-powered application.
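As a rough sketch of that chatbot-to-code pattern (an illustration, not Ansys' actual implementation): NVIDIA NIM microservices expose an OpenAI-compatible chat endpoint, so a client can send the clinician's question to an LLM that has been instructed to respond with PyAnsys-Heart code. The endpoint URL, model name, and system prompt below are assumptions.

from openai import OpenAI

# Hypothetical locally hosted NIM endpoint; NIM serves an OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-locally")

question = "What does my patient's heart look like?"

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example NIM-served model, not Ansys' choice
    messages=[
        {"role": "system",
         "content": "Respond with PyAnsys-Heart Python code that builds an "
                    "anatomical heart model from the patient's imaging data."},
        {"role": "user", "content": question},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # review before running the generated simulation script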
The two companies also collaborated with carmaker Volvo to accelerate fluid flow simulation. According to the announcement, the collaboration “reduced external aerodynamic simulation run times from 24 hours to 6.5, using just eight NVIDIA Blackwell GPUs.”
Luminary Cloud and nTop
At the show, Luminary Cloud and nTop announced a new integration with NVIDIA PhysicsNeMo to speed up physics-based AI design optimization. The companies stated the new method reduces processing time from weeks or months to mere hours. “By seamlessly connecting nTop's parametric geometry generation, Luminary's GPU-native simulation and simulation management platform, and NVIDIA's PhysicsNeMo via APIs, engineers can now create and analyze hundreds of design variations in a single day—a process that previously took weeks to months of manual effort across disconnected systems,” said Luminary Cloud.
“The use of cloud-native platforms and modern APIs from nTop and Luminary enable the generation of ensembles of simulations and vast amounts of data that are easy to curate, store, and consume for physics AI model training in less than a day,” said Juan J. Alonso, CTO and cofounder of Luminary Cloud. “Without the ability to seamlessly manage the data we rely on, even the most sophisticated companies today are unable to deploy Physics AI models as quickly as required.”
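A simplified sketch of that ensemble workflow appears below. The nTop and Luminary Cloud calls are placeholder functions, since the article does not describe the vendors' actual APIs; the point is the pattern of sweeping geometry parameters, running GPU simulations, and collecting the results as training data for a PhysicsNeMo-style surrogate model.

import csv

def generate_geometry(fin_count: int, fin_thickness_mm: float) -> str:
    """Placeholder for a parametric-geometry call (e.g., via nTop); returns a mesh path."""
    return f"heatsink_{fin_count}_{fin_thickness_mm:.1f}.stl"

def run_simulation(mesh_path: str) -> dict:
    """Placeholder for a GPU-native CFD run (e.g., via Luminary Cloud); returns key outputs."""
    return {"mesh": mesh_path, "pressure_drop_pa": 0.0, "max_temp_c": 0.0}

# Sweep a small grid of design variations; scale up the ranges for a real ensemble.
results = []
for fins in range(10, 40, 2):
    for thickness in (0.8, 1.0, 1.2, 1.5):
        mesh = generate_geometry(fins, thickness)
        results.append(run_simulation(mesh))

# Persist the ensemble as a dataset a physics AI surrogate model could train on.
with open("ensemble_results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(results[0].keys()))
    writer.writeheader()
    writer.writerows(results)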
Luminary Cloud is offering free trials at luminarycloud.com.
About the Author

Kenneth Wong is Digital Engineering’s resident blogger and senior editor. Email him at kennethwong@digitaleng.news or share your thoughts on this article at digitaleng.news/facebook.