

Boost CAE Tool Performance with GPU Acceleration

At NAFEMS, attendees learned how GPU compute can enhance CAE tools on the Microsoft Azure cloud platform.


Image courtesy of Microsoft.


At the recent NAFEMS World Congress in Tampa, NVIDIA gave an insider's look at how general-purpose computing on graphics processing units (GPUs) can enhance the performance of familiar industry CAE tools on the Microsoft Azure cloud platform. Ian Pegler, NVIDIA's head of business development for CAE, presented an insightful session titled “Accelerating CAE with NVIDIA GPUs on Microsoft Azure.”

Pegler says the 30,000-foot view of the talk is to “give an overview of the state of play when it comes to using GPUs to accelerate key CAE applications across FEA, CFD, discrete element method (DEM), electromagnetics and optical simulations. GPU acceleration is kind of new to a lot of people—using GPUs instead of CPUs.”

And running GPUs in the cloud for CAE applications has benefits of note from the get-go.

“Running in the cloud gives the flexibility to use GPUs today for those CAE codes that have GPU solving capabilities without the upfront cost of investing in hardware,” says Pegler. “The Azure platform also benefits from using the NVIDIA Quantum InfiniBand Platform, which can improve performance when running larger cases.”

Pegler provided an exploratory look at the benefits of GPGPUs for factors such as turnaround time, power consumption and hardware costs, while also reviewing workload certifications on Azure. 

Regarding certifications, Pegler says that through NVIDIA's ongoing work with the Azure team, Microsoft Azure has certified many of the most widely used CAE tools.

“The certification allows customers to clearly see the expected benefits of GPU acceleration for CAE,” he explains. The certification also includes cost information and details of the configuration that the benchmarks were run on. “Hopefully, all this information makes it easier for customers to get started.”

He addressed the technical details of GPU hardware available on Azure while highlighting some Ansys tools that have been optimized for NVIDIA RTX™ professional graphics cards to deliver exceptional performance, quality, and workflow efficiency. Two of the solvers highlighted in his talk were Ansys Speos for lighting simulation and CPFD’s Barracuda Virtual Reactor. 

Spotlighted Tools 

Though Pegler says any number of CAE tools could have been spotlighted for GPU acceleration capacity in his presentation, NVIDIA chose two with some unique features. 

“There are over 100 different CAE tools that benefit from GPU acceleration, so it was difficult to cover them all,” Pegler says. “We chose Ansys Speos, as running on one NVIDIA RTX™ 6000 Ada Generation card has sped up simulations by 2x to 3x at the same power level compared to the previous generation. We also wanted to show that workstation users can benefit from GPU acceleration as much as those running on large HPC [high-performance computing] systems.”
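The "2x to 3x speedup at the same power level" claim translates directly into energy savings per simulation, because a job that finishes faster at equal wattage draws proportionally less energy. The sketch below works through that arithmetic; the runtime and board-power figures are illustrative placeholders, not published specifications.

```python
# Back-of-the-envelope energy-per-job comparison for a generational
# "2x-3x speedup at the same power" claim. All numbers are hypothetical
# placeholders chosen for illustration, not measured card specs.

prev_gen_runtime_s = 600.0   # assumed runtime on the previous-generation card
power_w = 300.0              # assumed equal board power for both generations

for speedup in (2.0, 3.0):
    new_runtime_s = prev_gen_runtime_s / speedup
    # Energy (Wh) = power (W) x time (h); equal power means energy
    # scales down linearly with runtime.
    prev_energy_wh = power_w * prev_gen_runtime_s / 3600.0
    new_energy_wh = power_w * new_runtime_s / 3600.0
    print(f"{speedup:.0f}x speedup: {new_runtime_s:.0f}s per job, "
          f"{new_energy_wh:.1f} Wh vs {prev_energy_wh:.1f} Wh")
```

The takeaway is that at constant power, a 3x speedup is also a 3x reduction in energy consumed per simulation.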

CPFD Barracuda is also an interesting pick, according to Pegler, as it demonstrates the benefits being experienced by Encina running GPU-accelerated simulations on Azure.

During the talk, Pegler also highlighted the Rescale platform on Azure for benchmarking. 

“To make it easy for customers to experience GPU acceleration we have partnered with Rescale to offer a trial program,” Pegler notes. Customers can sign up for these trials through Rescale.

Attendees at the NAFEMS World Congress in May learned about the value of GPU acceleration for design and simulation. Image courtesy of NAFEMS.

Managing Power Consumption

Running massive CAE simulations can present significant challenges. Power consumption and sustainability will be the biggest issues facing large-scale CAE workloads going forward, Pegler cautions. “There is an increased demand for CAE as customers want to move to a virtual design process and build digital twins at various scales and fidelity levels. This requires more CAE and hence more HPC resources. However, this has become limited by data center capacity (power and space).

“GPU acceleration offers a lot of benefits in terms of speed and using less power to do these computations. That’s a key challenge for a lot of people now—limitations within the data center when it comes to power consumption and also just general sustainability. People want to use less CO2 or they’ve got CO2 targets in their data centers so we want to do more—but do more efficiently,” Pegler explains. 

For customers seeking to reduce their CO2 impact, “using accelerated computing can help address these limitations, allowing customers to still access the computational power needed to make their business competitive in areas such as CAE and AI,” Pegler adds. 

With NVIDIA Grace Hopper™ systems, it is also possible to accelerate more CPU-centric CAE codes such as LS-DYNA for crash simulations, offering tangible power savings, according to Pegler.

Pegler notes that “nearly everybody on the hardware side realizes that the power is going to become a real issue. Everybody is talking about the need to do these HPC workflows—but more efficiently.”

With NVIDIA’s GPU technology capabilities, Pegler adds, “We feel we’re in prime position to make that happen just because the GPU inherently is very power efficient.”

The only limitations right now are just making sure all the key tools support GPU acceleration, according to Pegler, who notes that NVIDIA is already working closely with its independent software vendor partners such as Altair, Ansys, Cadence, Dassault Systemes and Siemens, to address challenges that may crop up. “We have a team that acts to connect the technical experts within NVIDIA with the ISVs to make sure they understand and have the resources they need to move these applications to be GPU accelerated,” he notes. 

As for feedback from “the end users who have actually adopted [NVIDIA’s GPU-accelerated technologies] in their day-to-day work, they’re obviously very happy because they are getting to turn around their simulations significantly faster than they’ve done before,” Pegler notes. “There’s high demand right now for the hardware pretty much worldwide.”

Dual Purpose

On top of advancing CAE applications with GPUs, Pegler brings up the dual workload that NVIDIA is seeing in companies as well. 

“GPUs can obviously be used to accelerate CAE applications, but a lot of companies are investing in GPU hardware for their AI models. It’s the same hardware fundamentally to do both.”

So there are two very practical purposes for GPU use—CAE applications and AI-accelerated applications, Pegler says. And for those in the engineering space, there are also two common questions that rise to the surface initially when considering GPU acceleration: 1) Does it work for my application? 2) And what kind of speedup do I get?

On the CAE application side, Pegler notes how in the last year some of the most commonly used computational fluid dynamics codes have adopted GPU acceleration (Siemens Simcenter STAR-CCM+, Ansys Fluent and Dassault Systemes PowerFLOW). “We also have a number of more application-specific tools that are already taking advantage of GPU acceleration (Altair, M-Star). Given this, I’m expecting CFD to be the most common application customers will pursue for GPU acceleration,” he says.

Looking forward, Pegler says, “AI will certainly continue to push the boundaries of accelerated computing. We’re lucky in the CAE community as we can also leverage these large performance increases to accelerate our workflows.”

Despite all the touted benefits, GPU acceleration may not yet be for everyone. “Right now, not everything is GPU accelerated. We want to work closely with users to understand what their workload is and what parts of it really do benefit from GPU acceleration and what parts don’t. We want to accelerate the things that can be accelerated and guide them in the right direction,” Pegler explains.

Another factor to consider in the GPU decision-making process is cost. 

“It’s important for us with our customers to make a business case with their managers about why GPUs do offer good value for the money,” Pegler says. “With GPU hardware—if you compare one CPU against a GPU, the GPU is obviously more expensive. But if you actually compare the number of CPUs you need to get the same performance, the GPUs offer significantly better value for the money. It’s important to understand that business case for sure. As mentioned in the GTC 2023 keynote by our CEO Jensen Huang, GPU acceleration is transforming high-fidelity CFD by providing 9X the throughput at the same cost as CPUs. On the flip side, for the same throughput as CPUs, GPU acceleration comes at a 9X lower cost and consumes 17X less energy for high-fidelity CFD.”
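The business case Pegler describes hinges on comparing cost per unit of throughput, not the price of a single device. The sketch below applies the rough 9X-cost and 17X-energy ratios quoted from the keynote to a hypothetical CPU baseline; every figure here is an illustrative assumption, not benchmark data.

```python
# Illustrative cost/energy comparison of a CPU cluster vs. GPU acceleration
# for high-fidelity CFD, using the rough ratios quoted from the GTC 2023
# keynote (9X throughput at equal cost; 9X lower cost and 17X less energy
# at equal throughput). Baseline figures are hypothetical placeholders.

def jobs_per_dollar(jobs_per_hour: float, cost_per_hour: float) -> float:
    """Throughput delivered per unit of spend."""
    return jobs_per_hour / cost_per_hour

# Hypothetical CPU baseline: 1 job/hour at $100/hour, 50 kWh per job.
cpu_jobs_per_hour = 1.0
cpu_cost_per_hour = 100.0
cpu_energy_kwh_per_job = 50.0

# Scenario A (equal spend): ~9x the throughput at the same hourly cost.
gpu_a_jobs_per_hour = 9.0 * cpu_jobs_per_hour
gpu_a_cost_per_hour = cpu_cost_per_hour

# Scenario B (equal throughput): ~9x lower cost and ~17x less energy.
gpu_b_cost_per_hour = cpu_cost_per_hour / 9.0
gpu_b_energy_kwh_per_job = cpu_energy_kwh_per_job / 17.0

print(f"CPU baseline: {jobs_per_dollar(cpu_jobs_per_hour, cpu_cost_per_hour):.3f} jobs/$")
print(f"GPU (equal spend): {jobs_per_dollar(gpu_a_jobs_per_hour, gpu_a_cost_per_hour):.3f} jobs/$")
print(f"GPU (equal throughput): ${gpu_b_cost_per_hour:.2f}/hour, "
      f"{gpu_b_energy_kwh_per_job:.2f} kWh/job vs {cpu_energy_kwh_per_job} kWh/job")
```

Framed this way, the per-device price premium of a GPU disappears behind the cluster-level metric that actually matters for a procurement decision: simulations delivered per dollar and per kilowatt-hour.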

Other CAE Tools Enhanced by GPU

Two big CFD tools, Ansys Fluent and Siemens Simcenter STAR-CCM+, added GPU solvers to their release code in the last year or so. “We’ve seen some great speedups with some of those tools, up to 30x,” Pegler notes. DS SIMULIA PowerFLOW also has a single-GPU solver for its tool.

NVIDIA also works closely with M-Star and its CFD tool, which runs natively on GPUs and has reported back some “really good performance” with the help of GPUs. Altair also has several tools running on GPUs; its CFD tools (ultraFluidX, nanoFluidX and EDEM) have GPU-native solvers as well.

On the structural side, one of note, according to Pegler, is DS SIMULIA’s Abaqus/Standard, which can run on GPU. “Certain applications work better than others on the structural side. For the ones that do, they’ve seen some nice speedups as well,” Pegler says.

“It’s our hope that people will walk away and realize that GPU acceleration is available now in a lot of the key industry-leading CAE tools. It offers significant speedups and also power savings. It can be a more efficient way for companies to use HPC resources,” Pegler says.
