Next-Gen Workstations Meet the Need for Speed
Artificial intelligence, multicore CPUs, GPU acceleration and more continue to drive engineering innovation.
December 17, 2024
For engineers, the past two years have been a bonanza for compute power. New classes of professional workstations offer faster CPUs with higher core counts, powerful graphics processing units (GPUs) and support for faster simulation, high-end graphics and artificial intelligence (AI) capabilities. Software vendors in the space have expanded their use of GPU acceleration across design and simulation applications, and now even computer-aided manufacturing (CAM) software vendors are following suit.
Although more users are taking advantage of cloud-based computing and applications, as well as high-performance computing clusters and other options, engineering workstations still play a central role in design work.
NVIDIA has continued to expand the capabilities of its GPU families for workstations and data centers, while focusing much of its effort on AI. AMD also released more powerful Radeon GPUs, as well as new versions of its Epyc and Threadripper CPUs and Instinct accelerators. Intel, meanwhile, announced its Gaudi 3 AI accelerator for data centers and next-gen Intel Xeon 6 processors, along with an initiative to develop an open enterprise AI platform in conjunction with its software partners.
The AI PC also emerged as a new branding category, with a number of hardware vendors emphasizing their capabilities for supporting AI workflows and Microsoft Copilot+.
We reached out to some industry insiders to discuss the latest engineering workstation and compute trends, and take a look at what might be coming in 2025.
Over the past few years, there have been significant advancements in compute power/capacity for workstations and servers. For our engineering audience, what do you see as the most important developments when it comes to improving engineering/design workflows?
Matt Allard, Director of Strategic Alliances at Dell Technologies: It’s clear from the significant advancements that we have not reached the upper limits of the performance a workstation can achieve. But equally exciting is that workstation-level performance is increasingly possible in small, mobile form factors, which are more energy efficient than ever. The most significant development, in my thinking, is the availability of workstations that deliver solid performance at more attractive price points for all those roles that I think of as “workstation adjacent.” PLMs, manufacturing engineers, field service people and so on…all are increasingly dealing with the complex 3D models our customers are creating every single day and which are making their way into all aspects of the product lifecycle. People in those roles who may be struggling with traditional PCs can get a significant performance lift by operating with a more accessible workstation.
Himanshu Iyer, Principal Product Marketing Manager of Manufacturing at NVIDIA: The manufacturing industry is undergoing a major transformation. The convergence of data analytics, artificial intelligence, robotics, the Internet of Things, computer vision and edge computing is changing the way products are designed, developed, and delivered. Product designers and engineers, who are often part of geographically dispersed teams, are asked to work more collaboratively across disciplines while dealing with shorter product lifecycles and higher product complexity. GPU-powered technologies, such as generative design, real-time engineering simulation, AI/deep learning, augmented reality, virtual reality, photorealistic rendering, and graphics virtualization, are the new tools of the trade. These tools power advanced product design workflows that enable manufacturers to create innovative, highly differentiated products to gain a competitive advantage.
Andy Parma, Product Director, AMD: There’s been a lot of development over the last few years in the workstation market, including the introduction of a new workstation market segment, the Super Single Socket workstation, and the development of AI workstation software from major ISVs [independent software vendors], like Microsoft’s Copilot+.
Super Single Socket workstations have effectively ended the era of two-processor workstations by delivering higher performance in more compact form factors for users who need smaller, quieter and more energy-efficient workstations. The new workstation category delivers leading single-threaded, multithreaded, application and GPU performance.
Additionally, with innovations like AI-enabled software and applications, especially the introduction of Copilot+ in select workstations in 2025 … engineering and design workflows will be further augmented with new capabilities that help users optimize designs, build faster and improve collaboration.
How are software vendors/ISVs in the engineering space taking advantage of these advanced computing capabilities? Where do you see opportunities for engineering software vendors to make further improvements?
Allard: ISVs have a long history of extracting maximum performance from the workstations they recommend for their users. This includes coding some features for more CPU cores (i.e., multithreading) and utilizing the power of discrete GPUs for accelerated visualization, ray-traced rendering and many parallel compute functions. I fully expect ISVs to continue in this vein, but now with an additional focus on the AI-specific circuitry in GPUs and NPUs [neural processing units] to accelerate the rapidly emerging set of AI capabilities being released in their applications. While ISVs may initially introduce AI features in the cloud, they quickly recognize that they can meet customers’ performance, latency and security expectations, while offsetting their own costs of running those services in the cloud, by enabling AI features to run locally on their customers’ workstations.
Iyer: Design and simulation software vendors are beginning to incorporate AI and machine learning into their solutions and are taking advantage of powerful new engineering workstations and servers equipped with professional GPUs that offer a massive upgrade in computing horsepower. Software vendors are implementing AI-based features for tasks like generative design and rendering denoising. Computer-aided engineering software is leveraging GPUs for faster structural, fluid dynamics and other types of simulations. Many computer-aided design and rendering/visualization applications now offer GPU-accelerated real-time ray tracing for photorealistic rendering. Vendors are also leveraging advanced GPUs to enable more immersive AR/VR experiences for design review and collaboration.
Parma: ISVs are increasingly utilizing high-performance computing workstations and AI to deliver new improvements for their customers. One example is Ansys, which is [leveraging] Super Single Socket workstations to accelerate applications and decrease time to insight. AMD and Ansys have collaborated to optimize Ansys CFX, Fluent, HFSS and Mechanical for optimal performance on AMD Ryzen Threadripper PRO processors.
AI will be a big area for ISVs to lean into. While there have been incredible advancements made in the past year, we are still in the early stages of the AI era. In the coming year, it will be critical for ISVs to leverage the performance improvements of Super Single Socket workstations as well as AI-enabled workstations to create new opportunities for additional optimizations and enhancements for their users.
How is the use of AI tools in some engineering workflows affecting compute requirements?
Allard: The initial launch of locally processed AI tools in engineering workflows has largely meant that customers whose workstations have discrete GPUs realized an added benefit of AI acceleration on the hardware they already use. As AI tools proliferate, especially generative AI (Gen AI) tools backed by large language models (LLMs), the demand for AI processing power and GPU VRAM to handle multiple embedded LLMs simultaneously will likely push customers further up the GPU product lines to more capable models with larger memory.
Iyer: The use of AI tools in engineering workflows is leading to several effects on compute requirements, including: deeper integration of AI tools throughout the design process; more extensive use of AI-powered real-time simulation using surrogate models or reduced order modeling for instant design feedback; and physics-based, AI-enabled digital twins to design, simulate, operate and optimize products and production facilities.
To create these realistic digital twins and power AI-enhanced design and simulation workflows, manufacturing organizations need accelerated computing platforms and an enterprise-grade AI software stack.
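To make the surrogate-model idea Iyer mentions more concrete, the minimal sketch below trains a small neural network on samples from a stand-in "simulation" function, then queries it for near-instant predictions. It is a generic illustration of the technique, not any vendor's implementation; the analytic function, sample counts and network size are invented for the example.

```python
# Minimal surrogate-model sketch: fit a small neural network to a handful of
# expensive "simulation" runs, then query it for near-instant design feedback.
# The simulation here is a placeholder analytic function, invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def expensive_simulation(params: np.ndarray) -> np.ndarray:
    # Stand-in for a slow CFD/FEA solve; returns a scalar quantity of interest.
    x, y = params[:, 0], params[:, 1]
    return np.sin(3 * x) * np.exp(-y ** 2) + 0.1 * x * y

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(200, 2))   # sampled design parameters
y_train = expensive_simulation(X_train)           # offline, costly evaluations

# Fit the surrogate once; afterwards predictions are effectively instantaneous.
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X_train, y_train)

new_designs = rng.uniform(-1.0, 1.0, size=(5, 2))
print(surrogate.predict(new_designs))             # instant design feedback
```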
Parma: AI is incredibly compute intensive, and as workstation users adopt more local AI-enabled experiences, their legacy systems may not have the computational resources required, especially when running complex models or large datasets. However, Super Single Socket workstations, with massive core counts and increased performance, enable Llama and other LLMs to run locally on the CPU, helping users avoid the latency of cloud-based AI applications.
Finally, with the introduction of NPUs to the workstation market, AI tasks can be moved off the CPU or GPU and onto the highly optimized NPU, ensuring that even the most demanding AI workloads run smoothly.
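As a hedged illustration of the local-inference point above, the sketch below runs a quantized Llama-family model entirely on the CPU using the open-source llama-cpp-python library. The library choice, model file path and thread count are assumptions for the example, not details drawn from AMD's tooling or this interview.

```python
# Minimal sketch of local CPU inference with a Llama-family model, assuming the
# open-source llama-cpp-python package and a locally downloaded GGUF model file
# (the path below is a placeholder, not an artifact referenced in this article).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,    # context window size
    n_threads=32,  # spread inference across the workstation's CPU cores
)

result = llm(
    "Summarize the key loads acting on a cantilevered bracket.",
    max_tokens=200,
)
print(result["choices"][0]["text"])
```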
What trends do you see being particularly important in 2025 when it comes to engineering workstations, servers, cloud resources or displays?
Allard: Dell anticipates ongoing innovation across all aspects of our PC and workstation designs, especially looking forward to exciting new products from our technology providers in the form of new CPUs and GPUs—not to mention memory and drives.
Iyer: With further integration of AI across the entire design and engineering process, companies will seek powerful workstations and servers capable of tackling the ever-increasing demands of applications and complex datasets. As the emphasis on energy efficiency and sustainable computing grows, accelerated computing solutions that deliver the computational performance, scale and efficiency needed for AI-enhanced workflows will overtake legacy computing architectures that can’t support parallel processing. Complex AI model development and training will require powerful compute infrastructure in the data center and on the desktop with the flexibility to embrace cloud computing for exponential scaling.
Parma: The continuing expansion of AI use in engineering workstations will be a significant driver of innovation in 2025. As new processors, the first PCI Express 5.0 GPUs, and the first wave of Copilot+ workstations are introduced, engineers will have more options than ever to accelerate their workflows.
About the Author
Brian Albright is the editorial director of Digital Engineering. Contact him at de-editors@digitaleng.news.