Energy-Efficient Computing: Pipe Dream or Possibility?

There are no simple answers in defining energy efficiency for workstations and servers.

PSSC Labs says every HPC cluster it delivers is a tailor-made system for the customer. Image courtesy of PSSC Labs.


It’s no secret that engineering workstations and high-performance computing (HPC) clusters use a lot of electricity. But do workstation and HPC vendors offer increased energy efficiency? Are large companies with many workstations and servers actively seeking increased energy efficiency? The answer is not a simple yes or no. 

Alex Lesser of PSSC Labs, a boutique HPC manufacturer, says positioning energy efficiency as the most important goal misses the point. 

“To say energy efficiency is a driving factor [in purchases] is a big mistake. People want to get to the answers efficiently,” Lesser says. 

Like other workstation and HPC vendors, PSSC Labs starts the conversation with a customer and addresses the end goal. 

“As we frame the conversation with a client, energy efficiency plays a role,” says Lesser. “But honestly, we live in a world of massive energy consumption. People want to get to the answers efficiently.” 

Lesser says the companies that make all the parts of a workstation or server—the power supplies, the motherboards and the computation devices—do incrementally increase their internal performance per watt. But the focus is on computational increases. 

“HPC is getting more efficient overall,” he says. “With the advent of graphics processing units (GPUs), there is more [computation] going on in a box. Now we can ship 10 nodes instead of 100. Each of those 10 [nodes] uses a lot of energy, but much less than the 100 older nodes, purely because of the economies of scale.” 

“Energy use usually comes into a conversation at the end,” Lesser says. Many customers have a set limit of how much electricity can be used in the specific location where the new cluster will go. “Energy footprint varies from customer to customer, but it is not the driver of the [sales] conversation,” he adds. 

More Performance per Watt

“The sooner you get the job done, the sooner the computer goes into a sleep state, the more energy efficient it is,” says Randy Copeland, president of Velocity Micro. The constant improvement in individual components means workstations and servers are “getting more performance at the same power envelopes. More performance for the same watts,” he adds.

Copeland says Velocity Micro does have customers asking about energy efficiency as their top priority. 

“Some customers want a low-power solution. If they don’t render or have other high-end uses, they will take lower power,” he says. Velocity Micro offers the ProMagix ECO 20, a workstation designed to maximize performance per watt, based on an AMD Ryzen-class CPU. It is not in Velocity Micro’s top 10 sales leaders, Copeland adds. 

The 2020 release of Ryzen CPUs and Radeon Pro GPUs allowed AMD to achieve 31.7% increased energy efficiency compared to similar products from 2017. Image courtesy of AMD.

Customers should focus on benchmarks to compare products, Copeland says. 

“Workstations and servers are so much faster, but power supplies are not getting bigger,” Copeland says. Power consumption tweaks in operating systems and BIOS operations contribute to the increased performance. 

“They are all better at power management,” he says. “They go to sleep faster and go to a deeper sleep state. Also, they are better at lowering power between keystrokes. Newer units are also better at turning off the power between tasks. Even when measured in milliseconds these tweaks benefit in power usage and the thermal envelope.” 
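
Claims about power management and performance per watt can be checked directly. As a rough illustration (not a tool Copeland or Velocity Micro provides), the sketch below reads the Linux RAPL powercap counter that many Intel CPUs expose and converts it to average CPU package power; run it once at idle and once under load to see how quickly a machine settles back down. The sysfs path, sampling window and need for root access are assumptions about the target system.

```python
# Minimal sketch: estimate CPU package power on Linux via the RAPL powercap
# interface. Assumes an Intel CPU exposing /sys/class/powercap/intel-rapl:0
# and usually requires root to read the counter.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # cumulative energy, microjoules

def read_energy_uj() -> int:
    with open(ENERGY_FILE) as f:
        return int(f.read().strip())

def average_power_watts(seconds: float = 5.0) -> float:
    """Sample the energy counter twice and convert the delta to average watts.
    (The counter eventually wraps; a long-running tool would handle that.)"""
    start = read_energy_uj()
    time.sleep(seconds)
    end = read_energy_uj()
    return (end - start) / 1e6 / seconds  # microjoules -> joules -> watts

if __name__ == "__main__":
    print(f"Average package power: {average_power_watts():.1f} W")
```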

Intel Eats Its Own Dog Food 

There is an old phrase in the computer industry: “eating your own dog food.” It means using the software or hardware you create for your own internal operations, and it is intended as a compliment. 

As the leading vendor of CPUs and other computer components in 2022, Intel has a big stake in how the industry addresses electrical power issues, and it uses its own technology for its internal IT. 

“Intel’s business needs will continue to increase, and we will likely continue to see double-digit growth in the demand for compute and storage,” notes Intel in a white paper on green computing. “And yet, the global challenges facing the IT industry, such as climate change, e-waste and responsible water usage, continue to mount as well.” 

Intel sees “green computing” as multifaceted. “Green computing is more than just pursuing the lowest possible power usage effectiveness; it encompasses a broad range of initiatives and concepts.”

Intel aims to improve server and data center electrical use with what it calls disaggregated servers, a design that breaks a server’s components and resources into subsystems. For example, a server can be split into modules for compute, I/O, power and storage, which nearby servers share. 

“Disaggregation of software from hardware is more common,” Intel states, but the company believes hardware disaggregation will prove important, especially as large enterprises start using fog servers, or servers placed adjacent to the source of data instead of in a central location. 
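
To make the disaggregation idea concrete, here is a toy sketch (not Intel code and not based on any Intel API) that models compute, storage and power as shared pools that nearby nodes draw from, rather than as fixed resources locked inside one box. All class names and numbers are invented for illustration.

```python
# Toy model of a disaggregated rack: compute, storage and power live in shared
# pools that any nearby node can draw from, instead of being fixed per server.
from dataclasses import dataclass

@dataclass
class ResourcePool:
    name: str
    capacity: float
    allocated: float = 0.0

    def headroom(self, amount: float) -> bool:
        return self.allocated + amount <= self.capacity

    def allocate(self, amount: float) -> None:
        self.allocated += amount

@dataclass
class Workload:
    name: str
    cores: float
    storage_tb: float
    watts: float

def place(job: Workload, compute: ResourcePool, storage: ResourcePool, power: ResourcePool) -> bool:
    """Admit a workload only if every shared subsystem can cover its share."""
    demands = [(compute, job.cores), (storage, job.storage_tb), (power, job.watts)]
    if all(pool.headroom(amount) for pool, amount in demands):
        for pool, amount in demands:
            pool.allocate(amount)
        return True
    return False

if __name__ == "__main__":
    compute = ResourcePool("compute", capacity=256)   # cores available in the rack
    storage = ResourcePool("storage", capacity=100)   # TB available in the rack
    power = ResourcePool("power", capacity=5_000)     # watts budgeted for the rack
    job = Workload("cfd-solver", cores=64, storage_tb=10, watts=1_200)
    print("placed" if place(job, compute, storage, power) else "rejected")
```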

“By adopting disruptive technologies like disaggregated servers and constantly thinking outside the box about how we operate our data centers, Intel IT is proving that the concepts of operationally efficient (TCO) and environmentally friendly are often more closely linked than many people may suspect,” the Intel white paper further notes.

Feds Ponder Energy Efficiency 

The U.S. Department of Energy (DOE) publishes a computer purchasing guide for federal agencies, which are required by law to buy energy-efficient PCs. Guidance is based on Energy Star ratings from the U.S. Environmental Protection Agency. The guidance says an Energy Star-qualified enterprise computer is cost-effective if it is priced no more than $43 above the least efficient equivalent.

In explaining cost-effectiveness as a way to guide purchasing, the agency notes, “an efficient product is cost-effective when the lifetime energy savings ... exceed the additional up-front cost (if any) compared to a less efficient option.” 
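
That rule of thumb is easy to turn into arithmetic. The short calculation below runs the comparison against the $43 premium cited in the guide; the power savings, annual hours of use, electricity price and service life are illustrative assumptions, not DOE figures.

```python
# Toy version of the DOE cost-effectiveness test: an efficient PC is worth its
# price premium only if its lifetime energy savings exceed that premium.
# All inputs below are illustrative assumptions, not DOE figures.

POWER_SAVED_WATTS = 50        # assumed average draw reduction vs. the less efficient model
HOURS_PER_YEAR = 2_000        # assumed annual on-time
LIFETIME_YEARS = 4            # assumed service life
PRICE_PER_KWH = 0.13          # assumed electricity rate, $/kWh
PRICE_PREMIUM = 43.0          # up-front cost difference, $ (the guide's threshold)

lifetime_kwh_saved = POWER_SAVED_WATTS / 1_000 * HOURS_PER_YEAR * LIFETIME_YEARS
lifetime_savings = lifetime_kwh_saved * PRICE_PER_KWH

print(f"Lifetime energy savings: ${lifetime_savings:.2f}")
print("Cost-effective" if lifetime_savings >= PRICE_PREMIUM else "Not cost-effective")
```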

Government agencies that need high-performance computing from workstations or HPC clusters are allowed to apply for exceptions from the energy regulations. The agency must show there are no “Energy Star-qualified products available to meet functional requirements, or that no such product is life cycle cost-effective for the specific application.” 

To improve energy efficiency with exempted computers, the DOE suggests tips and tricks such as using power management features that are either built in or available as separate software; manually shutting off computers at night; running mobile computers in battery mode and plugging them in only to recharge; and taking advantage of “hibernate” features, which save active programs and files before shutting down. 
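
Several of those tips can be scripted. As a hedged example for a Windows workstation, the snippet below enables hibernation and registers a nightly shutdown task; powercfg and schtasks are standard Windows utilities, but the task name, shutdown time and use of Python as the wrapper are arbitrary choices for this sketch.

```python
# Hedged example: enable hibernate and schedule a nightly shutdown on Windows.
# Run from an elevated (administrator) prompt; task name and time are arbitrary.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Turn on the hibernate feature (saves open programs/files before powering off).
    run(["powercfg", "/hibernate", "on"])
    # Shut the machine down every night at 22:00 via Task Scheduler.
    run(["schtasks", "/Create", "/F",
         "/TN", "NightlyShutdown",
         "/SC", "DAILY", "/ST", "22:00",
         "/TR", "shutdown /s /f /t 60"])
```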

Update Your Workflow 

Velocity Micro’s Copeland has noticed many of his engineering customers use a workflow where rendering is sent to a server, instead of the engineer creating it on the local workstation. The problem, Copeland says, is that the engineer often has a workstation that is years newer than the server, and could crank out the rendering much faster, saving time and energy. 

“Using a fast workstation can dramatically reduce server workloads by performing renders locally, and often dramatically quicker,” Copeland explains. “Legacy servers are often several years old, and much less efficient than current workstations.” By using the faster local workstation to crank out the rendering, the company is “reducing the load on older equipment and letting servers sleep more,” he adds. 
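
A back-of-the-envelope comparison shows why this matters. The figures below are illustrative assumptions, not measurements from Copeland: an older server that grinds through a render slowly can consume several times the energy of a newer workstation that finishes quickly and goes back to sleep.

```python
# Illustrative render energy comparison: energy (Wh) = average power (W) x time (h).
# All numbers are assumptions made for the sake of the example.

def energy_wh(avg_watts: float, minutes: float) -> float:
    return avg_watts * minutes / 60

legacy_server = energy_wh(avg_watts=450, minutes=60)    # older server, slower render
new_workstation = energy_wh(avg_watts=350, minutes=15)  # newer workstation, faster render

print(f"Legacy server:   {legacy_server:.0f} Wh")
print(f"New workstation: {new_workstation:.0f} Wh")
print(f"Ratio: {legacy_server / new_workstation:.1f}x more energy on the old server")
```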

Doubling Production per Component

HPC and supercomputer maker Supermicro sees the journey to energy-efficient computing as a balancing act between overall efficiency and decreased direct electricity use. Component power draw in its systems went from 155W two years ago to 350W in mid-2021, according to Erik Grundstrom, director of business development at Supermicro. 

“We have doubled production per component,” Grundstrom adds. “DIMMs [dual in-line memory modules] are going to 4,800 MHz. All components consume more power. Time to complete is one thing, but overall computing is through the roof. Computers are the gas guzzlers of today. But [the] capabilities are massive.” 

Supermicro sees building energy-efficient HPC clusters as a balancing act between overall efficiency and decreased direct use of electricity. Image courtesy of Supermicro.

Supermicro sees some customers improving overall performance by consolidating multiple workstations into a single server. These virtual workstations are “very efficient,” Grundstrom says. “The AC-to-DC conversion [required by all computers] is losing less power as time goes on. Yet, each component is using much more power.” 
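
The AC-to-DC point can be quantified with the common 80 Plus power-supply efficiency tiers. The sketch below compares how many watts are lost as heat for roughly Bronze-class versus Titanium-class supplies; the efficiency figures are approximate published 80 Plus values at 50% load, and the system load itself is an assumption.

```python
# Approximate AC-to-DC conversion loss for two power-supply efficiency tiers.
# Efficiencies are rough 80 Plus figures at 50% load; the DC load is assumed.

DC_LOAD_WATTS = 500          # assumed steady DC draw of the system
EFFICIENCY = {"80 Plus Bronze": 0.85, "80 Plus Titanium": 0.94}

for tier, eff in EFFICIENCY.items():
    ac_draw = DC_LOAD_WATTS / eff          # watts pulled from the wall
    loss = ac_draw - DC_LOAD_WATTS         # watts dissipated in conversion
    print(f"{tier}: {ac_draw:.0f} W from the wall, {loss:.0f} W lost as heat")
```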

Workflow modernization through virtual workstation use is one way to lower power consumption. Grundstrom recommends engineering teams take a close look at NVIDIA Omniverse, which is now available as an open beta. NVIDIA describes Omniverse as “an easily extensible, open platform built for virtual collaboration and real-time physically accurate simulation.” 

Using Omniverse will mean “No more import/export nonsense,” says Grundstrom. “CAD or CAE or Maya or other media workflows [allow everyone to] work on the same project in real time. It is a much more efficient approach, using multiple users to create on the fly in real time.”

A Word About CPUs

No discussion of computing efficiency is complete without mentioning the CPU. The latest generation of workstation-class CPUs from Intel is Alder Lake. 

Intel says Alder Lake has redefined x86 technology with a new hybrid architecture. The 16 cores of the top-end part are divided into two categories: Performance (P-Cores) and Efficiency (E-Cores). P-Cores consume more energy and support hyperthreading. In boost mode, the top-end Alder Lake can hit 241W, less than a previous-generation Core i9 (250W). Intel says the new design efficiency means an Alder Lake can deliver the same peak multithreaded performance as a previous-generation i9-11900K while consuming only 65W. Windows 11 includes scheduler support, working with Intel’s Thread Director, for assigning tasks to P-Cores and E-Cores. 
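
On Linux, the two core types of a hybrid Intel CPU show up as separate performance-monitoring devices, which makes it easy to see how the logical CPUs split between P-Cores and E-Cores. The sketch below assumes a recent kernel running on an Alder Lake-class part that exposes the cpu_core and cpu_atom device paths; it only reports the split and does not attempt any scheduling, which is the job of the OS and Thread Director.

```python
# Minimal sketch: list which logical CPUs are P-Cores vs. E-Cores on a hybrid
# Intel CPU under Linux. Assumes the kernel exposes the cpu_core / cpu_atom
# PMU devices (present on Alder Lake-class parts with recent kernels).
from pathlib import Path

PMU_PATHS = {
    "P-Cores": Path("/sys/devices/cpu_core/cpus"),
    "E-Cores": Path("/sys/devices/cpu_atom/cpus"),
}

for label, path in PMU_PATHS.items():
    if path.exists():
        print(f"{label}: logical CPUs {path.read_text().strip()}")
    else:
        print(f"{label}: not reported (non-hybrid CPU or older kernel)")
```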

AMD raised eyebrows and expectations when it launched the Threadripper line of CPUs in 2017. It offered core counts never before seen in a single-socket desktop CPU, made possible by AMD’s multi-die Zen design; later generations moved to a 7nm process to push core counts higher still. 

The current Threadripper Pro was reportedly the first CPU to market supporting the use of PCIe Gen 4, the internal architecture standard for moving data through the motherboard. Data moves through lanes; earlier Threadripper CPUs offered up to 64 lanes, while Threadripper Pro offers 128, and each PCIe Gen 4 lane carries data twice as fast as a Gen 3 lane. AMD claims using PCIe Gen 4 offers “twice the I/O performance over PCIe Gen 3.” 

Both Threadripper Pro and Intel’s Alder Lake CPUs support PCIe Gen 4; Alder Lake also supports PCIe Gen 5, but currently no computer vendors are building systems based on this standard.
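
The bandwidth difference behind AMD’s “twice the I/O performance” claim is straightforward to work out: PCIe Gen 3 runs each lane at 8 GT/s and Gen 4 at 16 GT/s, both with 128b/130b encoding. The short calculation below shows the resulting throughput per lane and for a x16 graphics slot; the x16 slot is just a common example, not a figure from the article.

```python
# Approximate usable PCIe bandwidth: transfer rate x encoding efficiency x lanes.
# Gen 3 and Gen 4 both use 128b/130b encoding; a x16 slot is used as the example.

ENCODING = 128 / 130                              # usable fraction after line encoding
GENS = {"PCIe Gen 3": 8e9, "PCIe Gen 4": 16e9}    # transfers per second, per lane
LANES = 16

for gen, transfers_per_sec in GENS.items():
    per_lane_gbs = transfers_per_sec * ENCODING / 8 / 1e9   # bits -> bytes -> GB/s
    print(f"{gen}: {per_lane_gbs:.2f} GB/s per lane, {per_lane_gbs * LANES:.1f} GB/s at x{LANES}")
```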


About the Author

Randall Newton

Randall S. Newton is principal analyst at Consilia Vektor, covering engineering technology. He has been part of the computer graphics industry in a variety of roles since 1985.
