Research has found that data centres can reduce their energy usage by up to 30% simply by altering around 30 lines of code in the Linux kernel’s network stack. Scientists from the University of Waterloo in Canada identified inefficiencies in the way servers process incoming network traffic.
The breakthrough comes from interrupt request suspension, a technique that improves CPU power efficiency by reducing unnecessary interrupts during high-traffic conditions. Typically, when a new data packet arrives at a server's network interface, it triggers an interrupt request (IRQ), forcing a CPU core to pause its current task to process the packet, which slows things down.
The new code reduces interrupt requests by letting the system actively poll the network interface for new packets during busy periods instead of fielding an interrupt for each one. However, because continuous polling is power-intensive when there is little to process, the system reverts to interrupt-driven handling when traffic slows.
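The switch between the two regimes can be pictured as a small state machine: keep polling while polls find packets, and re-enable the interrupt once polling comes up empty repeatedly. The sketch below is a minimal, illustrative simulation, not the kernel patch itself; the function name, the empty-poll threshold, and the mode labels are assumptions chosen for clarity (the real mechanism in the kernel works on a timeout rather than a fixed poll count).

```python
# Illustrative simulation of adaptive IRQ suspension: while polls keep
# finding packets, interrupts stay suspended; after several consecutive
# empty polls, the interrupt is re-enabled and the CPU stops spinning.

EMPTY_POLL_LIMIT = 3  # assumed threshold; the kernel uses a timeout instead

def process_traffic(poll_results):
    """poll_results: number of packets found by each successive poll.
    Returns the mode used at each step: 'poll' (IRQs suspended) or 'irq'."""
    modes = []
    empty_polls = 0
    irq_enabled = False  # start in busy-polling mode (IRQs suspended)
    for packets in poll_results:
        if irq_enabled:
            # Low-traffic regime: sleep until an interrupt signals new data.
            modes.append("irq")
            if packets > 0:
                # Traffic is back: suspend IRQs and resume polling.
                irq_enabled = False
                empty_polls = 0
        else:
            # High-traffic regime: poll without interrupting the CPU.
            modes.append("poll")
            if packets == 0:
                empty_polls += 1
                if empty_polls >= EMPTY_POLL_LIMIT:
                    irq_enabled = True  # traffic slowed; fall back to IRQs
            else:
                empty_polls = 0
    return modes
```

Run against a burst of packets followed by an idle stretch, the simulation stays in polling mode for the burst, absorbs short gaps without switching, and only falls back to interrupts after three consecutive empty polls, which is the shape of behaviour the kernel change aims for.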
By refining how the kernel handles IRQs, the change improves data throughput by up to 45% while keeping tail latency low. In other words, the system can handle more traffic without delaying its most time-sensitive operations. The modification has been merged into Linux kernel version 6.13.
“We didn’t add anything,” said Cheriton School of Computer Science Professor Martin Karsten in a press release. “We just rearranged what is done when, which leads to a much better usage of the data centre’s CPU caches. It’s kind of like rearranging the pipeline at a manufacturing plant, so that you don’t have people running around all the time.”
Data centres will be responsible for up to 4% of global power demand by 2030, driven at least in part by AI. Training OpenAI's GPT-4, reportedly a 1.76-trillion-parameter model, consumed energy equivalent to the annual power usage of 5,000 U.S. households. That figure doesn't even include the electricity required for inference, the process by which a trained model generates outputs from new data.
SEE: Sending One Email With ChatGPT is the Equivalent of Consuming One Bottle of Water
Data centre operators arguably have a responsibility to reduce their carbon footprint, yet it does not appear to be a priority. A report from the Uptime Institute found that fewer than half of data centre owners and operators even track key sustainability metrics such as renewable energy consumption and water usage.
Individual businesses don’t appear motivated to push back on their data centres’ energy-intensive practices, either. In fact, recent research found that nearly half of businesses are relaxing sustainability goals to accommodate their AI expansions.
Tech giants have also faced scrutiny. In July, Google came under fire after its annual environmental report revealed that its emissions had risen by 48% in four years, largely due to the expansion of its data centres to support AI development.
Aoife Foley, senior member of the Institute of Electrical and Electronics Engineers and engineering professor at Queen’s University Belfast, told TechRepublic in an email: “Modern enterprises continuously generate and accumulate vast amounts of data. This includes routine activities across enterprise systems, machines, sensors, and demand-side digitalisation.
“All of this data comes in multiple forms – whether redundant or critical. However, the majority is unstructured and inert content, commonly referred to as ‘dark data’, which is becoming more prevalent. The result is a large volume of digital data that needs to be stored, most of which will never even be accessed later.
“Those managing data centres and server rooms must strive for a high standard of energy efficiency, demonstrated through aggressive power use effectiveness targets. Achieving sustainability means addressing environmental considerations during solution design as well as during the build. Solutions must meet pre-defined and agreed environmental sustainability criteria. This includes filtering dark data, removing unnecessary information from storage and relying upon ‘greener’ energy sources.”