Power density – it’s a contentious term in the data center industry and has been for some time. Ostensibly, it is used to denote the capabilities of a data center.
The higher a facility’s power density, the more power it can supply per rack, and the better it can support the needs of clients and end users.
However, the concept has evolved a great deal from what it once was, and it will continue to do so as technology changes to meet user needs. In this article, we look at data center power consumption.
Power Density in the Past
Today, we measure power density in kilowatts per rack (kW/rack), but it wasn’t always that way. Once, and for quite a long period, the industry measured power in watts per square foot (or square meter for those outside the US).
Here’s the deal:
The reason for the shift away from watts per square foot and toward kW per rack is simple – data centers experienced a growing need for higher density and higher levels of redundancy.
That demand is still ongoing. However, there are other causes, too.
For instance, watts per square foot tells us nothing about the number of racks or cabinets available within the facility.
It does not specify what is included in the power calculation, nor does it capture how power varies from peak to average, or how the average changes over time.
Finally, it does not account for data centers that grow over time or have changing growth plans.
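To see why watts per square foot can be ambiguous, here is a minimal sketch using entirely hypothetical figures: two facilities with the same watts-per-square-foot rating can offer very different power per rack.

```python
# Hypothetical comparison: the same W/sq-ft rating can hide
# very different per-rack capacities.

def watts_per_sqft(total_watts, sqft):
    """Facility-wide power density by floor area."""
    return total_watts / sqft

def kw_per_rack(total_watts, racks):
    """Facility-wide power density per rack, in kW."""
    return total_watts / racks / 1000

# Both facilities: 1 MW of IT power over 10,000 sq ft (made-up numbers)
total_w = 1_000_000
area_sqft = 10_000

print(watts_per_sqft(total_w, area_sqft))  # 100.0 W/sq ft for both
print(kw_per_rack(total_w, 200))           # Facility A, 200 racks -> 5.0 kW/rack
print(kw_per_rack(total_w, 50))            # Facility B, 50 racks  -> 20.0 kW/rack
```

The floor-area metric rates both facilities identically, while the per-rack metric immediately shows that Facility B must deliver four times the power to each rack.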
Evolution of Power Density and Data Center Energy Consumption
Let us now look at data center energy consumption in the past.
Data center power density has changed considerably over a relatively short period of time. The first data centers were built during the late 1970s and early 1980s.
I’m not going to lie to you...
In most cases, they were not data centers in the modern sense of the word. They were more akin to in-house IT departments, as they existed in conjunction with other business systems.
That began to change in the 1980s as connectivity improved (and, later, as the tier system emerged to classify facilities).
In the early days, power density wasn’t that big of a deal. There simply was not enough demand on resources to create serious power supply concerns.
As connectivity improved, that changed. To illustrate this point, consider the fact that 2 to 4 kW per rack was once considered high density.
By 2016, that had been turned on its head – 10 to 12 kW per rack was considered high density. Fast forward just a couple of years and you find that things have changed once more.
There are undoubtedly some challenges associated with increasing power density. More on this later.
Today, the average power consumption for a rack is around 7 kW, though this varies by data center.
However, almost two-thirds of data centers in the US experience higher peak demands, with a power density of around 15 or 16 kW per rack. Some data centers may actually hit 20 or more kW per rack.
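As a rough illustration of why the gap between average and peak density matters for capacity planning, here is a sketch using the figures cited above (7 kW average, roughly 16 kW peak per rack); the rack count is hypothetical.

```python
# Rough capacity-planning sketch: a facility provisioned for the
# ~7 kW/rack average can fall far short when racks hit peak demand.

def facility_power_kw(racks, kw_per_rack):
    """Total facility IT load in kW."""
    return racks * kw_per_rack

racks = 100                               # hypothetical facility size
avg_load = facility_power_kw(racks, 7)    # 700 kW at the ~7 kW average
peak_load = facility_power_kw(racks, 16)  # 1600 kW if every rack peaks at 16 kW

print(avg_load, peak_load)
print(round(peak_load / avg_load, 1))     # peak demand ~2.3x the average
```

Even this toy model shows why power and cooling infrastructure sized to the average can be stressed well beyond its design point.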
The bottom line?
This evolution in power supply and demand leads to a number of critical challenges that must be addressed.
Challenges with Increasing Power Density
One of the most critical challenges to address with increasing power density within data centers is cooling. Today, air cooling is still adequate for most data centers, but that will not remain the case for much longer.
Alternative cooling technologies and methodologies are being developed to meet that need. One of those is liquid cooling, which is available today but little used, owing to its cost and to the perception that water and data centers should not mix.
Green approaches are also being explored, including “free cooling” in place of mechanical chillers, adiabatic cooling, and the use of solar and wind power to run cooling systems.
Ultimately, the trend with data center power density is clear – it’s growing, and it’s only going to get higher as more demand appears.