Data Centre Cooling Conundrums

Saving energy and reducing carbon are major issues in the data centre sector, with increasing focus on lowering PUE (power usage effectiveness). We asked Alan Beresford, managing and technical director of EcoCooling, to look at the techniques available to achieve these goals and at the latest developments.

Conventionally, data centres have favoured DX (direct expansion) refrigeration-based cooling systems using CRAC (computer room air conditioning) units. These have a relatively low capital cost but can be inefficient.

More advanced systems incorporate a principle called free cooling, together with more efficient fans and controls, to reduce operating costs. Modular systems allow data centres to match their cooling equipment to their loads.

Free cooling is commonplace in the design of new data centres in the UK. Key savings can occur on an average of 255 days out of 365 - whenever the ambient air entering the outside condenser is under 14C.

But there is a conundrum: How much can we rely on fresh air for the cooling of our data centres - and how does its use affect their ASHRAE compliance?

A number of issues prevent almost all conventional systems from realising their maximum efficiency.

The DX conundrum

DX (direct expansion) cooling is a refrigeration technique used in many data centres. The most efficient application of DX cooling incorporates ‘free cooling’, which allows the system to switch off the power-hungry compressor and use a simple, efficient, air-cooled circuit.

In theory this should provide a reasonably good coefficient of performance (CoP) of 3.

So the sums should look very reasonable:

IT Load 1.00

UPS Losses (5%) 0.05

Cooling (@ 3:1 CoP) 0.33

Other (lights etc) 0.12

Design PUE 1.50
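As a quick sanity check, the design figure is just the sum of those overheads expressed as fractions of the IT load. A minimal Python sketch of the arithmetic above (the component values are the illustrative ones in the list, not measurements):

# Illustrative design-PUE arithmetic using the figures above.
it_load = 1.00          # IT load, normalised to 1
ups_losses = 0.05       # 5% UPS losses
cooling = it_load / 3   # cooling overhead at a CoP of 3 (roughly 0.33)
other = 0.12            # lights, small power, etc.

design_pue = (it_load + ups_losses + cooling + other) / it_load
print(round(design_pue, 2))   # -> 1.5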

However, most of these data centres are operating with measured PUEs of 1.7 to 3.3. Let’s investigate why.

Last year in London, the ambient air was under 14C for approximately 70 per cent of the time. This is when free cooling should be used.

However, because condenser units are usually placed in groups on rooftops or in outside plant areas, the air reaching each inlet is a mixture of ambient air and hot air discharged by the adjacent condensers.

A relatively small amount of warm air recirculation can make a big difference: a 5C rise at the condenser inlet would halve the time for which free cooling can be used. This is the biggest reason why specified performances are not achieved, but there are more:
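The effect of recirculation is easy to model if you have hourly ambient temperature data. A hypothetical sketch in Python (the 14C threshold comes from the text; the temperature readings and the 5C recirculation penalty are assumptions for illustration):

# Estimate free-cooling availability from hourly ambient temperatures (degrees C).
def free_cooling_fraction(hourly_temps, threshold_c=14.0, recirculation_c=0.0):
    # recirculation_c models warm condenser air re-entering the inlet
    usable = [t for t in hourly_temps if t + recirculation_c < threshold_c]
    return len(usable) / len(hourly_temps)

# Made-up readings: compare clean inlet air against 5C of recirculation.
temps = [2, 5, 8, 10, 12, 13, 15, 18, 21, 11, 9, 6]
print(free_cooling_fraction(temps))                     # no recirculation
print(free_cooling_fraction(temps, recirculation_c=5))  # 5C of warm air recirculation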

Humidity control

Many data centres use humidity control based on out-of-date standards. But humidification is very energy-intensive! It takes 3kW just to produce 1kg/hr of water vapour.

Changes in ASHRAE guidelines (20-80% relative humidity) have allowed many operators to discount humidification as the costs outweigh the benefits.
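To put that 3kW-per-kg/hr figure in context, a rough annual cost estimate is simple arithmetic. A hypothetical sketch (the humidification rate, running hours and tariff are assumptions, not figures from the article):

# Rough annual cost of humidification.
KW_PER_KG_HR = 3.0        # roughly 3kW to generate 1kg/hr of water vapour (from the text)

humidification_rate = 10  # kg/hr of vapour - assumed for illustration
hours_per_year = 2000     # assumed hours the humidifier actually runs
tariff = 0.15             # GBP per kWh - assumed electricity price

annual_kwh = KW_PER_KG_HR * humidification_rate * hours_per_year
print(annual_kwh, "kWh =", round(annual_kwh * tariff), "GBP per year")
# 3 x 10 x 2000 = 60,000 kWh, i.e. about 9,000 GBP per year at 15p/kWh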

See if you can now turn your humidity controllers off. Permanently.

Too much equipment

Data centres generally have far too much cooling equipment operating.

Cooling is usually installed based on the maximum heat output of the IT equipment, with N+1 or 2N cooling units for redundancy.

Add these over-provisions to the fact that the data centre is rarely full and that most IT equipment operates at less than 50-60 per cent of its maximum heat output. Then consider that refrigeration equipment achieves its maximum efficiency only at full utilisation; anything less greatly reduces the efficiency of the whole system.
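Seen per unit, the over-provisioning compounds quickly. A simplified sketch of the arithmetic (the load figures and redundancy policy are assumptions chosen to illustrate the point, not manufacturer data):

# How lightly loaded each cooling unit ends up under typical over-provisioning.
design_it_load_kw = 300      # cooling sized for maximum IT heat output
actual_it_load_kw = 150      # hall part-filled, servers at ~50% of max heat output
units_required = 3           # units needed to meet the design load
redundancy = 1               # N+1 redundancy

units_installed = units_required + redundancy
load_per_unit = actual_it_load_kw / units_installed
capacity_per_unit = design_it_load_kw / units_required

utilisation = load_per_unit / capacity_per_unit
print(f"each unit runs at {utilisation:.0%} of capacity")   # -> 38%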

Measured PUE

So, here’s how the PUE actually stacks up in nearly all small to medium data centres:

IT Load 1.0

UPS Losses (10-20%) 0.1 - 0.2

Cooling & Humidity 0.5 - 2.0

Odds & ends 0.1

Facility PUE 1.7 - 3.3
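The same arithmetic as the design case, applied to the measured component ranges above, gives that envelope. A small, purely illustrative sketch:

# Best and worst cases from the measured component ranges above.
ups_range = (0.1, 0.2)
cooling_range = (0.5, 2.0)
other = 0.1

best = 1.0 + ups_range[0] + cooling_range[0] + other
worst = 1.0 + ups_range[1] + cooling_range[1] + other
print(round(best, 1), "-", round(worst, 1))   # -> 1.7 - 3.3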

The 1MW conundrum

Even though chilled water is more efficient at scale, a medium-sized data centre that starts out with, say, 300kW of DX units will be so heavily tied in to DX technology by the time it has grown to 1MW that it is impractical to change, and it has to continue to accept the poorer efficiencies and poor PUE.

On the other side of the conundrum, if a new-build data centre opts for chilled water cooling from the start, it is going to spend most of its early years of operation with those giant units running at a fraction of their maximum load - once again operating well below their maximum design efficiency.

Latest developments

For data centres large, medium and small, there have been some significant advances:

EC Fans

In old installations the air handling fans only had one setting – full speed! Consequently these used masses of power. Early variable speed drives weren’t much better.

Today’s EC (electronically commutated) fans use one quarter of the energy at half speed. So as an EC fan runs slower, it uses much less energy, while also reducing noise and increasing operating life.
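The saving follows from the fan affinity laws: for a fixed system, airflow scales with speed and absorbed power scales roughly with the cube of speed, so even the conservative quarter-at-half-speed figure above leaves a large margin. A minimal sketch of the ideal-case relationship (the fan rating is an assumed value for illustration):

# Fan affinity laws: for a fixed system, flow ~ speed and power ~ speed cubed.
def fan_power_kw(rated_power_kw, speed_fraction):
    return rated_power_kw * speed_fraction ** 3

rated = 5.0   # kW absorbed at full speed - assumed rating
for fraction in (1.0, 0.75, 0.5):
    print(f"{fraction:.0%} speed -> {fan_power_kw(rated, fraction):.2f} kW")
# 50% speed gives ~0.63kW in the ideal case; real EC installations land closer to
# the one-quarter figure quoted above once motor and control losses are included.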

Insist on EC variable speed fans in any new installation and retrofit them to legacy cooling units as fast as you can.

Free cooling

Just to confuse us, the term ‘free cooling’ is also applied to ventilation (non-refrigeration) systems.

For most of the time, UK ambient air temperature is colder than that required in the data centre – so a simple ventilation system can maintain compliant conditions. This can remove the need for cooling for up to 95 per cent of the time in the UK and similar climates.

But a cooling system is still needed on hot days, and for these a very low-energy solution has been introduced: evaporative (or adiabatic) cooling. The combination of a ventilation system using EC fans and evaporative cooling provides a solution with a CoP of over 20.

A combination of ‘free cooling’ and adiabatic cooling can keep a data centre in the UK within acceptable ASHRAE conditions for 98% of the time at a fraction of the energy cost.
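That high CoP is simply the ratio of heat removed to electrical input. A back-of-envelope sketch using standard air properties (the airflow, temperature rise and fan power are assumptions for illustration, not EcoCooling data):

# CoP of a fresh-air ventilation system = heat removed / electrical input.
AIR_DENSITY = 1.2          # kg/m3, approximate
AIR_SPECIFIC_HEAT = 1.005  # kJ/(kg.K)

airflow_m3s = 8.0          # m3/s of supply air - assumed
delta_t = 10.0             # K temperature rise across the IT equipment - assumed
fan_power_kw = 4.0         # EC fan (plus pump) electrical input - assumed

heat_removed_kw = airflow_m3s * AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t
print(round(heat_removed_kw), "kW removed, CoP =", round(heat_removed_kw / fan_power_kw, 1))
# -> roughly 96kW removed for 4kW of fan power, a CoP of about 24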

Direct evaporative cooling

In most data centre locations, the airborne contaminant levels are sufficiently low that direct fresh air can be used. Even after passing through the evaporation process, the air is still well within the ASHRAE humidity guidelines and filtration can now be achieved to EU4 levels.
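The supply temperature from a direct evaporative cooler can be estimated from the wet-bulb depression and the pad's saturation effectiveness. A hypothetical sketch (the 85 per cent effectiveness and the design-day conditions are assumed values, not CREC performance data):

# Direct evaporative cooling: supply air approaches the wet-bulb temperature.
def evap_supply_temp(dry_bulb_c, wet_bulb_c, effectiveness=0.85):
    # effectiveness = fraction of the wet-bulb depression achieved by the pad
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A warm UK design day: 30C dry bulb, 20C wet bulb (assumed values).
print(evap_supply_temp(30.0, 20.0))   # -> 21.5C supply air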

Fresh air evaporative coolers such as our own CRECs (computer room evaporative coolers) are highly modular, from 35kW up to many MW, so there is no conundrum at the 1MW level - or any other.

CRECs can simply grow with the IT load - which is how brand-new data centres with only partial utilisation can open with an operating PUE of 1.2 and be sure it will remain in a 1.2 to 1.4 envelope as the IT load grows.

CRECs are also far more efficient at partial utilisation - with a 35kW module being appropriate for applications as low as 15kW.
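Matching modules to a growing load is then a simple sizing exercise. A sketch of the arithmetic (the 35kW module rating comes from the text; the load steps and N+1 policy are assumptions for illustration):

import math

# How many 35kW modules a growing site needs with N+1 redundancy.
MODULE_KW = 35

def modules_needed(it_load_kw, redundancy=1):
    return max(1, math.ceil(it_load_kw / MODULE_KW)) + redundancy

for load in (15, 100, 350, 1000):   # kW - assumed growth steps
    print(f"{load}kW IT load -> {modules_needed(load)} modules")
# 15kW -> 2, 100kW -> 4, 350kW -> 11, 1000kW -> 30 (including the +1 spare)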

Conclusion

The cooling of data centres need not be complicated or expensive. There is a range of options available to meet almost all operating, redundancy and efficiency requirements. Bear in mind at all times that the operating costs of fresh air systems can be 95 per cent less than those of a close-control, refrigeration-based system. The ASHRAE guide clearly states that there is a balance between energy use and equipment reliability.

End users can evaluate the different systems and, taking into account all of the stakeholder interests, make an informed decision on the appropriate cooling solution for their operation. I’d strongly recommend talking to manufacturers or consultants who, like ourselves at EcoCooling, are members of the Data Centre Alliance, to explore these different solutions.