Energy efficiency has not always been at the top of an information technology (IT) organisation’s priority list, but rising power costs, corporate decarbonisation commitments, booming data consumption and an ongoing need for more hardware and equipment are changing the way data centre operators plan and run their facilities.
Similarly, organisations are becoming more sophisticated in managing energy. Where energy was previously tackled through a number of individual initiatives, it is now common to see well-orchestrated energy strategies that take a holistic approach to technology, procurement and implementation.
Data centres are complex environments created to house IT equipment, and that equipment is the primary driver of energy consumption. The IT equipment supporting a data centre includes communication systems, storage systems and other IT systems such as processors, server power supplies, network infrastructure and hardware, computers, uninterruptible power supplies (UPS) and connectivity systems.
Most of the energy consumed within a data centre passes through several stages of distribution before it can be used by the IT systems, where it is ultimately converted to heat. This is why these facilities require a significant amount of cooling.
As server densities continue to rise, cooling systems are under increasing pressure to keep IT equipment and servers cool enough to operate efficiently. If temperatures or humidity levels are too high, IT equipment can be damaged and tape media errors can occur.
At the IT level, examples of energy saving opportunities include virtualisation and the use of ARM-based processors, which are designed around a smaller, simpler set of instructions so that they can operate at higher speed, providing strong performance at a fraction of the power. The technological development of both options is making them viable, but they are still outside the remit of most data centre developers.
Good practical management of data centre space remains a simple, basic way of reducing energy consumption. Using aisle containment systems, installing blanking panels in unused rack slots and fitting brushed grommets to raised-floor penetrations are all straightforward yet effective energy saving measures, but they are still overlooked in many smaller facilities.
Implementing aggressive power usage effectiveness (PUE) or NABERS Energy targets will also drive more energy saving initiatives and improvements within data centres. New facilities will find it easier to implement PUE targets as high efficiency equipment can be selected to reduce parasitic load requirements.
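As context for those targets, PUE is simply the ratio of total facility energy to the energy delivered to the IT equipment, with 1.0 as the theoretical ideal. A minimal sketch of the calculation, using hypothetical annual figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal; many legacy sites sit well above 1.5."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual meter readings (kWh), for illustration only.
print(round(pue(14_600_000, 10_000_000), 2))  # 1.46
```

The gap between the result and 1.0 represents the parasitic load (cooling, power conversion, lighting) that the initiatives discussed here aim to shrink.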
Implementing low PUE targets in existing facilities, through measures such as energy efficient lighting, is also achievable, but it takes more financial backing and careful planning to realise. When equipment needs to be replaced, more energy efficient options can also be chosen.
Lastly, machine learning has been applied in data centres to optimise cooling system setpoints for variable outside conditions. Optimum setpoints deliver a number of marginal energy gains that, when aggregated, can provide as much as 10-15 per cent energy savings on the cooling system.
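The idea can be sketched very roughly: a predictive model of cooling power (here a hypothetical stub standing in for a trained regression or neural network) is searched across allowable setpoints for the lowest predicted power at the current outside temperature. All function names and coefficients below are illustrative assumptions, not any operator's actual method:

```python
def cooling_power_kw(setpoint_c: float, outside_c: float) -> float:
    # Stand-in for a trained model: in practice a regression or neural
    # network fitted to plant telemetry. All coefficients are hypothetical.
    lift_term = 2.5 * max(0.0, outside_c - setpoint_c)  # chiller lift
    fan_term = 0.8 * (setpoint_c - 18.0) ** 2           # fan/pump penalty
    return 60.0 + lift_term + fan_term

def best_setpoint(outside_c: float, lo: float = 16.0, hi: float = 24.0) -> float:
    # Exhaustive search over allowable supply setpoints in 0.5 degC steps.
    candidates = [lo + 0.5 * i for i in range(int((hi - lo) / 0.5) + 1)]
    return min(candidates, key=lambda sp: cooling_power_kw(sp, outside_c))

print(best_setpoint(15.0), best_setpoint(25.0))  # 18.0 19.5
```

In a real deployment the model would be retrained on telemetry and the chosen setpoint re-evaluated continuously as weather and IT load change; that continuous trimming is where the marginal gains accumulate.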
Free cooling opportunities are possible in many locations, especially if the supply air temperature is in line with the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) guidelines (18°C-27°C).
With supply air temperatures of up to 27°C, outside air temperatures of 25°C or less are needed to gain significant benefits from free cooling. Data centre managers then need to decide whether to use direct or indirect free cooling. Indirect free cooling, achieved via a heat wheel or heat exchanger, is an attractive option because the outside air never mixes with the data hall air, so outside air contaminants and humidity levels do not restrict its use.
There are definitely more opportunities to use this type of indirect free cooling in certain areas, particularly where the temperature falls below 19°C and the humidity is below 60 per cent relative humidity (RH) for more than 2 500 hours per year.
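That rule of thumb can be checked against a year of hourly weather data. A minimal sketch, where the function name and toy climate data are assumptions for illustration only:

```python
def free_cooling_viable(hourly_weather, temp_limit_c=19.0,
                        rh_limit_pct=60.0, min_hours=2500):
    """hourly_weather: iterable of (temp_c, rh_percent) tuples for one year.
    Applies the rule of thumb from the text: cool, dry hours must exceed
    min_hours for indirect free cooling to be clearly worthwhile."""
    hours = sum(1 for t, rh in hourly_weather
                if t < temp_limit_c and rh < rh_limit_pct)
    return hours, hours >= min_hours

# Toy climate: every night (8 h/day) is cool and dry, days are warm/humid.
toy_year = [(15.0, 50.0)] * (8 * 365) + [(26.0, 70.0)] * (16 * 365)
print(free_cooling_viable(toy_year))  # (2920, True)
```

In practice the same screen would be run against a typical meteorological year dataset for each candidate site.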
Operators still question the efficiency of their data centres when they walk into a hot aisle. This perception, however, is gradually changing, and people are becoming used to the idea that a hot aisle isn’t necessarily a problem.
Warmer data centres do pose a health and safety concern, because staff cannot work in elevated temperatures for extended periods. This can be managed by limiting the need to access the hot aisle, either by using chimney-type racks or by arranging all connections and operator work to be in the cold aisle.
Elevated temperatures require some form of aisle containment to achieve optimal efficiency, and this can cause problems for code compliance: enclosed aisles add an extra layer of infrastructure for sprinkler and gas suppression systems, with the associated costs.
It is possible for some users to schedule key processing tasks to run on an overnight cycle; however, this is limited by the business type and isn’t a workable solution for most operators. Other options to consider include:
Generally, the space available at a data centre for a solar photovoltaic (PV) array could only power a fraction of the facility’s overall energy consumption; a very large number of panels would be needed to noticeably reduce the electricity most data centres draw from the grid.
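A back-of-envelope sizing shows why rooftop PV covers only a fraction of the load. The panel density and specific yield figures below are assumed, typical-order values, not figures from this article:

```python
HOURS_PER_YEAR = 8760

def pv_coverage_fraction(roof_area_m2: float, avg_dc_load_kw: float,
                         kwp_per_m2: float = 0.2,
                         yield_kwh_per_kwp: float = 1400.0) -> float:
    """Rough share of annual energy a rooftop array could supply.
    Panel density (kWp/m2) and specific yield (kWh/kWp/yr) are
    assumed, typical-order values."""
    annual_pv_kwh = roof_area_m2 * kwp_per_m2 * yield_kwh_per_kwp
    annual_load_kwh = avg_dc_load_kw * HOURS_PER_YEAR
    return annual_pv_kwh / annual_load_kwh

# A 2 000 m2 roof against a steady 1 MW total facility load:
print(round(pv_coverage_fraction(2000, 1000), 3))  # 0.064
```

Under these assumptions the roof supplies only about six per cent of annual consumption, which is why large operators turn to off-site solar farms instead.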
Although solar energy could supply a data centre with energy, generation would need to be ramped up considerably before it could be usable by the UPS.
Big operators like Google, however, are making use of solar energy by establishing solar generation plants that offset their data centres’ usage on the grid. Small panel arrays coupled with battery storage could also be used to reduce non-critical parasitic loads on site, such as fuel polishing, engine heaters, office air conditioning and lighting.
Other suppliers are looking at locating data centres at solar farms, gaining full advantage of direct renewable energy consumption linked to attractive power purchase agreements or even diversifying investment into the renewable energy market.
We are in the ideal position to start designing and developing more sustainable facilities. For example, Google, which owns and operates 15 data centres on four continents, aims to reduce energy use by building and operating the world’s most energy-efficient data centres. The company achieved its 100 per cent renewable energy target by buying renewable electricity directly from wind and solar farms and buying renewable power through utilities.
Companies like Google are always looking for a competitive edge. They are seeking smarter solutions in their engineering for a variety of things including data centres, corporate headquarters and research and development facilities.
There are a growing number of operators more willing to tackle sustainability challenges head-on and incorporate more progressive solutions into their data centre designs and development.
Adolfo Fernandez Benito is a Technical Director and Aurecon’s Power Generation Practice Leader in Victoria and South Australia. He is passionate about leading the transition towards a renewable energy future, focusing on co-creating energy strategies that reduce energy consumption and costs and increase the renewable energy mix.