Best Practices – Data Center and POP Power and Cooling

1. Measuring Performance

The first step in improving efficiency in a POP or data center is to track energy usage over time, concentrating on two metrics:

IT equipment energy: The energy consumed by servers, storage, and networking devices, the machines that perform the actual IT work.

Facility overhead energy: This is the energy used by everything else in the facility, such as power distribution, cooling, and lighting.

The metric used to compare these two types of energy is power usage effectiveness, or PUE.

PUE = (IT Equipment Energy + Facility Overhead Energy) / IT Equipment Energy

For PUE to be meaningful, it must be measured over an extended period; look at both quarterly and twelve-month performance. Snapshots covering only a few hours are of little use in driving significant energy savings.
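As a quick illustration of the formula, the sketch below computes PUE from metered energy totals. The kWh figures are hypothetical; only the formula itself comes from the definition above.

```python
def compute_pue(it_equipment_kwh: float, facility_overhead_kwh: float) -> float:
    """PUE = (IT equipment energy + facility overhead energy) / IT equipment energy."""
    return (it_equipment_kwh + facility_overhead_kwh) / it_equipment_kwh

# Hypothetical twelve-month energy totals (kWh) for a small POP.
it_kwh = 1_200_000       # servers, storage, networking
overhead_kwh = 480_000   # cooling, power distribution, lighting

print(f"PUE = {compute_pue(it_kwh, overhead_kwh):.2f}")   # -> PUE = 1.40
```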

2. Optimize Air Flow

In a conventional data center, the IT equipment is arranged in rows, with a “cold aisle” in front, where cold air enters the racks, and a “hot aisle” in the back, where hot air is expelled. Computer room air conditioners (CRACs) pump cold air into the cold aisle; the air passes through the computing and network equipment and returns to the CRACs. Cooling is the main source of facility overhead energy.

Preventing hot and cold air from mixing is the most important step in optimizing airflow. There is no one-size-fits-all approach: simple, creative ways to block and redirect air can drastically reduce the amount of cooling needed. Examples include installing blanking panels in vacant rack slots and tightly sealing gaps in and around the rows of machines. It’s similar to weatherizing your home.

It is also necessary to eliminate hot spots in order to establish a more uniform thermal profile. Localized hot spots are a risk to the machines themselves and cause CRACs to switch on unnecessarily. Computer modeling and careful placement of temperature monitors make it possible to locate and eliminate hot spots quickly.
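As a minimal sketch of that idea, the snippet below flags rack-inlet readings above an assumed cold-aisle limit. The rack names, temperatures, and the 27 °C threshold are illustrative assumptions, not measured data.

```python
# Flag potential hot spots from rack-inlet temperature readings.
# Rack names, readings, and the 27 °C threshold are illustrative assumptions.
COLD_AISLE_MAX_C = 27.0

readings_c = {
    "rack-A01": 23.5,
    "rack-A02": 24.1,
    "rack-B07": 29.8,   # localized hot spot
    "rack-B08": 26.4,
}

hot_spots = {rack: t for rack, t in readings_c.items() if t > COLD_AISLE_MAX_C}
for rack, temp in sorted(hot_spots.items(), key=lambda kv: -kv[1]):
    print(f"{rack}: {temp:.1f} °C exceeds the {COLD_AISLE_MAX_C} °C cold-aisle limit")
```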

3. Turn up the Thermostat

IT equipment has long been thought to work best at low temperatures of 15°C/60°F to 21°C/70°F. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends cold aisle temperatures of up to 27°C/81°F, which we’ve found safe for equipment. Most IT equipment manufacturers rate their hardware for operation at 32°C/90°F or higher, so there is plenty of safety margin. Furthermore, most CRACs are programmed to dehumidify the air to a relative humidity of 40% and to reheat the air if the return air is too cold. Significant energy savings can be achieved by raising the temperature and turning off dehumidification and reheating.

Raising the cold aisle temperature lets CRACs run more efficiently at higher intake temperatures. If the facility uses air-side or water-side economization, it also yields more days of “free cooling” during which mechanical cooling is not required.

Simply raising the temperature in a single 200 kW networking room from 22°C/72°F to 27°C/81°F can save tens of thousands of dollars in annual energy costs.
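The arithmetic behind such an estimate might look like the back-of-envelope sketch below. The savings rate per degree, the baseline PUE, the cooling share, and the electricity price are all assumptions chosen for illustration, not measured values.

```python
# Rough estimate of annual savings from raising the cold-aisle setpoint.
# Every constant here is an assumption chosen only to illustrate the arithmetic.
it_load_kw = 200.0            # networking room IT load (from the example above)
baseline_pue = 1.6            # assumed PUE before the change
cooling_fraction = 0.8        # assumed share of overhead energy spent on cooling
savings_per_degree_c = 0.04   # assumed cooling-energy reduction per °C of setpoint increase
delta_c = 27 - 22             # setpoint raised from 22 °C to 27 °C
usd_per_kwh = 0.12            # assumed electricity price

overhead_kw = it_load_kw * (baseline_pue - 1.0)
cooling_kw = overhead_kw * cooling_fraction
saved_kw = cooling_kw * savings_per_degree_c * delta_c
annual_usd = saved_kw * 24 * 365 * usd_per_kwh
print(f"Estimated annual savings: ${annual_usd:,.0f}")   # roughly $20,000 with these assumptions
```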

4. PDU Daisy Chaining

Most current, best-of-breed power distribution units (PDUs) support daisy chaining. Previously, each PDU required its own network port; by linking multiple PDUs together with Ethernet cables, four PDUs can share a single IP address. This reduces the network infrastructure needed to connect PDUs and makes it considerably easier to organize power infrastructure into clusters.

Warning: make sure the PDU you buy is fault-tolerant. In other words, if one unit in the chain fails, the other PDUs connected to it remain operational. This also means that maintenance on one PDU will not affect the others to which it is linked.
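A monitoring script walking such a chain might look roughly like the sketch below, assuming a shared IP address and a hypothetical REST endpoint. The address, path, and JSON fields are made up; real PDUs expose vendor-specific SNMP or REST interfaces, so adapt this to your hardware.

```python
# Poll every unit in a daisy-chained PDU group through its shared IP address.
# The address, REST path, and JSON fields are hypothetical; adapt them to
# whatever SNMP or REST interface your PDU vendor actually provides.
import json
import urllib.request

CHAIN_HOST = "10.0.40.15"   # hypothetical shared address for the linked PDUs

def read_pdu_power(unit_id: int) -> dict:
    url = f"http://{CHAIN_HOST}/api/pdu/{unit_id}/power"   # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

for unit in range(1, 5):   # four PDUs sharing one IP, as described above
    try:
        data = read_pdu_power(unit)
        print(f"PDU {unit}: {data.get('watts', 'n/a')} W")
    except OSError as err:
        # With a fault-tolerant chain, one failed unit should not hide the rest.
        print(f"PDU {unit}: unreachable ({err})")
```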

5. Color-Coded PDUs

Investing in color-coded PDUs is one of the simplest things any data center manager can do for the power infrastructure. Colored labels designate the power feeds, and color-coded, locking receptacles categorize the sub-circuits further. As new equipment is added, the risk of shorting or overloading a power feed drops, because technicians can see at a glance roughly how much electricity is flowing through a given circuit or sub-circuit.

6. Remote Power Monitoring

Colocation facilities require utility-grade power metering to bill clients accurately. Remote power monitoring gathers this data and sends it to a preset endpoint or endpoints. Beyond more accurate billing, remote power monitoring helps uncover inefficient equipment that needs to be replaced, as well as zombie servers, which each year in the U.S. consume enough energy to keep three power plants running.
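One small, hypothetical example of what that data makes possible: power draw that barely changes over a week can hint at an idle “zombie” machine. The hostnames, wattages, and 5% variation threshold below are assumptions for illustration only.

```python
# Flag candidate zombie servers: machines whose power draw barely varies,
# which often suggests they are powered on but doing no useful work.
# Hostnames, readings, and the 5 % threshold are illustrative assumptions.
from statistics import mean, pstdev

weekly_watts = {
    "web-01":    [310, 250, 420, 305, 380, 270, 350],
    "batch-02":  [640, 120, 710, 480, 655, 90, 700],
    "legacy-07": [142, 141, 143, 142, 141, 142, 143],   # flat draw all week
}

for host, samples in weekly_watts.items():
    avg = mean(samples)
    variation = pstdev(samples) / avg
    if variation < 0.05:
        print(f"{host}: avg {avg:.0f} W, draw varies only {variation:.1%} -> possible zombie server")
```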

7. PDU Environmental Intelligence

Power monitoring isn’t the only kind of audit a best-of-breed PDU can perform. To begin with, the ability to quickly swap out the intelligence unit on a PDU ensures that your power monitoring stays current. Interchangeable monitoring devices (IMDs) can also host environmental monitors that measure humidity and temperature in real time. Combined with remote power monitoring, these give data center operators a complete picture of conditions at their PDUs.
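A small sketch of combining the two data streams follows, assuming per-PDU snapshots that carry power plus IMD temperature and humidity readings. The PDU names, readings, and alert bands are illustrative assumptions, not recommended limits.

```python
# Check combined PDU snapshots (power + IMD temperature/humidity) against alert bands.
# The PDU names, readings, and alert bands are illustrative assumptions only.
ALERT_BANDS = {"temp_c": (18.0, 27.0), "humidity_pct": (20.0, 80.0)}

pdu_snapshots = [
    {"pdu": "pdu-a1", "watts": 3400, "temp_c": 24.2, "humidity_pct": 45.0},
    {"pdu": "pdu-b3", "watts": 5100, "temp_c": 29.6, "humidity_pct": 18.0},
]

for snap in pdu_snapshots:
    for metric, (low, high) in ALERT_BANDS.items():
        value = snap[metric]
        if not low <= value <= high:
            print(f"{snap['pdu']}: {metric} = {value} is outside the {low}-{high} band")
```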

8. Treat your UPS like a VIP

Your uninterruptible power supply (UPS) serves as the last line of defense between business continuity and the dreaded downtime. It doesn’t take much to throw a company’s operations into disarray. Regardless of how often you utilize the backup power source, make sure you implement remote power and environmental monitoring for your UPS room. Also, make sure you invest in dependable transfer switches since these will keep your equipment functioning smoothly during a UPS switchover.