In fact, as IBM officials said in a press statement, the honor represents “the largest portfolio of data centers from a single company to receive the recognition.” The idea, officials said, is to use established best practices to reduce energy consumption “in a cost-effective manner without decreasing mission critical data center functions.”
Great. What might those be?
Speaking broadly, IBM officials rattled off a list of general areas where one can find energy efficiencies in data centers — energy-efficient hardware, free cooling, cold aisle containment and the like.
A bit more specifically, IBM officials said, one factor that weighed heavily in their winning the EU award is that many of their data centers support cloud computing. This isn’t only about saving energy, of course; the cloud is in high demand these days for its efficiency, flexibility, profitability and other common-sense business reasons.
Analytics are a huge part of IBM’s energy-saving success. The company uses Mobile Measurement Technology, an in-house product of IBM Research that deploys “thousands of sensors to record and analyze temperatures and air flow to detect hot and cold spots,” company officials explain. That insight into energy flow gives IBM the intelligence to “efficiently cool data centers with a high measure of security and reliability and significant reduction in cost.”
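IBM hasn’t published MMT’s internals, but the underlying idea, flagging racks whose inlet temperatures drift outside a recommended band, can be sketched simply. In the minimal sketch below, the rack names, readings and thresholds are all assumptions for illustration:

```python
# Hypothetical hot/cold-spot check on rack inlet temperatures.
# Not IBM's Mobile Measurement Technology; names and limits are assumed.
from statistics import mean

readings = {  # rack id -> inlet temperatures (C) from one polling cycle
    "rack-A1": [24.1, 24.6, 25.0],
    "rack-A2": [31.8, 32.4, 33.1],  # running hot
    "rack-B1": [17.2, 17.9, 18.1],  # likely overcooled
}

HOT_LIMIT = 27.0   # assumed upper bound of the recommended inlet envelope
COLD_LIMIT = 18.0  # below this, cooling energy is probably being wasted

for rack, temps in readings.items():
    avg = mean(temps)
    if avg > HOT_LIMIT:
        print(f"{rack}: hot spot at {avg:.1f} C, check cool-air delivery")
    elif avg < COLD_LIMIT:
        print(f"{rack}: cold spot at {avg:.1f} C, cooling can likely be trimmed")
```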
The company also believes in replacing older hardware with more energy-efficient servers and in consolidating: fewer, more efficient servers mean lower energy usage.
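The arithmetic behind consolidation is easy to check on the back of an envelope. All the figures below are illustrative assumptions, not IBM’s numbers:

```python
# Back-of-envelope consolidation math; every figure here is an assumption.
old_servers = 100
old_watts_each = 450   # average draw of an aging server
new_servers = 25       # assumes a 4:1 consolidation ratio
new_watts_each = 300   # more efficient replacement hardware

hours_per_year = 24 * 365
old_kwh = old_servers * old_watts_each * hours_per_year / 1000
new_kwh = new_servers * new_watts_each * hours_per_year / 1000
print(f"Annual savings: {old_kwh - new_kwh:,.0f} kWh "
      f"({1 - new_kwh / old_kwh:.0%} reduction)")
```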
So that’s Big Blue’s overall approach. Solid, basic principles at work: Use the most energy-efficient servers you can find because they’ll save you money in the long run; consolidate your server needs; use analytics to find where you can cut down on costs within the data center itself; and take advantage of cloud computing where possible.
Bully for IBM. Does anybody else use a different approach?
The Federal Energy Management Program (FEMP) issued a white paper titled “Best Practices Guide For Energy-Efficient Data Center Design” in March 2011. It addresses energy efficiency across the enterprise, breaking its recommendations into seven areas.
Information Technology (IT) Systems
This is a good place to start because “IT equipment loads can account for over half of the entire facility’s energy use.” The white paper identifies rack servers as a major culprit: they account for “the largest portion of the IT energy load in a typical data center,” take up lots of space, and draw full power even when running at 20 percent utilization or lower, which according to the paper is, in fact, most of the time.
The FEMP recommends looking for servers with variable speed fans, which can adjust airflow to how much cooling the server actually needs. Throttle-down devices are helpful as well, reducing energy consumption on idle processors via “power management.” Use multi-core processor chips where possible, and consolidate your IT system redundancies — “consider one power supply per server rack” instead of a power supply for each server.
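To see what power management is worth, here’s a rough sketch of the idle-power savings it can capture. The wattages and the linear power model are assumptions for illustration, not figures from the FEMP paper:

```python
# Rough illustration of why throttling down idle processors matters.
# All wattages and the linear power model are assumed.
peak_watts = 400
utilization = 0.20   # the low utilization level the FEMP paper describes
hours = 24 * 365

# Without power management: draw stays near peak regardless of load.
no_pm_kwh = peak_watts * hours / 1000

# With power management: assume idle draw falls to half of peak and
# scales linearly up to peak at full utilization.
idle_watts = 0.5 * peak_watts
pm_watts = idle_watts + (peak_watts - idle_watts) * utilization
pm_kwh = pm_watts * hours / 1000

print(f"Per server, per year: {no_pm_kwh:.0f} kWh vs {pm_kwh:.0f} kWh "
      f"({no_pm_kwh - pm_kwh:.0f} kWh saved)")
```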
Grouping equipment with similar heat load densities and temperature requirements means you can cool them more efficiently, the paper says, pointing to virtualization as another way to find efficiency.
Environmental Conditions
Yes, these matter. The FEMP points to the “environmental envelopes” for IT equipment inlet air recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers and the Network Equipment Building System standard. Followed correctly, these recommendations can help reduce overall energy consumption, and they’re presented in the paper with cool charts and graphs we really can’t do justice to here.
But bear in mind that variable speed fans in servers are guided by internal server temperature, so if your data center’s using inlet air conditions higher than what’s recommended, well, the fans aren’t going to do the best job they can saving you money.
Air Management
Another important yet frequently overlooked area. Basically, this refers to configuring the center to eliminate as much mixing of cool and hot air as possible. Operating costs are lower when the hot air expelled from the equipment isn’t recirculated back to the machines, and the cooling air is delivered to the servers as efficiently as possible.
No, it’s not a horribly sexy aspect of data center efficiency, but the money you save is.
The paper notes that cable congestion reduces total air flow and allows hot spots to develop. It recommends under-floor clearance of at least two feet for raised-floor installations and a “cable management strategy” to minimize air flow obstructions, possibly including a cable mining program to remove abandoned or inoperable cables. Aisle separation is a good idea too, with cool-air aisles on one side of a row of servers and hot on the other. Those flexible plastic strips you see at supermarket refrigeration sections can really help here.
Cooling Systems
Probably one of the first things you thought of when you thought of data center energy efficiency, but as we hope you’ve seen by now, other considerations play a considerable part. The most common choice for smaller data centers is a direct expansion (DX) system, with computer room air conditioner (CRAC) units readily available off the shelf. Rooftop units are inexpensive and work well, too.
Central air handler systems provide better performance, the paper notes, observing that they can improve efficiency by taking advantage of surplus and redundant capacity.
Chilled water systems are another option; the FEMP recommends a high-efficiency chiller equipped with a variable frequency drive (VFD) and condenser water reset as “the most efficient cooling option for large facilities.”
There’s much more in the paper about other options for cooling systems.
Electrical Systems
Keep in mind both initial and future loads here, the FEMP white paper warns, adding that efficiencies can range widely from manufacturer to manufacturer. Use uninterruptible power supply (UPS) systems for backup power, and for maximum efficiency determine exactly which equipment actually needs UPS protection and which doesn’t.
Demand response means voluntarily lowering energy usage during peak demand, and your utility will probably offer an incentive to sign up for such a program. Many companies simply switch to backup power during peak times and pocket the savings from avoiding peak rates.
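A simplified sketch of those economics, with the rates, hours and generation cost all assumed for illustration:

```python
# Simplified demand-response savings estimate; every figure here is assumed.
peak_rate = 0.30        # $/kWh during utility peak windows
generation_cost = 0.15  # $/kWh to run backup generation (fuel, maintenance)
load_kw = 500           # facility load shifted onto backup power
peak_hours_per_year = 200

# Running on backup power during peak windows avoids the peak premium.
savings = load_kw * peak_hours_per_year * (peak_rate - generation_cost)
print(f"Estimated annual savings: ${savings:,.0f}")
```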
Using DC power distribution saves on conversion losses, but it’s expensive to install since it’s still not widely used. And consider the savings you can find with lighting: think about what space really needs to be illuminated all day and what space doesn’t. Zone occupancy sensors can really help you reduce your lighting costs and overall energy costs.
Other Energy-Efficient Design Opportunities
The FEMP paper provides a few more things to think about:
- On-Site Generation. With a constant electrical demand, this option can make sense as an alternative to grid power. Some places let you sell self-generated power back to the grid, which helps offset the capital expense.
- Co-Generation Plants. These use a power station or similar technology to produce electricity on site, while the waste heat can run a chiller to provide cooling.
- Standby Losses. Reduce these where you can; waste heat from the data center can minimize the losses from standby generator block heaters. Here’s one place solar panels might make sense.
- Waste Heat. This can be used to provide cooling — nifty irony there, no? Done correctly, the FEMP says, using absorption or adsorption chillers, your chilled water plant energy costs can be cut by at least 50 percent. Adsorption chillers require less maintenance than absorption models, but are new to the U.S. market.
Data Center Metrics and Benchmarking
You do this to track performance and see where you can find improvements. The paper provides links to various benchmarks.
Measuring Power Usage Effectiveness (PUE) and Data Center Infrastructure Efficiency (DCiE) is a good place to begin benchmarking. As the paper notes, they don’t represent the entire, overall efficiency of your whole data center, but rather the “efficiency of the supporting equipment within a data center.” Which is still quite a lot.
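Both metrics are simple ratios: PUE is total facility energy divided by IT equipment energy, and DCiE is its reciprocal. A quick sketch, with made-up meter readings:

```python
# Standard PUE / DCiE ratios; the meter readings below are made up.
total_facility_kwh = 1_800_000  # everything behind the utility meter
it_equipment_kwh = 1_000_000    # servers, storage and network gear only

pue = total_facility_kwh / it_equipment_kwh   # lower is better; 1.0 is ideal
dcie = it_equipment_kwh / total_facility_kwh  # higher is better; equals 1/PUE
print(f"PUE = {pue:.2f}, DCiE = {dcie:.0%}")
```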
Energy Reuse Effectiveness is another productive benchmark, as are the Rack Cooling Index and Return Temperature Index, your Heating, Ventilation and Air-Conditioning system effectiveness, Airflow Efficiency, and Cooling System Efficiency.
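Of these, Energy Reuse Effectiveness is the simplest to illustrate: it’s total facility energy minus the energy reused elsewhere (waste heat piped to office heating, say), divided by IT equipment energy. A minimal sketch with illustrative readings:

```python
# Energy Reuse Effectiveness (ERE): (total energy - reused energy) / IT energy.
# Readings are illustrative.
total_kwh = 1_800_000
reused_kwh = 300_000    # e.g., waste heat exported to heat office space
it_kwh = 1_000_000

ere = (total_kwh - reused_kwh) / it_kwh
print(f"ERE = {ere:.2f}")  # with no reuse, ERE equals PUE
```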
On-Site Monitoring and Continuous Performance Measurement is an important area to benchmark, and the paper provides resources to assist with this as well.
Adapted from http://news.thomasnet.com