Tuesday, May 14, 2013

How to Upgrade Your Data Center and Critical Facilities


An aging data center may no longer be able to meet the power, cooling and structural demands of advancing technologies, but few businesses have the time or the capital to build new facilities.

Fortunately, organizations can extend the working life of a data center by renovating the facility, often with changes that cost little to nothing. Upgrades let a business adopt new standards, improve existing infrastructure and introduce new technologies with better performance and greater efficiency.

Several design changes can extend the life of your data center and critical facilities:


(1) Elevate your data center temperature



The data center's working temperature has long been a subject of myth and legend, but research and initiatives from industry organizations such as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) have found that data centers don't need to be cooled like meat lockers. Modern servers and other computing equipment can operate reliably at elevated temperatures.

ASHRAE's 2008 guidelines recommended a temperature range of 65 to 80 degrees Fahrenheit for Class 1 data center equipment. The 2011 update broadened the allowable range to 59 to 90 degrees Fahrenheit for enterprise-class servers, and to 41 to 113 degrees Fahrenheit for appropriately designed servers and other equipment.
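For readers who work in Celsius, the Fahrenheit figures above convert as follows. A quick sketch; the range labels are my own shorthand for the guidelines quoted above:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# ASHRAE ranges quoted above, in degrees F (labels are informal shorthand)
ranges = {
    "2008 recommended (Class 1)": (65, 80),
    "2011 allowable (enterprise-class)": (59, 90),
    "2011 allowable (extended)": (41, 113),
}

for label, (low, high) in ranges.items():
    print(f"{label}: {f_to_c(low):.1f} to {f_to_c(high):.1f} C")
```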

The extended temperature range also makes it possible to adopt alternative or supplemental cooling schemes (at least during certain parts of the day), such as free-air or air/water economizers -- cooling technologies that might not even have been considered when your data center was first built.


(2) Upgrade servers and systems for better consolidation and efficiency



Servers consume the majority of energy in a data center -- primarily in the processors and memory. Organizations can gain significant energy efficiency by upgrading to more efficient models during normal technology refresh cycles, where capital is already budgeted. Newer servers also offer far more memory, allowing a virtualized host to achieve much higher consolidation ratios than earlier machines.

This means the same amount of computing work can be done with far fewer servers, saving equipment capital and generating only a fraction of the heat for a data center's cooling system to contend with.


(3) Change the system layout and rack layout for power and cooling efficiency


This is largely a matter of hot-aisle/cold-aisle arrangement.

Suppose you had a traditional data center where a large computer room air-conditioning unit (CRAC) cooled the room. Now imagine that a server refresh and consolidation project slashed the number of servers by 75%. With just a quarter of the original server count in this example, it may be possible to rearrange the remaining servers in far fewer racks and use containment to enclose the remaining servers. This limits the air volume that must be cooled, significantly reducing the amount of mechanical cooling needed and allowing for alternative cooling technologies.
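The arithmetic behind such a consolidation can be sketched as follows. Every figure here is a hypothetical assumption for illustration, not a number from this article:

```python
# Back-of-the-envelope consolidation estimate.
# All figures are illustrative assumptions, not measurements.

servers_before = 200        # original server count (assumed)
reduction = 0.75            # 75% reduction, as in the example above
watts_per_server = 400      # assumed average draw per server, in watts

servers_after = int(servers_before * (1 - reduction))
it_load_before_kw = servers_before * watts_per_server / 1000
it_load_after_kw = servers_after * watts_per_server / 1000

print(f"Servers: {servers_before} -> {servers_after}")
print(f"IT load: {it_load_before_kw:.0f} kW -> {it_load_after_kw:.0f} kW")
# Nearly all IT power ends up as heat, so the cooling load
# falls roughly in step with the IT load.
```

With these assumed numbers, a quarter of the servers means a quarter of the IT load -- and roughly a quarter of the heat the cooling system must remove.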

In other cases, under-floor cooling can be made more effective by reworking the electrical cabling, network cabling and water lines that run below the floor.

A poorly designed and haphazard layout can obstruct cooling air distribution, making more work for the mechanical cooling unit. In addition, any water distribution increases the potential for damage to electrical and network wiring, so many organizations opt to route electrical and network wiring overhead -- leaving water lines under-floor -- and may even upgrade network cabling to allow for future bandwidth improvements.

Don't overlook the rack space itself. For example, fully populating racks concentrates more equipment in less space, making any containment -- and the associated cooling -- more effective. Some racks, however, may not be deep enough to accommodate new generations of computing equipment, which can lead to wiring congestion and airflow problems.


(4) Consider supplemental or alternative cooling schemes



Mechanical heating, ventilation and air conditioning (HVAC) systems are a staple of the modern data center, but they are also costly, energy-hungry and a potential single point of failure for data center availability. If the cooling system fails, a data center can overheat in a matter of minutes.

Data center renovations often focus on ways to supplement or replace traditional mechanical cooling with alternative equipment or methods that are enabled by higher operating temperatures, better containment and less equipment.

Popular alternative cooling approaches include chilled-water heat exchangers (water economizers), evaporative cooling and even free-air cooling (air economizers).

These methods, however, require affordable environmental resources that are suited to the task and available for much of the day. For example, using cold lake water to drive a water economizer requires a nearby lake. In many cases, these alternative methods are added to supplement traditional HVAC, lowering run times and power needs.

Organizations that must continue using HVAC are taking a fresh look at the cooling system's capacity and efficiency. The catch is that a large, aging HVAC system runs even less efficiently at partial load; simply easing the cooling demand on a legacy system might actually cost more and be harder on the mechanical plant.

For these organizations, raising operating temperatures and reducing the amount of computing equipment may instead justify replacing the oversized unit with a smaller, right-sized cooling system.


(5) Consider availability and reliability issues in power distribution


Upgrading the uninterruptible power supply (UPS) systems to a newer model can improve UPS energy efficiency and provide more intelligent power monitoring and measurement capabilities that complement a data center infrastructure management scheme.

When a UPS is replaced, the new unit is typically more efficient and may be deployed in a redundant (N+1) configuration, possibly even as a modular or incremental-capacity solution. Power equipment upgrades may also spawn broader wiring and distribution upgrades in older buildings.

It is also a common practice to upgrade in-rack power distribution units (PDUs) to add intelligent power management, along with rack temperature and humidity monitoring. With UPS and PDU upgrades together, an organization can gather energy use data and make more informed decisions about power costs in the data center.
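One common figure derived from such measurements is Power Usage Effectiveness (PUE): total facility power divided by IT equipment power, where 1.0 would mean every watt goes to computing. A minimal sketch, using hypothetical meter readings:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean all power reaches the IT equipment;
    higher values indicate overhead (cooling, power conversion, lighting).
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical readings from upgraded UPS/PDU monitoring (assumed values)
total_kw = 300.0   # utility feed to the whole facility
it_kw = 180.0      # sum of in-rack PDU readings

print(f"PUE = {pue(total_kw, it_kw):.2f}")
```

Tracking this ratio before and after an upgrade gives a simple, comparable measure of how much the renovation actually improved efficiency.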


(6) Finally, consider the availability of data center power


Organizations with aging, unreliable or overtaxed power grids may consider local co-generation options to ensure uninterrupted power. Traditional diesel generators are quickly giving way to more efficient and environmentally friendly alternatives, including solid oxide fuel cells such as Bloom Energy Servers, or solar arrays that produce some amount of local electricity. If on-site co-generation is not possible, it may be possible to contract with regional co-generation providers for supplemental electricity.



About The Blogger

Strategic Media Asia (SMA, www.stmedia-asia.com) is a leading technical training and event organizer for corporations, specializing in data center design & build, E&M facilities, telecom, ICT, finance and colocation. SMA currently delivers a series of data center training and qualification programs in Hong Kong, Taiwan and Macau.

All these events and training seminars are designed to support the leadership needs of senior executives (Chief Information Officers, IT Directors/Managers, Facilities Managers, company decision makers, etc.) and to provide useful, applicable knowledge.