Merry Christmas & Happy Holidays!
Best Practices for Critical Facilities Design, Efficiency and Operations
Thursday, December 22, 2016
Season's Greetings
Warmest thoughts and best wishes for a wonderful new year. May peace, love and prosperity stay with you throughout 2017.
Tuesday, December 20, 2016
Control Systems for Data Centers
System uptime is the crucial objective of data center operations, and design topologies and attributes for critical network, electrical, and mechanical systems are specified to attain this availability. If the control system does not respond quickly and appropriately, a data center may experience a rapid and destructive failure - even if redundant chillers, air handlers and power sources have been installed.
Yet in spite of these stringent requirements and the serious consequences of failure, most data centers are built with the same commercial DDC (Direct Digital Control) style control systems used in office buildings. This is in contrast to other mission-critical environments (semiconductor cleanrooms, pharmaceutical labs), where industrial controls, such as PLCs (Programmable Logic Controllers) or even Distributed Control Systems (DCS), perform many of the same functions.
We are going to provide an overview of the main areas where industrial and commercial style controls differ, and help data center owners and system designers understand the value to be gained from industrial PLC control systems.
PLC systems offer more robust options
Compared to commercial systems, industrial control systems feature more accurate and rugged sensors and devices, signal types and wiring methods. Industrial controllers are more robust, have higher performance, faster networks and more flexible programming capability. Redundancy options with industrial controls can address the most difficult control issues without relying on "passive automation."
Passive automation involves providing distributed control in which small, inexpensive controllers are dedicated to individual machines or processes. In this case, the loss of a single controller cannot shut down the entire facility, because redundant pieces of equipment are installed, each with its own controller.
Commercial systems typically use a mix of "unitary" controllers to control a single piece of equipment, with larger building controllers used for facility-wide programming tasks or monitoring general I/O points. Industrial systems use PLCs, which also come in a range of sizes and intended applications. The differences between these controllers can be discussed in terms of form factor and physical robustness, I/O type and capacity, and processor programming capability and flexibility.
Performance, flexibility and higher cost characterize PLC systems
The difference between PLC and DDC programs is essentially one of flexibility. The programming functions in a PLC are more numerous and powerful. There is a richer instruction set for math, logic and bit manipulation. Many PLCs allow encapsulation of instructions to create user-defined function blocks. This is a powerful tool that sophisticated users leverage to create simple, re-usable code. These differences allow creation of more sophisticated and powerful programs. Finally, modification of PLC programs can be done "on-line," which means the controllers do not need to be stopped if the program needs to be changed.
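As a loose analogy (in Python rather than an IEC 61131-3 language such as ladder logic or structured text), a user-defined function block encapsulates logic and state so it can be instantiated once per device; the on-delay alarm block below is an invented example, not vendor code.

    class OnDelayAlarm:
        """Reusable 'function block': raise an alarm only after a condition
        has been continuously true for delay_s seconds (invented example)."""
        def __init__(self, delay_s):
            self.delay_s = delay_s
            self._since = None

        def update(self, condition, now_s):
            if not condition:
                self._since = None   # condition cleared: reset the timer
                return False
            if self._since is None:
                self._since = now_s  # condition just became true
            return (now_s - self._since) >= self.delay_s

    # One instance per monitored device, reusing the same tested logic:
    chiller_1_alarm = OnDelayAlarm(delay_s=5.0)
    chiller_2_alarm = OnDelayAlarm(delay_s=5.0)
    print(chiller_1_alarm.update(True, now_s=0.0))  # False (timer started)
    print(chiller_1_alarm.update(True, now_s=6.0))  # True (held for 6 s)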
Conceptually, the two types of systems can look very similar. The distinction, in a word, is performance. Industrial systems are designed for "real-time" control. Like a DDC, a PLC program reads sensor inputs, performs logic or calculations and writes outputs. However, the speed of processing and communication in PLC systems allows inputs to be read from anywhere in the system, logic solved, and outputs written to anywhere else in the system in real time. Scan rates for PLCs, even in large programs with distributed I/O, are generally measured in milliseconds. DDCs have program execution times measured in seconds.
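To make the scan-cycle comparison concrete, here is a minimal, invented sketch of the read-solve-write loop, again in Python purely for illustration; the I/O names and the 27 degC setpoint are assumptions, not from any real controller.

    import time

    # Invented I/O images; a real PLC maps these to physical field devices.
    inputs = {"supply_temp_c": 18.0, "chiller_1_fail": False}
    outputs = {"chiller_2_start": False}

    def solve_logic():
        # Example rule: start the standby chiller if the duty chiller fails
        # or supply air drifts above a (hypothetical) 27 degC limit.
        outputs["chiller_2_start"] = (
            inputs["chiller_1_fail"] or inputs["supply_temp_c"] > 27.0
        )

    def run_scans(scans=1000, scan_time_s=0.010):
        # Read inputs, solve logic, write outputs, repeat; PLCs complete this
        # cycle in milliseconds, while DDC execution is measured in seconds.
        for _ in range(scans):
            start = time.monotonic()
            solve_logic()  # physical I/O reads/writes are stubbed out here
            time.sleep(max(0.0, scan_time_s - (time.monotonic() - start)))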
There is a cost premium for industrial control systems. A rule of thumb: the total installed cost of industrial controls is approximately $3,000 per I/O point, while commercial systems cost approximately $2,000 per point. For reference, a recent data center project was completed with 500 I/O points, a difference of $1.5M versus $1M. This estimate does not take into account the difference in maintenance and service contract costs (which are typically higher for commercial controls), but it gives a reasonable idea of the difference in up-front costs.
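As a quick check of the arithmetic behind the rule of thumb above:

    POINTS = 500
    INDUSTRIAL_PER_POINT = 3_000  # USD per I/O point, total installed
    COMMERCIAL_PER_POINT = 2_000

    industrial = POINTS * INDUSTRIAL_PER_POINT  # 1,500,000
    commercial = POINTS * COMMERCIAL_PER_POINT  # 1,000,000
    print(f"${industrial:,} vs ${commercial:,}: "
          f"premium ${industrial - commercial:,}")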
Owners and system designers should not expect to achieve industrial control system performance on a commercial control system budget. But consider: The control system represents just 1-2% of the total facility cost. With today's ever more demanding environments, it pays to consider the long-term value represented by the increased performance, flexibility and reliability of PLC systems.
About us
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exists to provide an interactive environment and opportunities for members of the ICT industry and facilities engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms,
(7) UPS Selection, and
(8) Raised Floor
All topics focus on key components and give technical advice and recommendations for designing a data center and critical facilities.
Thursday, December 1, 2016
Project Management for Mission-Critical Facilities from Design to Commissioning
2-day Advanced Training in Project Management for Mission-Critical Facilities from Design to Commissioning
Building, upgrading or relocating data centers and mission-critical facilities requires extensive coordination. The project management team must ensure that all components come together smoothly, typically on a fast-track schedule from design and planning through testing and commissioning.
Further to the comprehensive training in electrical and air conditioning systems design for data centers and mission-critical infrastructure, we are going to introduce a specialized course which highlights the oversight required of a project management team that directs the manufacturing, outfitting and preparation of a data center / computer room while simultaneously overseeing site work, facility infrastructure, utility installation and IT installations.
This advanced 2-day training details how to structure project management activities with a common language (for data center and mission-critical purposes), avoid cost overruns, responsibility gaps and duplication of effort, and achieve an efficient process with a predictable outcome.
Most importantly, the course outlines how to meet the project goals and SLA (Service Level Agreement) defined by the owner before, during and after completion of the project.
Day 1
- Reviewing the Project Management Basics
- Managing a Project on Time, Cost and Quality
- Contract Management for Data Center Design and Build
- Roles and Responsibilities
- Liaising with Clients (Facility Owners, Project Owners, etc.)
- Liaising with Stakeholders
- Liaising with Design Consultants / Architect
Day 2
- Managing Facilities / Services Suppliers
- Managing Contractors
- Assessing the Project Progression and Status Meetings
- Conflicts Management
- Change Management and Accommodation
- Project Handover, Testing and Commissioning
- Case Studies
For the course information (date, time, venue and the trainer profile), please visit www.stmedia-asia.com/trainings.html OR www.stmedia-asia.com/newsletter_6.html.
About the Course Organizer
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exists to provide an interactive environment and opportunities for members of the ICT industry and facilities engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms,
(7) UPS Selection, and
(8) Raised Floor
All topics focus on key components and give technical advice and recommendations for designing a data center and critical facilities.
Get Ready for the Data Center Industry Boom
This article on the data center industry and its development in Asia Pacific is extracted from the SCMP (South China Morning Post), published on 2 May 2016 - "Hong Kong needs to be equipped for data centre industry boom". For details, please refer to www.scmp.com.
Today, many of our business and leisure activities are closely tied to technology and data, be it mobile apps and cloud storage or banking transactions and big data analytics. None of these would be possible without the back-end support of data centers.
The data center industry is therefore rapidly emerging in Hong Kong and globally as one of the fastest growing sectors, with demand for data center space from technology firms, telecom service providers, financial institutions and even small and medium enterprises (SMEs).
The capital investment required for the data center sector is significant, and cities are eager to attract users to their market by offering suitable space. Hong Kong is no exception, with significant effort in the past few years devoted to capturing and nurturing these economy-boosting opportunities. However, the chronic shortage of space across all real estate sectors is a glaring barrier to Hong Kong’s hope to further develop as a data center hub.
Despite the limited land supply, Hong Kong is still being targeted by international investors, developers and occupiers due to its role as a gateway to China. Keppel T&T has recently announced a plan to co-develop an international carrier exchange in Hong Kong with PCCW Global. The 1,000 sq m telecommunications center is expected to be completed by the end of 2016.
Mainland China players have also been active. China Mobile is operating a large high-tier data center in the Tseung Kwan O Industrial Estate, while China Unicom is due to open its flagship data center close by. This estate is home to the bulk of high-tier data centers in Hong Kong.
The space requirements for data centers vary with respect to the scale of data to be stored and the specification of the devices. Nonetheless, big and small players find it equally difficult to secure suitable space due to low vacancy and a lack of available sites.
The stock of purpose-built data center space in Hong Kong has increased substantially over the past few years, growing from 3.3 million sq ft in 2012 to 6 million sq ft as of the end of 2015. While this has been a challenge to some operators due to competition for new business, for the most part take-up has been solid, particularly from Chinese e-commerce players.
However, with the accelerated growth expected of the industry in the medium to long run, supply in Hong Kong will not be sufficient without more proactive policy initiatives. If the momentum of growth continues, Hong Kong will be in danger of losing opportunities to other markets in Asia, such as Singapore, Taiwan and Japan.
It is encouraging to see that the government has recognised the urgent demand for data center space. Efforts to set up the Data Center Facilitation Unit some years back were greeted with praise from industry players. The government has also allocated three commercial sites in Tseung Kwan O for the establishment of high-tier data centers.
While the Industrial Revitalisation Scheme terminated last month as planned, the government announced the extension of the concessionary scheme for data center developments, which had originally been scheduled to finish at the end of the first quarter of 2016. The approval time for waiver applications has also been shortened to around two weeks.
The government’s continued promotion of data center development is welcome news for this emerging segment. Converting industrial buildings to high-tier data centers can potentially unlock property values through increased rents.
Goodman, one of the largest holders of industrial property in Hong Kong, recently disposed of a 260,000 sq ft warehouse in Kwai Chung. The building has been mostly converted to a data center, based on the strong interest in the asset from many willing buyers, particularly international funds. In some cases, converted buildings have recorded rents in excess of 50 per cent higher than those achieved prior to conversion.
The feasibility of conversion depends on building specifications. Unlike those of other industrial users, the technical requirements of a data center are a lot more complicated. The electricity consumption of a modern data center is typically some 20 times higher than that of an office of the same size. The ceiling height has to be higher for cable trays to be installed overhead or inside the raised floor. Most available spaces tend to be too large or in proximity to certain hazards, such as petrol stations or chemical storage, which again rules them out for conversion.
The vertical nature of Hong Kong industrial buildings is also a challenge, as modern data center developments tend to prefer large floor plates and significant common areas to install equipment.
Given their highly specific and detailed requirements, many investors and operators prefer to construct their own high-end data centers, as existing industrial buildings rarely meet their needs. To date, only one new data center of 90,000 sq ft has been built following a lease modification.
That being said, our research indicates that over 800,000 sq ft of industrial buildings has been converted from warehouse to data center in the past five years, with big names like Equinix, iAdvantage, PCCW and HGC leading the way. With more innovative designs and greater flexibility in terms of base building requirements, we still foresee further successful cases of conversion, particularly now with general warehouse rates stabilising.
Marcos Chan is head of research, Hong Kong, southern China and Taiwan at CBRE
Wednesday, November 2, 2016
Critical Facilities and Data Center Design Peer Review
Validate Your Critical Facilities and Data Center Design
We understand you have been working with experienced project engineers and architects to complete the construction documents and engineering specifications for your new mission-critical facility / data center. Special consideration has been placed on the development of these documents to ensure a successful construction phase.
Since your mission-critical facilities are complex and fundamentally different from other types of construction projects (with their own set of standards, requirements, etc.), one tool to assist in making informed judgements is the Design Stage Peer Review - an important and often overlooked validation of the design before construction commences.
Our Peer Review is an independent technical analysis of the design drawings and specifications, conducted to identify potential deficiencies at the design or bidding stage, where errors on paper are simple to rectify.
- Consult with our professional team of Chartered Engineers (CEng) who have more than 20 years' experience in critical facilities design, data center projects and building services engineering
- Ensure that your design & build project documents and engineering specifications fulfill your ultimate needs and requirements
- Discover potential single points of failure, as well as cost and efficiency improvements
Further to the Data Center & Critical Facilities Design Courses, our professional team can help you confirm that important items such as power, cooling, loads / capacities, redundancies and safety have been properly and optimally engineered.
Peer Review Findings and Return
After your review is completed, our professional team will present you with detailed findings, including:
- Items of concern & improvement
- Rationale for questioning the original design / infrastructure items
- Potential capital and operating cost savings
For additional information and the peer review findings, please visit www.stmedia-asia.com/critical-facilities-design-peer-review.html.
In addition to the peer review, you are cordially invited to visit our Data Center Design Consideration Series -
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms,
(7) UPS Selection, and
(8) Raised Floor
All topics focus on key components and give technical advice and recommendations for designing a data center and critical facilities.
Wednesday, October 5, 2016
Register Today for Further Learning in Critical Facilities and Electrical System Design (May / June 2017)
Data Center Facilities Design and Infrastructure Engineering (18 - 19 May 2017, approved CPD course by CIBSE UK)
The 2-day CPD is designed for Building Services Engineers, Facilities / Data Center Managers, IT Management, etc. to enrich and update their knowledge in critical facilities and data center design & build. It is more than an introductory session on data centers and infrastructure:
- IT strategy
- Cabinet layout
- Raised floor system
- Data center network and structure
- Telecommunication backbones, redundancy, sizing and planning
- Fiber and optical system design
- Fiber and optical cable components
- Copper cabling components
- Copper system design and high speed Ethernet
- Cable distribution, layout and management
- Earthing / grounding and bonding
- Power (1) – high / low voltage system, switch system, etc.
- Power (2) – UPS, transformers, fuel tanks, generators, etc.
- Cooling (1) – cooling topology, hot / cold aisle, etc.
- Cooling (2) – chiller, CRAC, cooling towers, etc.
- Environmental management system
- Physical security
- Fire protection system
Date: 18 - 19 May 2017 (Thursday - Friday)
Time: 10:00 – 17:30
Venue: Ground Floor, InnoCentre, 72 Tat Chee Avenue, Kowloon Tong, Hong Kong
Fee: Special rate for all CIBSE / HKIE membership classes
For details, please refer to http://www.stmedia-asia.com/newsletter_6.html.
Electrical System Design for Mission-Critical Facilities (15 - 16 June 2017, approved CPD course by CIBSE UK)
This is an advanced course for mission-critical facilities, which have particular power requirements that significantly impact how they are designed and operated. You will gain insight into the critical supply system, from power components to distribution and efficiency, and from power requirements to sizing, design, testing and commissioning:
-- Concept of primary supply and secondary supply
-- Power flow in mission-critical supply systems
-- Features of major equipment for critical supply
> Uninterruptible power supply and power storage
> Backup generator
> Automatic transfer switch
> Static transfer switch
> Isolation transformer
-- Efficiency assessment
-- Power quality review
-- Configuration diagram of critical supply (N+1 / 2N) design & analysis
-- Review of cable sizing to incorporate harmonics content
-- Earthing system design
-- Testing and commissioning requirements
-- Brief of Systems Merging Appraisal Test (SMAT)
Date: 15 - 16 June 2017 (Thursday - Friday)
Time: 10:00 – 17:30
Venue: 14/F, Hip Shing Hong Kowloon Centre, 192-194 Nathan Road, Jordan, Hong Kong - accessible from Austin Road (Exit D, Jordan Station)
Fee: Special rate for all CIBSE / HKIE membership classes
For details, please refer to http://www.stmedia-asia.com/newsletter_6.html.
Enrollment & Registration
Kindly complete and return an Application Form (attached), together with a crossed cheque made payable to "Strategic Media Asia Limited", to Room 403, 4th Floor, Dominion Centre, 43 - 59 Queen's Road East, Hong Kong.
About the Organizer
Strategic Media Asia Limited (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. For details, please visit www.stmedia-asia.com/about.html or http://green-data.blogspot.com (Knowledge Blog).
Friday, September 30, 2016
Earthing & Grounding for UPS Systems
Power requirements for data centers and other mission-critical facilities continue to grow. While specific requirements of a facility's power distribution depend on the nature of its critical activities — and its anticipated future growth — most rely on large-scale uninterruptible power supply (UPS) systems. These systems, in turn, depend on effective grounding.
The Nature of Power
Before addressing the issue of effective grounding of UPS systems, it is worthwhile, first, to consider the levels of power reliability that characterize electrical systems for one type of mission-critical facility: the data center.
The Uptime Institute provides a tier system of classifications and certification for reliability of mechanical and electrical systems in data centers. There are four tiers: I, II, III and IV. Most data centers have relatively similar components, typically designed to meet Tier-IV requirements — but constructed for Tier-III capacity.
Tier IV design, according to the Uptime Institute, is 2N electrical distribution, which means that power is distributed to critical loads via two different — and redundant — paths. Loss of one feeder anywhere in the distribution will not disrupt power to the critical load. To meet the Tier-IV design, equipment must be rated for dual-cord, dual-input configuration. Power is available on both cords, but only one is utilized. Upon loss of power to one cord, the load transfers to the second cord seamlessly.
Most data centers, however, are not constructed to Tier-IV specifications, which can be extremely cost-prohibitive. Instead, data centers and mission-critical facilities are designed and constructed to Tier-III specifications. Tier III employs N+1 redundancy in the service, UPS modules, mechanical systems and concurrent maintenance systems. A simplified Tier-III single line is illustrated below.
Single or dual medium-voltage power is brought to the facility and transformed to low voltage for distribution. Generators are installed to provide 100% backup so that prolonged loss of utility power will have no impact on data-center operation. Transfer between the utility power and generators can be at low or medium voltage.
Loads are segregated into three categories of power: HVAC, critical and house. Each category may have multiple double-ended substations, depending on the load requirements. The HVAC substations provide power to all mechanical equipment associated with cooling the facility. The house-power double-ended substation provides power to all non-critical spaces, such as administration, support spaces and lighting. The critical-power double-ended substation that is protected by a UPS provides conditioned power to the critical components of the data centers — the servers, direct-access storage drives and disk storage.
Most power supplied to a data center is conditioned power with stored energy as reserve. Consequently, the UPS requirements are very large in magnitude, ranging into the megawatts. This creates a distribution nightmare, considering that most readily available single-module UPS are rated at most 800 kilovolt-amperes (kVA). To develop high-capacity UPS output, single-module UPS systems are installed in parallel. As many as seven modules may be installed in parallel to increase the capacity of the UPS systems.
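To illustrate the paralleling arithmetic, a small sketch: the 800 kVA module rating comes from the paragraph above, while the 4,000 kVA critical load is an assumed example.

    import math

    def ups_modules_needed(critical_load_kva, module_rating_kva=800,
                           redundancy=1):
        # N modules to carry the load, plus spares (N+1 by default)
        n = math.ceil(critical_load_kva / module_rating_kva)
        return n + redundancy

    # Assumed example: a 4,000 kVA critical load on 800 kVA modules
    print(ups_modules_needed(4000))  # N = 5, so an N+1 system needs 6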
Static-Switch Bypass
A large UPS system is typically provided with a static-switch bypass. If the UPS modules fail, the critical load will transfer to this bypass. Some designers also provide a wraparound maintenance bypass to the static switch, so as to isolate the UPS modules and static-switch bypass. However, the maintenance bypass provides unconditioned power to the critical load—with no stored energy reserve as backup.
In the event of a power loss—even for just milliseconds—the batteries associated with UPS modules provide the power to maintain continuity. If there is loss of power to the UPS module, the batteries will continue to provide power until their capacity is depleted and a low-voltage condition occurs. At that time the static bypass will transfer to a secondary source, if available and within voltage tolerances.
The Importance of Grounding
Tier-III installations are typically designed with N+1 modules — the total number of modules necessary to meet the load requirements, plus one additional module for redundancy — and are provided with a static-switch bypass to transfer power in the event of failure of the UPS modules. It is important that, if there is a problem with the UPS modules, the critical load transfers from UPS modules to static-switch bypass. In order for the transfer to occur, a good solid ground must be established.
Typically a ground wire is run, along with the phase conductors, from the service substation to the static-switch bypass. If the termination is not installed or maintained properly, an impedance may develop between the two reference grounds and cause increased voltage on the circuit phase conductors, pushing the voltage beyond the tolerance range within which the bypass will transfer and causing the static switch to fail to transfer, even though a good secondary source is available.
It is vital that the ground is properly connected at the static-bypass cabinet, and zero potential is maintained at the neutral to ground bond at the static bypass and at the double-ended substation. Otherwise, the UPS system could fail to transfer to static bypass.
Most manufacturers recommend that a neutral and ground conductor be run from the output isolation transformers of the UPS modules to the static-switch bypass, where they will be connected to their respective buses. The neutral and ground should be bonded — in accordance with National Electrical Code (NEC) requirements — because the outputs of the transformers are separately derived systems. A ground is also run from the neutral and ground bond of the double-ended substations to the neutral-ground bond at the static switch. Because there is no transformer at the static switch to help establish a separately derived system, an electrically common point is established between the double-ended substation and the static-switch bypass. This configuration, however, has the potential for creating problems with load continuity.
A major reason why data centers go off-line is human error. Where there is human intervention, there are potential problems with ground faults. Ground faults can be very difficult to predict and control and can cause havoc in large multi-module UPS systems. Smaller UPS systems — less than 225 kVA — have output isolation transformers with internal static-switch bypass. The output of the static-switch bypass and UPS is routed through a common output isolation transformer. This in turn protects the UPS system from ground faults and transients that may develop at the critical load.
On the other hand, UPS modules in larger systems typically have an output isolation transformer, but the static-switch bypass does not. If a ground fault occurs downstream from the UPS system but upstream from the power distribution unit, the fault will travel back to the source: the double-ended substation. To reach the source and help clear the fault, the load will transfer to the static-switch bypass. This can cause the main circuit breakers at the double-ended substation to trip on ground fault and take the critical load off-line. Because the breakers at the output of the static bypass and at the double-ended substations are approximately the same size, and are significantly larger than the minimum 1,000-amp setting allowed by NEC, it is possible that a facility's entire system will lose power.
Grounded Solutions
There are various solutions currently employed to help mitigate potential problems of ground faults and impedance between the two separately-derived grounds — at the static-switch bypass and the double-ended substation. A transformer can be installed at the input of the static switch so that the neutral-to-ground bond established at the static switch will come from a separate source.
This approach, however, can be costly, and the required transformer can be extremely large. Also, the transformer will contribute to inrush and additional impedance. But it may be of benefit in limiting maximum fault current available downstream from the UPS system.
Another solution is to implement high-resistance grounding (HRG), though it is not commonly used on low-voltage systems. The intent is to introduce a resistor to limit the current that flows at the neutral and ground bond, where the ground-fault current transformer monitors fault current.
This method is difficult to implement because it requires calculation of the system capacitance and fine-tuning of the resistor in the field. It is also dangerous, requiring highly trained personnel to monitor the ground-fault alarm and then trace through the distribution system to isolate the source of the fault. Human error is already a major source of data center power loss; it does not seem a good idea to introduce yet more human intervention to trace and isolate fault current. In addition, the setting for the ground-fault sensors needs to be revisited any time a significant load is added that may change the system capacitance.
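For illustration only, a sketch of the basic sizing relationship behind an HRG neutral resistor. The 480 V system voltage and 5 A let-through current are assumed example values, not from the text, and in practice the resistor must be tuned against the measured system capacitance, as noted above.

    import math

    def hrg_resistor_ohms(line_to_line_volts, let_through_amps):
        # The neutral-point resistor sees line-to-neutral voltage during a
        # ground fault; the chosen let-through current must exceed the
        # system's capacitive charging current (determined in the field).
        return (line_to_line_volts / math.sqrt(3)) / let_through_amps

    # Assumed example: 480 V system limited to 5 A of ground-fault current
    print(round(hrg_resistor_ohms(480, 5), 1))  # ~55.4 ohms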
It may seem that the solution is as simple as providing a zone-interlocking relay-protection scheme from the double-ended substation down to UPS static switch, and a distribution switchboard downstream of the UPS system. But ground-fault coordination is very difficult to design and install. Also, the static switch will transfer from the UPS module at a much faster speed than any fast-acting relay.
It is essential to maintain operation of mission-critical facilities and data centers with a reliable distribution scheme. The design engineer should coordinate with the client to establish design parameters based on economics and the level of required reliability, and ensure that the final product is a facility that meets all of the client's long-term operational requirements.
About the Blogger
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exists to provide an interactive environment and opportunities for members of the ICT industry and facilities engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms, and
(7) UPS Selection
Tuesday, August 23, 2016
Data Center Design Consideration: Raised Floor
Despite the many studies claiming the raised floor is no longer necessary in data center design, it is still present in the vast majority of data centers and computer rooms. We are going to address several important factors to consider when choosing and installing a raised floor in your critical facility: structural strength, airflow and leakage (if you're using it for cooling), and static dissipation.
(1) Cooling, Floor Tightness and Airflow
Overhead air duct options in slab (non-raised-floor) designs tend to be limited. When chilled air is delivered under a raised floor for "spot cooling", simply rearranging perforated floor tiles is enough to change the cooling distribution. Also, the plenum under a raised floor offers room for cabling (usually power cables) that doesn't require the kind of added labor and infrastructure that overhead cabling calls for—cable racks or baskets.
However, massive underfloor air delivery is often problematic and accounts for many of today’s data center cooling problems. Air turbulence can cause uneven air pressure and spotty air delivery. Raised floor heights of 18 inches, 24 inches and even 30 inches or more are needed, and few buildings have the slab-to-slab clearance for that.
On the other hand, the plenum under a raised floor can be subject to obstructions (particularly cabling) and other inefficiencies that hamper cooling. The general consensus is that a raised-floor design cannot meet the cooling needs of higher-density deployments (perhaps in the range of 8–10 kW per rack and up).
There are also code considerations with underfloor air delivery. A data center that uses the raised floor space for cooling may be required by Article 645 of the National Electrical Code to also have an emergency power off button next to exit doors. However, data center owners can avoid this requirement in a number of ways, including not using a raised floor at all.
It's important that the panels be square and tight to minimize air leakage, and that every edge next to walls and air conditioners and around pipe penetrations be sealed.
One important thing to remember is that while adjustable dampers on tiles can be very valuable for balancing air delivery among cabinets, adding a damper to any tile effectively reduces its “open percentage” and its airflow, even when the damper is fully open.
For example, a Tate 25% open perforated panel with no damper will pass 746 cfm of air at 0.1 inches of static pressure under the floor. Simply adding a damper and leaving it fully open reduces this to 515 cfm. That’s the equivalent of only 17.4% open. With a 56% open grate tile, the difference is even more dramatic - 2,096 cfm with no damper and only 1,128 cfm with a fully open damper (a reduction from 56% to 30.5% open).
An examination of manufacturers’ airflow characteristics and images, with and without dampers, can quickly reveal this effect.
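The "equivalent open percentage" in the example above can be approximated by scaling the nominal open area by the airflow ratio at the same static pressure. A sketch, assuming flow scales roughly linearly with open area at a fixed pressure (a first-order approximation); the small differences from the quoted equivalents presumably reflect the manufacturer's own test data.

    def effective_open_pct(nominal_open_pct, cfm_with_damper,
                           cfm_without_damper):
        # The damper's restriction, expressed as an equivalent open area
        return nominal_open_pct * cfm_with_damper / cfm_without_damper

    # Figures quoted above (Tate panels at 0.1 in. static pressure):
    print(round(effective_open_pct(25, 515, 746), 1))    # 17.3, vs 17.4 quoted
    print(round(effective_open_pct(56, 1128, 2096), 1))  # 30.1, vs 30.5 quoted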
(2) Ramp
Ramps also take up a lot of space. The Americans with Disabilities Act (ADA) requires a slope no steeper than 1:12, which means a ramp must have one foot of length for every inch of floor height. In new buildings, a depressed slab can keep the raised floor even with the surrounding corridors, but that takes a special structure.
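The space cost follows directly from the 1:12 rule; a quick sketch:

    def ada_ramp_length_ft(floor_height_in):
        # 1:12 slope: 12 inches of run per inch of rise, i.e. 1 ft per inch
        run_in = floor_height_in * 12
        return run_in / 12.0

    # A 24 in. raised floor needs a ramp about 24 ft long (before any landings)
    print(ada_ramp_length_ft(24))  # 24.0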
(3) Building Floor Levelness & Load Capacity
Most building floor slabs are uneven, and they’re designed to flex as weight is added. Raised floors use adjustable pedestals that result in a very level floor surface without needing to level the slab. This makes it easier to align rows of cabinets, as well as to roll equipment into place.
However, the capacity of the floor may become a structural concern if the data center grows faster than originally planned or new, heavier equipment is deployed beyond what the company had intended at construction time. Furthermore, seismic activity poses a danger to raised floors beyond what slabs face.
(4) Evaluation of Raised Floor and Panels
Modern raised floors for data centers are usually made of cement-filled steel or cast aluminum. For easy access, we need “lay-in” panels that can be easily removed, rather than the screw-down type that are bolted to the pedestals at each corner. And because cabinets in today’s data centers are getting heavier, we need the strength and stability of a “bolted stringer” understructure, rather than panels that just self-lock to the pedestals at the corners.
Panels have historically been labeled and marketed by their "concentrated load" ratings. This is the maximum load that can be applied to the weakest one square inch of the tile without deforming it by more than a specified amount. But manufacturers also provide other ratings, including "uniform load" (the average weight per square foot the panel can support when weight is evenly distributed across its four-square-foot surface), "yield point" (where the panel permanently deforms) and "ultimate load" (where the concentrated load actually causes the panel to collapse or break).
Either way, it is the concentrated load or design load, not the uniform load, that we care about most, since cabinets usually sit on small leveling feet or casters, not solid full-size bases. Uniform load is meaningless in a data center.
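To see why the concentrated-load rating matters more than the uniform rating, a sketch comparing a cabinet's per-foot point load with a panel rating; the 1,250 lb rating is hypothetical, and the 2,500 lb cabinet weight echoes the figure mentioned later in this section.

    def per_foot_load_lb(cabinet_weight_lb, feet=4):
        # A cabinet on leveling feet applies its weight at a few small
        # points, so each foot's load should be compared against the
        # panel's concentrated-load rating, not its uniform-load rating.
        return cabinet_weight_lb / feet

    # A 2,500 lb cabinet on four feet puts 625 lb on each foot; a panel
    # with a hypothetical 1,250 lb concentrated-load rating carries one
    # such foot with margin, but feet from adjacent cabinets landing on
    # the same panel erode that margin.
    print(per_foot_load_lb(2500))  # 625.0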
So what strength do we really need? There is a tendency to use the highest load rating simply because cabinets are getting heavier, but is that really necessary?
Stronger floors usually weigh more which, depending on the building slab rating, may reduce the useful cabinet weight that a raised floor can support. This is a reason to compare the weights of similarly rated panels, and is also why some data center designers advocate cast aluminum floors despite their much higher cost.
Although some cabinets may weigh 2,500 pounds or more, others will probably weigh less. If you have only a few heavy cabinets, extra pedestals under them may be fine, but if you have many heavy cabinets, the extra pedestals could be forgotten, with resulting damage to the floor.
Another important rating is "rolling load", because we need to get cabinets across the floor and into position. One suggestion is to install stronger panels in your delivery paths, provided panels of different strengths are interchangeable in the floor structure.
It is important to carefully read manufacturers' data sheets. Since not all floor manufacturers test and specify the same way, it is also good to know how tests were run to compare ratings and determine whether they were done by independent testing labs.
(5) Raised Floor's Surface Material
We should also be concerned about the anti-static characteristic of a floor material. There are two types of floors that are often confused: conductive and static dissipative. Technical definitions classify static dissipative as a particular type of conductive floor, but manufacturers of raised floor products for data centers and clean rooms will generally identify them separately. Conductive flooring is typically used in clean rooms, where people are handling microchips. This type of flooring has a lower resistance to ground than static dissipative products. Conductive flooring is not needed, nor recommended, for data centers.
In data centers, we need static dissipative floors that will conduct static charges of more than 100 volts away from our bodies and clothing and through the floor tile to the ground. This requires a surface material to have the necessary static dissipative qualities, and a grounded understructure that prevents the generation of static electricity. The understructure should also conduct electrical charges away so that they are not harmful to our equipment. Ratings should be based on the resistance from any point on the panel surface to the pedestal, which also needs to be properly grounded to work.
The surface material on a computer room floor should be a zero-maintenance product. It should never need to be waxed or buffed, as wax accumulates dirt and must be removed with liquids, and buffing creates dust. The material must also be hard enough for equipment to roll over and sit on without denting or deforming. This rules out rubber and vinyl materials. And, of course, carpet of any kind should never be used, as it both creates and traps particulates. The most commonly used surface covering in data centers is known as high-pressure laminate (HPL). It can be made with the necessary static dissipative qualities and also has the hardness and maintenance characteristics needed. It should also be made so the laminate edges are not easily damaged.
(6) Budget, Cleaning and Maintenance
The area under a raised floor is a dirt and debris trap, but cleaning can be problematic. Furthermore, other problems such as addressing (and even identifying) moisture and breaches in walls plague this approach. Also, since out of sight is out of mind, the temptation to leave unused cabling and other junk in the plenum may be irresistible, particularly in a time-pressed environment, thus exacerbating the problem.
For data centers operated more than 5 - 10 years, replacing individual floor panels with special size may be required due to wear out and daily operations. Minimum order quantity (MOQ) requirement specifies the lowest quantity of the raised floor panels that a supplier is willing to sell. Spare panels are usually recommended and prepared in the data center design stage or extra budget may be required to settle the tailor-made issue.
(7) Security
This concern is particularly acute in the co-location facilities that serve multiple customers. For security concern, some cabinets stored sensitive data for critical purposes are installed inside a cage unit (from ceiling to raised floor, from raised floor to concrete slab floor) which is separated from other cabinets in a data hall. It should be prepared for this kind of installation.
* The Performance Selection Chart and Air Flow information are provided by MUFLOOR ASIA COMPANY LIMITED (http://www.mufloor.com). For details, please contact the local distributor.
About the Blogger
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exits to provide an interactive environment and opportunities for members of ICT industry and facilities' engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms, and
(7) UPS Selection
(1) Cooling, Floor Tightness and Airflow
Overhead air ducts in slab (non-raised-floor) designs tend to be limiting. When chilled air is delivered under a raised floor for spot cooling, simply rearranging perforated floor tiles is enough to change the cooling distribution. Also, the plenum under a raised floor offers room for cabling (usually power cables) without the added labor and infrastructure, such as cable racks or baskets, that overhead cabling calls for.
However, massive underfloor air delivery is often problematic and accounts for many of today’s data center cooling problems. Air turbulence can cause uneven air pressure and spotty air delivery. Raised floor heights of 18 inches, 24 inches and even 30 inches or more are needed, and few buildings have the slab-to-slab clearance for that.
At the same time, the plenum under a raised floor can be subject to obstructions (particularly cabling) and other inefficiencies that hamper cooling. The general consensus is that a raised-floor design cannot meet the cooling needs of higher-density deployments (perhaps in the range of 8–10 kW per rack and up).
There are also code considerations with underfloor air delivery. A data center that uses the raised floor space for cooling may be required by Article 645 of the National Electrical Code to also have an emergency power off button next to exit doors. However, data center owners can avoid this requirement in a number of ways, including not using a raised floor at all.
The panels must be square and tight to minimize air leakage, and every edge next to walls and air conditioners, and around pipe penetrations, must be sealed.
One important thing to remember is that while adjustable dampers on tiles can be very valuable for balancing air delivery among cabinets, adding a damper to any tile effectively reduces its “open percentage” and its airflow, even when the damper is fully open.
For example, a Tate 25% open perforated panel with no damper will pass 746 cfm of air at 0.1 inches of static pressure under the floor. Simply adding a damper and leaving it fully open reduces this to 515 cfm. That’s the equivalent of only 17.4% open. With a 56% open grate tile, the difference is even more dramatic - 2,096 cfm with no damper and only 1,128 cfm with a fully open damper (a reduction from 56% to 30.5% open).
An examination of manufacturers’ airflow characteristics and images, with and without dampers, can quickly reveal this effect.
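For a quick check without the manufacturer's charts, the effective open percentage can be estimated by scaling the rated figure by the airflow ratio at the same static pressure. A minimal sketch in Python, using the Tate figures quoted above and assuming airflow scales roughly in proportion to open area (the helper name is ours):

```python
def effective_open_pct(rated_open_pct: float,
                       cfm_no_damper: float,
                       cfm_with_damper: float) -> float:
    """Estimate a tile's effective open percentage once a damper is added.

    Assumes airflow at a fixed static pressure scales roughly in
    proportion to open area, so the damper's restriction can be
    expressed as an equivalent (smaller) open percentage.
    """
    return rated_open_pct * (cfm_with_damper / cfm_no_damper)

# Figures quoted above: 25% perforated panel at 0.1 in. static pressure.
print(effective_open_pct(25.0, 746.0, 515.0))    # ~17.3%, close to the 17.4% cited
# 56% open grate tile with a fully open damper.
print(effective_open_pct(56.0, 2096.0, 1128.0))  # ~30.1%, close to the 30.5% cited
```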
(2) Ramp
Ramps also take up a lot of space. The Americans with Disabilities Act (ADA) permits a slope no steeper than 1:12, which means a ramp needs at least one foot of length for every inch of floor height. In new buildings, a depressed slab will keep the raised floor even with the surrounding corridors, but that takes a special structure.
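The 1:12 rule makes the space cost easy to quantify: every inch of floor height demands a foot of ramp run. A minimal sketch (the function name is ours; ADA landing and handrail requirements are ignored here):

```python
def min_ramp_length_ft(floor_height_in: float, slope_ratio: float = 12.0) -> float:
    """Minimum ramp run in feet for a given raised-floor height in inches,
    assuming the ADA maximum slope of 1:12 (12 in. of run per 1 in. of rise)."""
    run_inches = floor_height_in * slope_ratio
    return run_inches / 12.0  # convert inches of run to feet

print(min_ramp_length_ft(24))  # a 24 in. floor needs at least a 24 ft ramp
```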
(3) Building Floor Levelness & Load Capacity
Most building floor slabs are uneven, and they’re designed to flex as weight is added. Raised floors use adjustable pedestals that result in a very level floor surface without needing to level the slab. This makes it easier to align rows of cabinets, as well as to roll equipment into place.
However, the capacity of the floor may become a structural concern if the data center grows faster than originally planned or new, heavier equipment is deployed beyond what the company had intended at construction time. Furthermore, seismic activity poses a danger to raised floors beyond what slabs face.
(4) Evaluation of Raised Floor and Panels
Modern raised floors for data centers are usually made of cement-filled steel or cast aluminum. For easy access, we need “lay-in” panels that can be easily removed, rather than the screw-down type that are bolted to the pedestals at each corner. And because cabinets in today’s data centers are getting heavier, we need the strength and stability of a “bolted stringer” understructure, rather than panels that just self-lock to the pedestals at the corners.
Panels have historically been labeled and marketed for their “concentrated load” ratings. This is the maximum load that can be applied to the weakest one square inch of the tile without deforming it by more than a specified amount. But different manufacturers provide other ratings, including "uniform load" (the average weight per square foot the panel can support when weight is evenly distributed across its four square-foot surface), "yield point" (where the panel permanently deforms) and "ultimate load" (where the concentrated load actually causes the panel to collapse or break).
In practice, it is the concentrated load (or design load), not the uniform load, that we care about most, since cabinets usually sit on small leveling feet or casters, not solid full-size bases. Uniform load is meaningless in a data center.
So what strength do we really need? There is a tendency to use the highest load rating simply because cabinets are getting heavier, but is that really necessary?
Stronger floors usually weigh more which, depending on the building slab rating, may reduce the useful cabinet weight that a raised floor can support. This is a reason to compare the weights of similarly rated panels, and is also why some data center designers advocate cast aluminum floors despite their much higher cost.
Although some cabinets may weigh 2,500 pounds or more, others will probably weigh less. If you have only a few heavy cabinets, adding extra pedestals under them may be fine; but with many heavy cabinets, those extra pedestals can be overlooked when equipment is later moved, with resulting damage to the floor.
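Before specifying the highest rating across the whole floor, it may help to estimate the point load each panel actually sees. A minimal sketch, assuming the cabinet's weight splits evenly across four casters; the 1,250 lb panel rating below is an illustrative value, not a quoted one:

```python
def caster_load_lb(cabinet_weight_lb: float, supports: int = 4) -> float:
    """Worst-case point load per caster or leveling foot, assuming an even
    split. Real loads skew toward the heavier side of the cabinet, so a
    safety margin on top of this estimate is prudent."""
    return cabinet_weight_lb / supports

cabinet = 2500.0                      # heavy cabinet from the text
per_caster = caster_load_lb(cabinet)  # 625 lb on one small contact point
panel_rating = 1250.0                 # hypothetical concentrated-load rating
print(per_caster, per_caster <= panel_rating)  # 625.0 True
```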
Another important rating is "rolling load," because we need to get cabinets across the floor and into position. One suggestion is to install stronger panels along your delivery paths, provided that panels of different strengths are interchangeable in the floor structure.
It is important to carefully read manufacturers' data sheets. Since not all floor manufacturers test and specify the same way, it is also good to know how tests were run to compare ratings and determine whether they were done by independent testing labs.
(5) Raised Floor's Surface Material
We should also be concerned about the anti-static characteristic of a floor material. There are two types of floors that are often confused: conductive and static dissipative. Technical definitions classify static dissipative as a particular type of conductive floor, but manufacturers of raised floor products for data centers and clean rooms will generally identify them separately. Conductive flooring is typically used in clean rooms, where people are handling microchips. This type of flooring has a lower resistance to ground than static dissipative products. Conductive flooring is not needed, nor recommended, for data centers.
In data centers, we need static dissipative floors that will conduct static charges of more than 100 volts away from our bodies and clothing, through the floor tile, and to ground. This requires a surface material with the necessary static dissipative qualities and a grounded understructure that carries the charge away before it can harm our equipment. Ratings should be based on the resistance from any point on the panel surface to the pedestal, and the pedestal must itself be properly grounded for the system to work.
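As a rough sanity check, a measured surface-to-ground resistance can be compared against the two categories. The cut-offs below follow commonly cited ESD flooring conventions (conductive below about 1 MΩ, static dissipative from about 1 MΩ up to 1 GΩ); verify them against your own specification before relying on them:

```python
def classify_floor(resistance_ohms: float) -> str:
    """Classify flooring by resistance to ground, using commonly cited ESD
    ranges (exact limits vary by standard; check your own specification)."""
    if resistance_ohms < 1e6:
        return "conductive (clean rooms; not recommended for data centers)"
    elif resistance_ohms <= 1e9:
        return "static dissipative (the usual data center requirement)"
    else:
        return "insulative (does not bleed charge away; unsuitable)"

print(classify_floor(2.5e7))  # static dissipative
```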
The surface material on a computer room floor should be a zero-maintenance product. It should never need to be waxed or buffed, as wax accumulates dirt and must be removed with liquids, and buffing creates dust. The material must also be hard enough for equipment to roll over and sit on without denting or deforming. This rules out rubber and vinyl materials. And, of course, carpet of any kind should never be used, as it both creates and traps particulates. The most commonly used surface covering in data centers is known as high-pressure laminate (HPL). It can be made with the necessary static dissipative qualities and also has the hardness and maintenance characteristics needed. It should also be made so the laminate edges are not easily damaged.
(6) Budget, Cleaning and Maintenance
The area under a raised floor is a dirt and debris trap, but cleaning can be problematic. Furthermore, other problems such as addressing (and even identifying) moisture and breaches in walls plague this approach. Also, since out of sight is out of mind, the temptation to leave unused cabling and other junk in the plenum may be irresistible, particularly in a time-pressed environment, thus exacerbating the problem.
For data centers operated for more than 5 to 10 years, individual floor panels may need replacement, sometimes in special sizes, due to wear from daily operations. A minimum order quantity (MOQ) requirement specifies the lowest quantity of raised floor panels a supplier is willing to sell. Spare panels should therefore be specified and budgeted during the design stage, or extra budget may be required later to cover tailor-made replacements.
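The spare-panel budget is a simple calculation once the MOQ is known: hold a fraction of the installed base as spares, but never order below the supplier's minimum. A minimal sketch, where the 2% spare ratio and the MOQ of 50 are assumed illustrative figures, not quoted ones:

```python
import math

def spare_panel_order(installed_panels: int, spare_ratio: float = 0.02,
                      supplier_moq: int = 50) -> int:
    """Spare tiles to order at design time: a fraction of the installed
    base (2% is an assumed rule of thumb), rounded up, but never less
    than the supplier's minimum order quantity."""
    desired = math.ceil(installed_panels * spare_ratio)
    return max(desired, supplier_moq)

print(spare_panel_order(2000))  # 2% of 2,000 is 40, but the MOQ forces 50
```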
(7) Security
This concern is particularly acute in co-location facilities that serve multiple customers. For security, cabinets holding sensitive data for critical purposes are sometimes installed inside a cage unit (running from the ceiling to the raised floor, and from the raised floor down to the concrete slab) that separates them from the other cabinets in the data hall. The raised floor should be prepared for this kind of installation.
* The Performance Selection Chart and Air Flow information are provided by MUFLOOR ASIA COMPANY LIMITED (http://www.mufloor.com). For details, please contact the local distributor.
About the Blogger
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exists to provide an interactive environment and opportunities for members of the ICT industry and facilities engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression,
(6) Meet Me Rooms, and
(7) UPS Selection
Monday, July 18, 2016
Data Center Design Consideration: UPS
An Uninterruptible Power Supply (UPS) is one of the key components of a data center, and understanding UPS technology is critical to data center design.
Transformer-based or Transformer-free UPS?
There is growing interest in using transformer-free UPS modules in higher power, three-phase, mission-critical power backup applications (e.g., 200 kW to 5 MW). However, many organizations are unclear about which architecture — transformer-based or transformer-free — is best suited for a particular application.
With floor space limited and data centers increasingly built from modular components, companies want more flexible, smaller-footprint UPS units. On the one hand, the latest transformer-free systems offer better efficiency, a smaller footprint, and improved flexibility while providing high levels of availability; driven by data center designer demand, most leading UPS suppliers offer both topologies. On the other hand, transformer-based UPS systems excel at providing the highest capacities and availability while simplifying external and internal voltage management and fault current control.
Currently, large transformer-free systems are constructed using modular building blocks that deliver high power in a lightweight, compact package. This modular design offers advantages when the timing of future load requirements is uncertain by allowing capacity to be more easily added as needed, either physically or via control settings. On the other hand, a modular design means higher component counts, which may result in lower unit mean time between failure (MTBF) and higher unit service rates.
For high-power enterprise data centers and other critical applications, a state-of-the-art transformer-based UPS still provides an edge in availability. Transformers within the UPS provide integrated fault management and galvanic isolation as well as greater compatibility with critical power distribution system requirements that should be considered when designing a high availability UPS system. Technology developments and configuration options allow the latest transformer-based designs to operate at higher efficiencies compared to previous designs, making them more comparable to the transformer-free models in terms of efficiency.
In general, 200 kW is the threshold below which the space, weight, and cost advantages of transformer-free UPS systems outweigh the robustness and higher capacity of transformer-based systems. Applications under 200 kW can benefit from the high efficiency and excellent input power conditioning that transformer-free designs deliver through active components. In addition, the scalability of a modular transformer-free UPS can help avoid over-provisioning while maintaining operational efficiency.
FACTORS TO CONSIDER
Both approaches use a double-conversion process (Figure Above) to provide power protection for mission-critical applications. The primary difference between the two technologies is in their respective use of transformers.
A transformer-based UPS may use a transformer before the rectifier and requires an isolation transformer after the inverter to derive the voltage being delivered to the critical load.
Transformer-free UPS designs use power and control electronics technologies to eliminate the need for an isolation transformer as an integral part of the inverter output section.
TRANSFORMER-BASED UPS DESIGN
Large systems are typically manufactured based on serviceable sub-assemblies and are available in discrete units rated up to 1,100 kVA. Key components of this design include:
- A passive filter (inductors and capacitors) on the rectifier input to reduce input current distortion and improve the power factor.
- A six-pulse (or optional twelve-pulse), semiconductor (SCR)-based rectifier on the input. Optionally, an additional transformer (Xfmr) provides AC-DC isolation for the DC bus and the battery.
- A DC energy storage system (typically a battery) connected directly to the DC bus between the rectifier and the inverter to provide AC output ride-through capability during a loss of AC input power. This example uses 540 VDC.
- An insulated gate bipolar transistor- (IGBT) based, pulse-width modulation (PWM) inverter on the output.
- An isolation Xfmr on the inverter output to derive the appropriate output voltage. This also provides a convenient and solid point for referencing the AC output neutral to ground. This neutral ground connection provides excellent common mode noise rejection.
- A passive filter on the inverter output to provide a very low distortion AC voltage supply.
- An automatic bypass switch (static switch) using power SCRs provides instantaneous switchover to an alternate source if a UPS output disturbance occurs.
TRANSFORMER-FREE UPS DESIGN
Transformer-free UPS topologies replace simple passive magnetic-voltage transformation functions with solid-state power electronics circuitry. The figure above shows a simplified block diagram of a transformer-free UPS design. The key differences between this circuit and the transformer-based unit depicted earlier are listed below.
- By replacing passive power components (transformers, capacitors, inductors) with power circuit assemblies utilizing PWM power conversion techniques, transformer-free UPS rectifiers are physically smaller and produce low input current harmonics with near unity input power factor.
- Typically, the UPS battery in transformer-free applications is connected to the internal DC bus (about 800 VDC in the previous example) through an integrated bi-directional DC-DC converter. This puts an additional power conversion element in series with the battery.
- Using similar PWM power conversion techniques, transformer-free UPS inverters are physically smaller as well, and produce low output voltage harmonics over a wider range of connected load characteristics.
- The bypass function (static bypass switch) is similar to the transformer-based design. However, without external transformers added, the bypass AC input must be the same voltage as the inverter AC output.
- Transformer-free UPS units are typically designed and styled for both computer room in-row lineups and equipment room installations. Complete transformer-free UPS units are typically an assembly of standard frames plus functional control and power modules.
- A transformer-free UPS is lighter and smaller than the power-equivalent transformer-based design, with both physical volume and footprint being less. And, according to the fall 2012 Data Center Users Group (DCUG) survey, data center energy costs and equipment efficiency are top-of-mind issues for DCUG members, with nearly half of the respondents listing them among their top facility/network concerns.
However, other external transformers may be required for AC-DC isolation purposes, safety reasons, AC voltage changes, or to provide power distribution flexibility. With the addition of external transformers, the overall facility weight and footprint totals may be higher than with a transformer-based UPS design with implications for end-to-end system efficiency. If transformers need to be added to a transformer-free unit to make it compatible with a facility, a transformer-based unit may be a better solution.
Transformer-free UPS topologies have emerged to meet the demand for more efficient, flexible, smaller footprint, lighter weight UPS systems. The price of these performance feature improvements has been the replacement of a few robust but physically large, passive components, such as transformers, inductors, and capacitors with functional power electronic equivalents packaged in field replaceable, modular sub-assemblies. It is reasonable to expect that the transformer-free units will have service call rates somewhat higher than their transformer-based counterparts.
TECHNICAL FEATURES AND PERFORMANCE DIFFERENCES
In choosing between transformer-based and transformer-free UPS solutions, a system designer should determine where transformers are best utilized and whether they should be internal and/or external to the UPS in view of physical and electrical distribution requirements and tradeoffs. It’s important to review the techniques and tradeoffs utilized in the various rectifier, DC energy storage, inverter, and static bypass functions of these two UPS designs for various UPS system performance functions including:
• Site planning and adaptability to change
• Reliability and availability
• Robustness
• DC energy storage system isolation
• Engine-generator interface
• UPS output interface considerations
• High resistance grounding
• Fault current management
• Arc flash energy
• Isolation
• Maintainability
• External components needed to complete the system design
• Total cost of ownership
• Capital expenses (CAPEX) and operating expenses (OPEX)
When considering the total cost of ownership for these two architectures, it is important to include both the initial (CAPEX) and the ongoing (OPEX) costs to power, maintain, and service the various options.
Technological evolution is constantly impacting the relative efficiency of transformer-based and transformer-free solutions. After years of optimizing performance, transformer-based UPS systems have achieved a relatively flat efficiency curve from 30% to 80% loading where typical tier 3 and tier 4 data centers operate. The latest transformer-free designs also have very flat curves down to as low as 20% of capacity, and have efficiencies in the 95% to 96% range in double-conversion mode. A study of the whole system design is necessary to determine the relative efficiencies as the addition of transformers, and the efficiency of those transformers, will have an impact.
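To see how a point or two of double-conversion efficiency moves the OPEX side of the comparison, here is a simplified sketch of the annual cost of UPS conversion losses. All figures (the 500 kW load, the $0.12/kWh rate, and the two efficiency points) are illustrative assumptions, not quoted values:

```python
def annual_loss_cost(load_kw: float, efficiency: float,
                     rate_per_kwh: float = 0.12,
                     hours: float = 8760.0) -> float:
    """Yearly cost of UPS conversion losses: input power minus output power,
    integrated over a year at a flat electricity rate."""
    input_kw = load_kw / efficiency
    loss_kw = input_kw - load_kw
    return loss_kw * hours * rate_per_kwh

# Illustrative 500 kW IT load: 96% vs. 94% double-conversion efficiency.
for eff in (0.96, 0.94):
    print(f"{eff:.0%}: ${annual_loss_cost(500, eff):,.0f}/yr")
# The gap (roughly $12k/yr here) compounds over the system's life and
# belongs in the TCO comparison alongside CAPEX and service costs.
```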
In summary, transformers, whether internal or external to the UPS, are necessary to establish circuit isolation and local neutral and grounding points, as well as to provide voltage transformation points. This facilitates, for example, the implementation of very high power density installations based on 600V distribution sources, subsequently stepped down to 208/120V for IT load applications. When transformers are utilized in conjunction with the UPS internal DC link, DC-to-AC output, and AC-to-DC input isolation can be provided, reducing or eliminating the risk of DC faults propagating upstream or downstream of the UPS.
About the Blogger
Strategic Media Asia (SMA) is one of the approved CPD course providers of the Chartered Institution of Building Services Engineers (CIBSE) UK. The team exists to provide an interactive environment and opportunities for members of the ICT industry and facilities engineers to exchange professional views and experience.
SMA connects IT, Facilities and Design. For the Data Center Consideration Series, please visit
(1) Site Selection,
(2) Space Planning,
(3) Cooling,
(4) Redundancy,
(5) Fire Suppression, and
(6) Meet Me Rooms