High-reliability data centers support multiple applications. Some serve as telephone switching, transmission, and data transmission (network) facilities, while others are used for financial services and e-commerce. Within these diverse industries, design engineers increasingly disagree on one important issue: Are data centers best served by alternating current (AC) or direct current (DC) power distribution systems? Some prefer AC, connecting their switches, hubs, memory systems, and CPUs via a cord cap (i.e., a 120V plug). Others prefer DC, connecting their equipment to battery systems (via a 48V connector). Although both “sides” offer legitimate advantages, neither system model (as traditionally applied) represents a best-case scenario.
Engineers who design traditional AC or DC power distribution systems frequently use a time-tested collection of assumptions. However, relying on these assumptions can sometimes lead to inadequate design solutions. This becomes especially obvious when you measure each design against the basic question: “Have you maximized your reliability relative to your investment?” Traditional system models, from both the DC and AC “schools of thought,” often fail the reliability-investment test. But by combining the best features of AC and DC design, you can identify these assumptions more clearly and create better designs. Let's look at some examples.
Example 1: AC System Design
Over the past 15 years, UPS manufacturers have created a series of modular UPS systems to help you attain the highest possible reliability. However, this approach doesn't yield the highest potential system reliability. Reliability calculations based on the IEEE Gold Book show that a system organized in independent strings, to eliminate common points of failure, can yield a factor of 70 improvement in reliability over a system using a common bus to connect redundant UPS modules. The traditional assumption here seems solid: If one component (the UPS in this example) is improved with respect to reliability, then the entire system has improved reliability. However, when considering the notion of the weakest link, you quickly see that this is not always the case. Rather than focus on individual component reliability, we should focus on system reliability.
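The string-vs-common-bus argument can be sketched numerically. The failure probabilities below are invented for illustration (they are not the IEEE Gold Book figures behind the factor-of-70 result), but they show how a shared bus caps system availability no matter how good the individual UPS modules are:

```python
# Illustrative comparison of two redundant-UPS topologies: modules
# paralleled on a common output bus vs. fully independent strings.
# All failure probabilities are assumed for illustration only.

def unavailability_common_bus(p_module, p_bus, n):
    """The load fails if the shared bus fails, or if all n modules fail."""
    return 1 - (1 - p_bus) * (1 - p_module ** n)

def unavailability_strings(p_string, n):
    """With n independent strings feeding a dual-corded load, the load
    fails only if every string fails at the same time."""
    return p_string ** n

p_module = 1e-3   # assumed: probability a single UPS module is down
p_bus = 1e-4      # assumed: probability the common bus/switchgear is down
n = 2             # two redundant paths

u_bus = unavailability_common_bus(p_module, p_bus, n)
u_str = unavailability_strings(p_module, n)
print(u_bus / u_str)  # the shared bus dominates: roughly a 100x penalty here
```

With these placeholder numbers the common-bus design is roughly 100 times less available than the independent strings; the exact ratio depends entirely on the failure data you plug in, which is why the focus belongs on system reliability rather than module reliability.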
Example 2: DC System Design
Engineers often associate DC systems developed in the telephone industry with 4 or 8 hours of battery backup, relying on this approach to allow sufficient time to start standby generators. However, high-intensity data centers are prone to overheat rapidly, usually within 5 to 20 minutes. As a result, batteries that strictly serve equipment loads increase system reliability only until cooling becomes a problem. It makes no sense to buy batteries with a time-in-standby that greatly exceeds the time it takes to overheat the facility. In general, running mechanical cooling equipment on battery power is not economically feasible. The assumption we are challenging is that tuning the parameter of one element in the system (specifically, time-in-standby) will improve that parameter for the whole system. Instead, we must include each element's time-in-standby as a constraint in our system reliability analysis.
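The time-in-standby constraint amounts to a one-line calculation. The 15-minute overheat figure below is an assumed value picked from the 5-to-20-minute range cited above:

```python
# Sketch: the useful ride-through of a battery plant is capped by the
# time the room takes to overheat once cooling stops. Both times below
# are assumed values for illustration.

def effective_ride_through(battery_min, overheat_min):
    """Battery minutes beyond the overheat limit add cost, not uptime."""
    return min(battery_min, overheat_min)

battery_min = 4 * 60    # traditional telecom plant: 4 hours of battery
overheat_min = 15       # assumed: dense data center overheats in 15 min

useful = effective_ride_through(battery_min, overheat_min)
stranded = battery_min - useful
print(useful, stranded)  # prints 15 225: only 15 of 240 minutes are useful
```

In this sketch, 225 of the 240 purchased battery minutes are stranded capacity, which is the whole argument for treating each element's time-in-standby as a system-level constraint rather than a per-element parameter to maximize.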
Example 3: Comparing Costs of AC and DC Systems
This example plays on the other side of the reliability-investment test: the investment, or cost, side. Let's begin with an analysis of traditional DC power distribution. DC power distribution systems do not scale well. Let's take a closer look to see why.
At the simplest level, a DC power converter should cost less than a UPS. This is true for small capacity systems, say 50kW. So far so good. In some data centers, 50kW provides power for about 500 sq ft of floor area. So you usually choose to scale up the areas you serve with a UPS (in the AC model) or a DC power converter (in the DC model). For example, a 10,000A DC power supply operating at 48V nominal supports about 5000 sq ft of data center at 100W per square foot. But the cost for a 10,000A DC power converter is about the same as the cost of a 500kW UPS system.
The reason for this is easy to spot in the circuit capacities. The UPS delivers a 480V, 3-phase feed rated at about 800A, roughly one-twelfth the ampacity of the DC system. Thus, when it comes time to distribute power, the cost of switchgear and cable is much higher for the DC system than for the AC system. To make matters worse, once 48V DC distribution travels more than 35 ft to 50 ft (depending on the circuit capacity), you need to increase wire size to reduce voltage drop. For larger DC systems, this adds cost to an already expensive system. For example, the added cost for the additional copper in an 11,000 sq ft data center designed with DC power is about $250,000.
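A rough sketch of the copper penalty, using the roughly 480kW block from this example. The 2% voltage-drop allowance and the cable run length are assumptions for illustration; only the current figure comes from the example above:

```python
import math

# Sketch of why 48V DC distribution consumes so much copper. The 2%
# voltage-drop allowance and 15 m (~50 ft) run are assumed values.

RHO_CU = 1.724e-8  # copper resistivity, ohm-meters

def dc_current(power_w, volts):
    return power_w / volts

def min_conductor_area_mm2(current_a, run_m, volts, drop_pct=2.0):
    """Smallest copper cross-section that keeps the round-trip (out and
    back) voltage drop within drop_pct of the bus voltage."""
    v_allowed = volts * drop_pct / 100
    area_m2 = 2 * RHO_CU * run_m * current_a / v_allowed
    return area_m2 * 1e6

i_dc = dc_current(480_000, 48)                   # 10,000 A
area = min_conductor_area_mm2(i_dc, 15.0, 48)    # thousands of mm^2

print(round(i_dc), round(area))
```

The required cross-section comes out to more than 5,000 sq mm, which in practice means many large cables run in parallel per polarity, and the requirement grows linearly with run length, matching the article's point that long 48V runs force ever-larger wire.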
The key question you must ask is: “Where does the DC system become more expensive?” Look at the graph of cost vs. capacity (above). Many cost factors, including selection of manufacturers, can modify the DC and AC cost curves. But you can generally interpret the graph as follows:

For small systems, DC will typically cost less than AC systems of the same power capacity. However, the cost of DC systems increases rapidly as the capacity of the power converter is increased.

The cost per kW of AC systems, driven mostly by the cost of the UPS, goes down until the UPS power capacity reaches a point between 400kW and 500kW.

Somewhere between 100kW and 200kW, the DC and AC cost curves cross; this is the place where DC and AC systems cost about the same.
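The crossover behavior can be modeled with two toy cost curves. Every dollar figure below is an invented placeholder chosen only to reproduce the qualitative shape described above (high fixed cost with a gentle slope for AC, low fixed cost with a steep copper-driven slope for DC):

```python
# Toy cost model for the AC/DC crossover described above. All dollar
# figures are invented placeholders; only the shape of the curves
# reflects the article's argument.

def ac_cost(kw):
    return 50_000 + 400 * kw   # assumed: UPS-dominated fixed cost

def dc_cost(kw):
    return 10_000 + 700 * kw   # assumed: copper-driven marginal cost

def crossover_kw(lo=1, hi=1000):
    """Smallest capacity at which the DC system becomes more expensive."""
    for kw in range(lo, hi):
        if dc_cost(kw) > ac_cost(kw):
            return kw
    return None

print(crossover_kw())  # 134 kW with these placeholder coefficients
```

With these placeholder coefficients the curves cross at 134kW, inside the 100kW-to-200kW band cited above; real crossover points shift with manufacturer selection and the other cost factors the article mentions.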
Attacking Assumptions
This discussion is based on a traditional DC power distribution system and holds true as long as you allow the following two assumptions to guide your design:

As you increase the floor area served, you will serve the facility from a central battery system.

You will keep your entire DC distribution system at 48V.
Let's challenge these assumptions. First, if you drop the design requirement for centralized batteries, you can explore a distributed DC system. In this model, redundant DC power converters serve battery packs located at each row of equipment racks. This model has several advantages:

Branch DC wire runs do not increase in length or wire size with increased floor area because you have a separate DC bus at each row of equipment cabinets.

You can defer the cost of DC power converters and battery units until each particular row of cabinets requires power.

You can define the level of reliability (redundancy) differently for each row of equipment racks (i.e., one row can use a single DC connection while another row can use dual DC connections).
If you drop the design requirement for 48V, you can distribute DC at a higher voltage and avoid the huge currents and oversized cables. Then, you can use either a DC system following the model above (with distributed DC-DC voltage converters) or an AC inverter at each rack row or at each PDU to deliver power to equipment racks.
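The current, and therefore copper, savings from raising the DC voltage fall straight out of the power equation. The 380V level below is an assumed example of a higher DC distribution voltage, not a figure from this article:

```python
# Current scales inversely with voltage for the same power, so raising
# the DC distribution voltage directly shrinks conductor sizes. The
# 380V level is an assumed example value.

def dc_current(power_w, volts):
    return power_w / volts

power = 480_000                 # the ~500kW block from Example 3
i_48 = dc_current(power, 48)    # 10,000 A
i_380 = dc_current(power, 380)  # about 1,263 A

print(round(i_48 / i_380, 1))   # prints 7.9: nearly 8x less current
```

Nearly an eight-fold current reduction means correspondingly smaller switchgear and cable, which is exactly the cost problem the 48V assumption created in Example 3.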
In Example 3, we exposed two assumptions, each of which, when tested, creates a new opportunity for a design solution. In fact, manufacturers selling data center distribution equipment presently support both of these design solutions.
Wrap-Up
This article suggests you can achieve the best system designs by directly challenging traditional design assumptions. In practice, this means you should:

Provide circuit connections in independent strings to avoid common points of failure.

Select the battery (or ride-through) time based on a reliability calculation that includes a time analysis for the entire system.

Use modular solutions sized to serve small areas (for example, a single row of racks).

Consider high-voltage DC distribution systems.
It looks like AC and DC power distribution systems are here to stay, so the debate will inevitably continue for some time. But that's not all bad: the continuing debate focuses attention on both the assumptions and the design goals for data centers, paving the way to improved solutions.
Doug Bors is Vice President, Technology Consulting & Research at Sparling. You can reach him at [email protected].