Canadian Consulting Engineer

Critically Complex – Data Centre Design

June 3, 2015
By Peter Sharp, IBI Group

Data centre facilities are far from being commodity, "off the shelf" designs. They require a team that can identify HVAC and electrical systems for each owner's precise business needs.

From the May 2015 print issue, page 16

As consulting engineers we ply our trade by solving problems, typically problems that require a technical analysis as a step towards meeting some organizationally beneficial goal. We offer to the owner — our prospective client — the benefit of our scientific knowledge, analytical skills, technical experience and our capability to devise economical and viable solutions.
Expectations of excellence seem all the more likely to be achieved when the engineer has a thorough knowledge of what the client does and wants. We know this to be true because no request for proposal (RFP) for professional and consulting services is written that does not place some degree of importance on the need for the consultant to have “done it before” and to provide examples of their experience with projects of a similar nature.
But does the benefit of a consultant with prior expert knowledge of the owner’s business still carry weight if the subject is a commodity item? Data centres are rapidly being perceived — rightly or wrongly — as a commodity. A commodity is an item so familiar that there seems little need for an expert to prepare solutions that were once viewed as complex; solutions can be taken “off the shelf.”
Is the educated owner so familiar with the fulfillment of a data centre design that there is little room for a consultant’s expertise? What are the expectations of the owner in issuing an RFP? Why is the owner pursuing a course that has tolerance only for their preconceived solution? Where is the true value of the consultant if their only purpose is to draft and seal contract drawings?
In preparing an RFP most owners will call for the consultant to provide an organization chart detailing how the design team will be constituted and assembled. Included at a minimum will be the project manager, the mechanical, electrical, structural, and civil engineers, and for anything but the simplest refit, there will also be an architect — or at least there ought to be. Everyone is necessarily involved to keep the job moving, on track, and in line with what the owner wants.
Inevitably missing from the consultant’s “org chart” will be one individual that some informed owners ask to be included without realizing their own prescience. That individual is the “Technical Liaison” between the owner and the design team. Sophisticated facilities such as data centres have a pressing need for a design team that includes a Technical Liaison: someone who, as an engineer, designer or technician, has an intimate working knowledge of the information systems world and who can identify the owner’s operational, business and performance requirements and articulate them to the engineers for execution in the design.

Design dictated by the business
Whether it is a disaster recovery and business continuity centre, a bank data processing centre, a utility operations control centre, a transit authority operations and control centre, or a “co-location” data facility, each of these is a technical space for the operation of equipment that a business deems indispensable.
Whatever the project, the design requirements of data centres and IT facilities will be dictated by the business. First and foremost, the facility must meet the functional demands and deliver service without interruption. It must also be economical to operate, able to survive a local disaster, and cause no unnecessary harm to the environment. To these requirements must be applied the design constraints, which vary from case to case: green field or brown field; size of real estate available; capacity of available electrical power; investment limitations; proximity to sources of interference, to a market, to services, and to neighbours who would bear any noise or air pollution; issues of municipal zoning, and more.
The wisest first step for the owner is to undertake a feasibility exercise coupled with a requirements analysis and optimization exercise. The owner, with participation by the consultant, will set out to list the essential and known requirements, classifying them into degrees of “mission critical.”
For the typical control room the engineer is asked to create a facility that is as robust and resilient as possible given the specific constraints of the site and purpose. But the engineer is mostly constrained by cost, both capital and operating.

Cooling options: just open the windows?
The design equation begins and ends with heat. Injecting energy into the facility is easy: provide electrical power — lots of it. Removing the energy is another matter. Although options such as direct cooling of the active equipment by chilled liquid have been tried, the most beneficial arrangement today is still forced air cooling.
Another option that was first offered as a joke — just open the windows and let the heat out — has become a reality: unconditioned free air cooling, where outside fresh air is blown through the facility. Extreme care has to be taken when applying these simple solutions, however; contaminated “fresh air” will destroy equipment in no time. Attractive as free cooling may be in reducing the bills, the owner will pay through a reduced equipment life cycle. On the other hand, a short equipment life cycle may not be an issue if the business case refreshes equipment annually. And not all free air is of poor quality.
Many forms of cooling take advantage of evaporation as a highly effective economizing technique, and they are economical alternatives to more traditional methods.
The price for air- or water-side economizers is footprint. The engineer can design a space using free air and evaporative cooling and meet a specific target performance efficiency, but the system will be larger than one using conventional chillers with the same heat rejection capacity.
Higher efficiency heat rejection systems can often result in architectural sprawl, whereas a chiller plus chilled water system has the topographical benefit of allowing a separation between heat source and heat exhaust, giving the architect more options.
Rejecting heat wouldn’t be the problem if there weren’t so much of it. Although the trend of increasing heat loads doesn’t follow anything as simple as “Moore’s Law,” the demands for data processing equipment follow their own “more” law: more software applications, more data, more equipment, more servers, more data storage, more space, more power, and more availability.

Power Usage Effectiveness rears its ugly head
Today, the governing factor of heat and power density is how much equipment can be accommodated in a single data cabinet. Part of the engineer’s challenge is to design around a varying value. Some cabinets will be strictly passive and have no heat exhaust, whereas others will operate with a demand of 25–30 kW per cabinet. A more typical high number is 15 kW. The tradeoff is real estate versus air flow.
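To put a cabinet load in perspective, the sketch below estimates the airflow a single 15 kW cabinet demands. The 12 K supply-to-return temperature rise is an assumed, illustrative value rather than a figure from the article.
```python
# A minimal sketch, assuming a 15 kW cabinet and a 12 K supply-to-return
# temperature rise (illustrative values only).
RHO_AIR = 1.2     # kg/m^3, approximate density of air near 20 C
CP_AIR = 1005.0   # J/(kg*K), specific heat of air at constant pressure

def airflow_m3s(heat_kw: float, delta_t_k: float) -> float:
    """Volumetric airflow needed to carry away a sensible heat load."""
    return heat_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_k)

flow = airflow_m3s(heat_kw=15.0, delta_t_k=12.0)
print(f"{flow:.2f} m^3/s, roughly {flow * 2118.88:.0f} CFM for one cabinet")
```
Halving the temperature rise doubles the required airflow, which is why the real estate versus air flow tradeoff bites so hard.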
Cabinets themselves offer resistance to the high air flow required to maintain safe working temperatures for the equipment inside. If the air flow can be reduced, so too can the power required to push the air around, and disproportionately so: for a given duct and cabinet arrangement, pressure drop rises with the square of the air volume and fan power with roughly its cube, so twice the air volume can demand something approaching eight times the motivating power. Here the dreaded and much misapplied metric of PUE — Power Usage Effectiveness — rears its ugly head.
PUE — the ratio of total facility power to the power delivered to the IT equipment — is supposed to provide the owner a measure of just how good the engineer is at the job of designing a high efficiency facility. But there are so many ways to cheat the equation that it has ceased to provide any value, except as an operations optimization metric. By way of example, consider that every server assembly includes an array of fans to deliver cooling air to its hot interior. As equipment is packed more densely and the interior components run hotter, the air flow increases. At high air flow rates the energy expended in the server fans becomes disproportionate to the useful energy, yet this energy is still accounted for in the PUE equation as “IT equipment” load, not “Facility” load. This pushes the PUE down. In contrast, using the more efficient fans in the air handlers to increase the static pressure of the conditioned air in the data centre will lighten the load placed on the server fans. This pushes the PUE up. The more efficient arrangement has the higher PUE! Here then is an example of how failure to take in the “big picture” can lead to unwanted yet rewarded results. The engineer’s challenge is to keep the big picture front and centre and not to allow the simplistic to prevail.
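The accounting effect is easy to demonstrate with round numbers. In the sketch below the kilowatt figures are invented purely for illustration; they are not measurements from any facility described here.
```python
# Illustrative PUE arithmetic with invented loads.
def pue(it_load_kw: float, facility_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return (it_load_kw + facility_load_kw) / it_load_kw

# Scenario A: server fans do most of the air-moving work; their draw is
# metered as "IT equipment" load.
a_it, a_facility = 1000 + 120, 380

# Scenario B: efficient air-handler fans raise static pressure, the server
# fans relax, total energy falls, but the fan work moves to the facility side.
b_it, b_facility = 1000 + 40, 380 + 60

print(f"A: total {a_it + a_facility} kW, PUE {pue(a_it, a_facility):.2f}")
print(f"B: total {b_it + b_facility} kW, PUE {pue(b_it, b_facility):.2f}")
```
Scenario B consumes less total energy yet reports the worse-looking PUE, which is exactly the inversion described above.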

Uninterrupted power: pros and cons of batteries and flywheels
How to provide continuous power was at one time a de facto decision, namely to provide a battery, charge it while utility power is available, and discharge it through power inverters and hence to the equipment when it is not. This imperfect process — one that rectifies the supply to DC for the battery and inverts it to AC for the load — has an energy cost, so technology has endeavoured to reduce the inefficiencies of the double conversion process and with it reduce the risk of component failure. Numerous options are available to the engineer to reduce these losses.
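The energy cost of double conversion compounds stage by stage. The per-stage efficiencies and load below are assumed, typical-looking figures used only to show the arithmetic, not data for any particular UPS.
```python
# A minimal sketch of double-conversion losses, assuming illustrative
# per-stage efficiencies (not vendor figures).
def double_conversion_eff(rectifier_eff: float, inverter_eff: float) -> float:
    """Losses in the rectify-then-invert chain multiply together."""
    return rectifier_eff * inverter_eff

eff = double_conversion_eff(0.97, 0.96)      # roughly 93% overall
load_kw = 1000.0                             # hypothetical critical load
loss_kw = load_kw * (1.0 / eff - 1.0)        # extra draw to serve that load
print(f"Overall efficiency {eff:.1%}; about {loss_kw:.0f} kW of continuous "
      f"conversion loss on a {load_kw:.0f} kW load, all of it emerging as heat.")
```
Every kilowatt lost in conversion must also be removed by the cooling plant, which is why the industry keeps chipping away at these losses.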
Many of these options eliminate the battery altogether and replace it with another form of energy storage: the flywheel. With advances in vacuum technology and frictionless bearings, the losses incurred by maintaining the rotation of a flywheel are small, and the reliability of a rotating mass is high. “Rotary UPSs” are of a size and form that makes them highly scalable, useful for the engineer who is designing to a moving and growing target.
One impediment to wholesale acceptance of the rotary UPS is that it sustains power delivery for mere seconds rather than minutes or even hours, the forte of the battery. Also, substituting a rotary storage unit for a battery still doesn’t rid the designer of the losses due to double conversion, so UPS manufacturers are calling up well-established techniques and applying them to the data centre, such as line-interactive power regulators that depend much less on power conversion.
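A rough kinetic-energy estimate shows why the ride-through is measured in seconds. The flywheel mass, radius, speed range and load below are assumed, illustrative values for a generic steel disc, not the specification of any product.
```python
import math

# A minimal sketch with invented figures: a 300 kg, 0.35 m radius disc
# spun between 8,000 and 4,000 rpm, feeding a 250 kW load.
def flywheel_energy_j(mass_kg: float, radius_m: float, rpm: float) -> float:
    """Kinetic energy of a solid disc: E = 0.5 * I * w^2, with I = 0.5 * m * r^2."""
    inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2.0 * math.pi / 60.0
    return 0.5 * inertia * omega ** 2

usable_j = flywheel_energy_j(300, 0.35, 8000) - flywheel_energy_j(300, 0.35, 4000)
load_kw = 250.0
print(f"Usable energy ~{usable_j / 1e6:.1f} MJ, "
      f"about {usable_j / (load_kw * 1000):.0f} s at {load_kw:.0f} kW")
```
That is enough to ride through a generator start, but nothing like the minutes or hours a battery string can offer.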
Generators are often the target of the “what if it doesn’t work” syndrome. Too often the argument is made that more battery is needed in case the generator fails to start. But the “more battery” argument is specious, because fitting up a facility with a battery large enough to carry the load through the maintenance window of a failed generator is neither practical nor economical — unless the client is a telco operating under a different set of requirements. Generators, like other mechanical equipment, are prone to failure, but carefully managed maintenance procedures keep them available. It is ironic that during the power failure of 2003, when so many generators failed to start, it was the cranking battery that was often found to be the weak link.
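The scale of the “more battery” argument becomes obvious with a quick sizing comparison. The load, autonomy times and usable depth of discharge below are assumed round numbers for illustration, not requirements from any standard.
```python
# A minimal sketch comparing battery plant sizes, with invented figures:
# a 500 kW critical load, a ten-minute bridge to generator start versus a
# four-hour repair window, and 80% usable depth of discharge.
def battery_kwh(load_kw: float, hours: float, usable_fraction: float = 0.8) -> float:
    """Nameplate energy needed to carry a load for a given time."""
    return load_kw * hours / usable_fraction

bridge = battery_kwh(500.0, 10.0 / 60.0)   # ride out the generator start
repair = battery_kwh(500.0, 4.0)           # ride out a failed-generator repair

print(f"Bridge to generator start: ~{bridge:.0f} kWh of battery")
print(f"Cover a repair window: ~{repair:.0f} kWh, {repair / bridge:.0f}x the plant")
```
A battery plant more than twenty times larger brings its own space, weight, cost and maintenance burdens, which is why keeping the generator healthy is the better investment.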
In sum, the design of a data centre or any other critical facility is not a process that can easily follow a well-worn path. It is not trite to observe that every facility is unique, because to say otherwise would be to imply that the owner’s design criteria are not unique. The final challenge to the engineer is to be tolerant of mixed and often conflicting criteria and accept the importance of compromise.

Peter Sharp, RCDD, is a senior communications consultant with IBI Group. He is based in Toronto.
