Building Systems: Hot Spots
With all the attention given recently to the telecommunications industry, we have also heard a lot about the high tech facilities that this sector demands. Telephone switches, control rooms and computer rooms have been around for a long time. However, with the burgeoning growth of internet use and telecommunications, these kinds of facilities are larger than they used to be and they have special conditions that require sound judgment and engineering design.
Telecom hotels (or carrier hotels and co-location facilities, as they are sometimes called), and internet data centres (web hosting facilities) are similar in that they offer tenants strictly controlled environments with high power, cooling, reliability and security. Telecom hotels are generally targeted towards service providers such as Bell Canada, Bell Nexia, TELUS, AT&T Canada, Group Telecom, 360 Networks, and other smaller service providers, giving them facilities where they can interconnect their networks with each other. The landlord constructs a secure shell, provides power and cooling to the base building, and provides for the routing of services. Each tenant then fits up its own space, connects to the base building services, installs its network hardware, routes its fibre optic cables and then connects to the other networks as necessary.
Data centres are fitted up and operated by a single service provider. They are oriented towards business customers who want to outsource the management of their web sites and corporate wide area networks.
The size and location of a facility depend on its business plan. Facilities as large as 200,000-300,000 square feet (18,000-28,000 m2) are not uncommon in the U.S. Three main factors drive the choice of location. First, there must be enough power available from diverse sources on the power grid. Second, there must be physically diverse sources of backbone fibre optic cable to satisfy bandwidth and reliability requirements. Finally, the location must suit the facility’s users: if they need to visit the facility regularly, easy access close to major roads and highways will be beneficial and a certain level of aesthetics will be required.
Providing adequate power economically is one of the keys to a facility’s success. Engineers have done a phenomenal job of miniaturizing electronic components. They have not, however, achieved a proportionate reduction in power consumption or a proportionate gain in equipment efficiency. As a result, power densities have been creeping up over time. In the past, 30 watts per square foot was adequate for such facilities. Today they commonly demand 100 watts per square foot in the production areas, i.e. the areas where equipment is located. A single server rack alone can require over 6 kW of power. Based on recent projects, the expected total power draw for a 100,000 s.f. multi-service facility housing servers, co-location space and other network services, as well as auxiliary support spaces for staff and customers, could be 8 to 10 MW. This figure includes the power draw for cooling and other utilities. Some industry analysts have predicted a need for 125 watts per square foot and higher (albeit prior to the recent downturn in the industry).
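The 8 to 10 MW total quoted above is consistent with simple arithmetic on the article’s own figures. A minimal sanity check, with the simplifying assumption that the 100 W/sq.ft. production density applies as an average across the whole floor area:

```python
# Back-of-envelope check of the total power draw quoted in the article.
# The area and density figures come from the text; treating the density
# as a facility-wide average is a simplification for illustration.

AREA_SQFT = 100_000          # multi-service facility floor area
DENSITY_W_PER_SQFT = 100     # production-area power density

total_mw = AREA_SQFT * DENSITY_W_PER_SQFT / 1_000_000
print(f"Total draw at 100 W/sq.ft.: {total_mw:.0f} MW")  # 10 MW
```

At an 80-100 W/sq.ft. facility-wide average, the arithmetic lands squarely in the 8-10 MW range the article reports.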
Cooling the facilities is no small task. A 10,000-s.f. production area designed for 100 watts per square foot power consumption will require approximately 400 tons of installed cooling capacity when allowing for redundancy in the equipment units. This translates to 25 square feet per ton of cooling whereas a typical office environment is designed for 350 to 400 square feet per ton. Considering the multi-use facility described above that includes space for staff and customers, 100,000 square feet might require 2,000 tons of cooling. Engineers must also factor into their designs pressurization of the equipment spaces to ensure that unconditioned and/or dirty air does not infiltrate them. The pressurization schemes also assist in smoke compartmentalization during a fire.
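The cooling figures above can be checked the same way. One ton of refrigeration removes 12,000 BTU/h, or about 3.517 kW, of heat; the conversion factor is standard, while the 400-ton installed figure is the article’s, which already includes redundant units:

```python
# Back-of-envelope check of the cooling figures in the article.
# 1 ton of refrigeration = 12,000 BTU/h = ~3.517 kW of heat removal.

AREA_SQFT = 10_000           # production area from the article
DENSITY_W_PER_SQFT = 100     # design power density
KW_PER_TON = 3.517           # standard refrigeration conversion

heat_kw = AREA_SQFT * DENSITY_W_PER_SQFT / 1000   # 1,000 kW of heat to reject
base_tons = heat_kw / KW_PER_TON                  # tons needed to match the load
installed_tons = 400                              # article's figure, with redundancy

print(f"Load to remove: {base_tons:.0f} tons; installed: {installed_tons} tons")
print(f"Floor area per installed ton: {AREA_SQFT / installed_tons:.0f} sq.ft./ton")
```

The load alone calls for roughly 284 tons; adding redundant capacity brings the installed total to about 400 tons, which is the 25 sq.ft.-per-ton ratio the article cites.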
All designs must factor in the reliability of the systems. Telecommunications customers generally have very little tolerance for service disruptions. “Five nines” reliability, or 99.999% up-time, is often quoted as an industry standard. This level of reliability means the occupants will tolerate no more than approximately five minutes of downtime per year. The cost of providing this level of reliability is sometimes prohibitive, and 99.99% up-time, a tenfold increase in downtime (roughly 50 minutes per year), might be considered acceptable.
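Converting an up-time percentage into minutes of downtime per year is a one-line calculation, shown here for the two availability targets discussed above:

```python
# Convert an availability (up-time) fraction into expected downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime_minutes(availability):
    """Expected minutes of downtime per year at a given up-time fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

for a in (0.99999, 0.9999):
    print(f"{a:.3%} up-time -> {downtime_minutes(a):.1f} min/year of downtime")
```

“Five nines” works out to about 5.3 minutes per year and “four nines” to about 53 minutes, which the article rounds to five and 50 minutes respectively.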
The importance of reliability cannot be overemphasized. On July 9, the Associated Press reported that Microsoft Corporation was experiencing the second major outage of the year to its MSN Messenger service. The outage was reported to be the result of the simultaneous failure of a piece of hardware and its back-up. A failure of this kind could just as easily occur in one of the mechanical or electrical systems. As engineers, we cannot put our clients’ businesses at risk through inadequate engineering. On the other hand, we cannot over-design to such an extent that the business is no longer economically feasible.
Reliability for the electrical and mechanical systems is typically achieved by provisioning “N+1” quantities of key system components, avoiding single points of failure, and mathematically evaluating the combined probability of failure of the total system under consideration. Dual power feeders from diverse sources, dual transformers and dual power distribution systems are common. Back-up to the power systems is provided through multiple UPS (uninterruptible power supply) units and standby diesel generators. Some facilities are designed to run for up to four days without needing to replenish the fuel supply.
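The “combined probability of failure” mentioned above can be sketched for the simplest case: independent redundant units in parallel, where the system fails only if every unit fails at once. The 1% per-unit unavailability below is an invented figure for illustration, not from the article:

```python
# Sketch of combined failure probability for redundant units in parallel,
# assuming independent failures. The per-unit unavailability is an
# illustrative assumption, not a figure from the article.

def parallel_unavailability(p_fail, n):
    """Probability that all n independent redundant units are down at once."""
    return p_fail ** n

p = 0.01  # assumed per-unit unavailability (1%)
print(f"single unit availability: {1 - parallel_unavailability(p, 1):.2%}")
print(f"dual-unit availability  : {1 - parallel_unavailability(p, 2):.4%}")
```

Duplicating a 99%-available unit lifts the combined availability to 99.99%, which is why dual feeders, transformers and distribution paths recur throughout these designs. Real evaluations must also account for common-mode failures, which this independence assumption ignores.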
When the cost and reliability of the main power source is in question, the engineers may choose to build a co-generation facility on the site so that the facility is completely self-sufficient from a power generation perspective. For the mechanical systems, multiple HVAC units are typically used to provide reliability and redundancy. The number of units per piping loop can be limited to minimize the potential for catastrophic failure, and the piping can be strategically routed with special containment measures to minimize the effects of leaks. The HVAC, glycol, hot water, back-up generator, lighting, transfer switch, UPS and fire alarm systems are highly automated. In addition, both the network equipment and the supporting mechanical and electrical systems are monitored and controlled to allow for the timely shutdown and shedding of non-core services and equipment should back-up power reserves become low or cease.
Many other engineering design issues require attention. The unique and extremely high security measures associated with these facilities, for example, need to be balanced against fire and life safety design requirements. Well thought-out space planning, exits and operational procedures are crucial to ensuring that neither system is jeopardized. The building envelope also needs careful attention, as the conditioned spaces operate at warm temperatures and high humidity levels. Air and moisture infiltration, as well as condensation control, are critical building envelope design issues.
As the telecommunications industry consolidates and restructures, the business models for these facilities may change. However, power and cooling densities are likely to increase even further, and the need for reliable power, cooling and security will remain paramount.
Ivars Mikelsteins, P.Eng. is director of business development, telecommunications division, Morrison Hershfield Limited, consulting engineers of Toronto.