Switch Communications says it is successfully cooling a section of its Las Vegas data center running at nearly 1,500 watts per square foot using air cooling. How are they accomplishing this?
The key to Switch’s high-density cooling is a design known as Thermal Separate Compartment in Facility (TSCIF), according to company co-founder Rob Roy. The ingredients in this approach include high-capacity AC units placed outside the data center area, and a tightly integrated hot aisle containment system for the racks. Here’s an overview:
* The cabinets are set on a slab, with no raised floor.
* Chilled air is delivered into the cold aisle near the ceiling rather than through the floor, and enters the cabinets through the front.
* Each cabinet fits into a slot in the TSCIF unit, which encapsulates the rear and sides of each cabinet, while the open front extends beyond the enclosure.
* The hot aisle containment system delivers waste heat back into the ceiling plenum, where it can be returned to the chiller.
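The payoff of full hot-aisle containment is a larger supply-to-return temperature rise, which directly cuts the airflow the AC units must move. A minimal Python sketch of the standard HVAC sensible-heat relation (CFM = BTU/hr ÷ (1.08 × ΔT°F)) shows the effect; the 15 kW cabinet figure is a hypothetical example, not a Switch specification:

```python
def airflow_cfm(watts: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to remove a sensible heat load at a given
    supply-to-return temperature rise, via CFM = BTU/hr / (1.08 * dT)."""
    btu_per_hr = watts * 3.412  # 1 W = 3.412 BTU/hr
    return btu_per_hr / (1.08 * delta_t_f)

# Hypothetical 15 kW cabinet with a 20 F rise (typical uncontained aisle):
print(round(airflow_cfm(15_000, 20)))  # ~2369 CFM

# With containment preventing hot/cold air mixing, the rise can double,
# halving the airflow the cooling units must deliver:
print(round(airflow_cfm(15_000, 40)))  # ~1185 CFM
```

Halving the required airflow per watt is what lets external AC units serve densities that would overwhelm a conventional raised-floor design.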
Switch also has a very cool video of the SuperNAP setup on its site.
More pics of their T-Scif cooling system: http://www.switchnap.com/pages/tech-specs/thermal-scif.php
The statistics from their site:

* 407,000 square feet of space
* 250 MVA Switch-owned substation
* 146 MVA of generator capacity
* 84 MVA of UPS supply
* 30,000 tons of system-plus-system cooling
* 30 cooling towers
* 100% heat containment using thermal-scif™
* Designed for 1,500 watts per sq. ft. density
* Armed, 24/7/365, military-trained, Switch-employed security staff
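A quick back-of-the-envelope check on those numbers (Python sketch; the only assumption is the standard conversion of 3.517 kW of heat removal per refrigeration ton) suggests the 1,500 W/sq. ft. design density applies to high-density zones rather than the entire 407,000 sq. ft. floor, which is consistent with the article's note that a *section* of the facility runs at that density:

```python
TON_KW = 3.517  # 1 refrigeration ton = 3.517 kW of heat removal

# Total rated cooling capacity from the 30,000-ton figure:
cooling_kw = 30_000 * TON_KW
print(f"{cooling_kw / 1000:.0f} MW of cooling")  # ~106 MW

# Floor area that capacity could serve if fully loaded at the
# 1,500 W/sq. ft. design density:
area_sq_ft = cooling_kw * 1000 / 1500
print(f"{area_sq_ft:,.0f} sq ft at full design density")
```

Roughly 70,000 sq. ft. at full design load, a fraction of the total footprint, so the remaining space presumably runs at more conventional densities.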