Published: 12 Jun 2006
The heat is on
IT'S SUMMERTIME, and hot and sultry days are on the way. With little beads of perspiration dampening your forehead, the distorted image you see through the shimmering waves of heat isn't the surf lapping onshore. It's your primary storage array simmering in the heat of a data center that's about to cook a lot of expensive silicon.
Heat--and power consumption--isn't a new problem. Data centers have been caught up in the bigger, better, faster vs. hotter and hotter conundrum since the first floor was raised in a room meant for computers. But it's taken a turn for the worse as vendors pack more into smaller packages to deal with data center real estate costs.
Highly compact blade-server installations catch most of the blame for elevating heat levels, but storage systems are becoming ever more dense, too (see "The rise of the ultra-dense array" for Stephen Foskett's insights on the density issue). Storage systems are also the most mechanical devices in the data center, with disks spinning away at up to 15,000 rpm and their actuator assemblies skittering above the disk surfaces. Toss in highly mechanical tape libraries and you have a lot of heat-producing activity.
We have to live with these storage architectures for a long time, so an effective solution for heat and power problems is both an immediate and long-term concern. Cutting-edge solutions like solid-state storage have been on the drawing board for a while, but are years away from practicality. You could network a few hundred iPods and get a couple of really cool terabytes of storage, but not even Steve Jobs could sell that idea.
Texas Memory Systems sells a system that uses DDR RAM, but it's a specialized, very expensive unit intended for extreme applications, and it still uses disks to back up the memory. It cuts power consumption and produces less heat, so it might be a harbinger of future storage systems.
Most storage vendors address the heat issue with traditional methods that attack the effects rather than the source of the problem. Hewlett-Packard (HP) touts its Modular Cooling System, a chilled water system that attaches to one of its racks. Pricing starts at approximately $30,000, and the system works only with a particular line of HP racks. Similar products are available from IBM ("Cool Blue") and Liebert-Egenera (CoolFrame).
Some newer storage technologies address heat production without having to attach additional air-cooling devices. MAID (massive array of idle disks) technology--most prominently championed by Copan Systems in its Revolution 220T and 220TX virtual tape library products--spins only those disks that are actively dishing out or receiving data, rather than keeping every disk in the array spinning at all times. This reduces electrical power requirements and decreases the amount of heat produced.
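The idea behind MAID can be sketched as a simple idle-timeout policy. The model below is purely illustrative--it is not Copan's actual firmware logic, and the class and timeout value are hypothetical--but it shows the core bookkeeping: a disk spins while it services I/O and is spun down once it has been idle past a threshold.

```python
class MaidArray:
    """Toy model of a MAID policy: a disk spins only while servicing
    I/O; after an idle timeout it is spun down to save power and heat."""

    def __init__(self, num_disks, idle_timeout_s=60.0):
        self.idle_timeout_s = idle_timeout_s
        # Last access time per disk; None means the disk is spun down.
        self.last_access = [None] * num_disks

    def access(self, disk, now):
        """An I/O to `disk` spins it up (if needed) and resets its idle clock."""
        self.last_access[disk] = now

    def tick(self, now):
        """Periodic housekeeping: spin down any disk idle past the timeout."""
        for i, t in enumerate(self.last_access):
            if t is not None and now - t >= self.idle_timeout_s:
                self.last_access[i] = None

    def spinning(self):
        """Number of disks currently drawing spindle power (producing heat)."""
        return sum(1 for t in self.last_access if t is not None)
```

With a 100-disk array and a 60-second timeout, touching five disks leaves only five spindles turning; once they sit idle past the timeout, the next housekeeping pass spins them all down and the array's heat output drops accordingly.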
Virtualization techniques let you use your installed storage more efficiently, which might allow you to consolidate storage and eliminate underused disks--thus taking some heat-producing devices out of the picture. And even if virtualization doesn't allow you to actually eliminate disks, it will likely help you forestall adding capacity, which would add to the data center heat load.
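The heat payoff of consolidation is easy to estimate, since virtually all of a drive's electrical draw ends up as heat the data center must remove. The back-of-envelope calculation below uses illustrative figures of my own (per-drive wattage and drive counts are assumptions, not vendor data or numbers from this article):

```python
# Back-of-envelope estimate of heat removed by consolidating
# underused disks. All figures are illustrative assumptions.
WATTS_PER_DRIVE = 12.0   # assumed draw of one 15,000 rpm spindle
DRIVES_BEFORE = 400      # drives at a low average utilization
DRIVES_AFTER = 240       # same data consolidated onto fewer, fuller drives

def watts_saved(before, after, watts_per_drive=WATTS_PER_DRIVE):
    """Power -- and therefore heat load -- eliminated by retiring
    the spindles that consolidation makes unnecessary."""
    return (before - after) * watts_per_drive

saved = watts_saved(DRIVES_BEFORE, DRIVES_AFTER)
# Retiring 160 assumed 12 W spindles removes about 1.9 kW of heat,
# before counting the extra cooling power no longer needed to remove it.
```

The point of the sketch is the direction, not the exact numbers: every spindle virtualization lets you retire is heat the air conditioning no longer has to fight.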
Computing companies are aware of the problem and many are actively pursuing solutions. Dell, HP, IBM and Sun, among others, have formed The Green Grid, an industry group that plans to share best practices "to lower the overall consumption of power in datacenters around the globe."
The door is open for more innovation, and while cooling is part of every product design, heat production needs to take a more prominent place in the process. The alternative is just too big a gamble for storage-dependent companies. If your storage systems get sizzled, not only is your company out of touch with its data, but VoIP and other messaging systems could go down, cutting off communications with the outside world.