Hot storage -- Power and cooling concerns

It costs more to run a storage device over three years than to buy it. Here are several steps you can take to cut, or at least control, spiraling storage energy costs.

Many total cost of ownership (TCO) models are seriously outdated and wildly inaccurate because they haven't been updated to include the increased cost to power and cool storage arrays, storage area network (SAN) switches and hosts.

"Through 2009, energy costs will emerge as the second-highest operating cost in 70% of worldwide data center facilities," declares Michael Bell, research vice president at Stamford, Conn.-based Gartner Inc., in a recent report. In addition, analysts expect U.S. companies will spend twice as much on power and cooling by 2009 as they did to acquire their IT devices. Today, servers account for 40% of the data center's overall power consumption. Storage isn't far behind, taking 37% of the overall power, says Bell.

Power costs aren't the only factor forcing organizations to rethink their TCO analyses. The cost of end-of-life disposal and emerging green regulations that require cradle-to-grave energy tracking -- costs that IT managers previously paid scant attention to -- also threaten to become significant factors. Even real estate prices are a factor as IT managers wrestle with packing equipment more densely into costly floor space or spreading it out to facilitate more efficient air flow and cooling.

"Power and floor space are probably our two biggest IT concerns right now," says Michael Thomas, special project director at a major Midwest financial organization with multiple data centers. Thomas has had to delay some project implementations while waiting for the electric utility to come up with more power.

While storage prices on a cost-per-gigabyte basis continue to drop, storage managers will find their best budgeting efforts undermined by power, disposal, energy tracking and real estate costs. However, the problem isn't insurmountable. Vendors are ramping up energy-efficient green systems and tools to manage energy usage. By 2011, Gartner's Bell expects power demands to level off or even decline as innovations and best practices combine to contain the problem. In the meantime, IT managers still have to deal with the problem.

End-of-life disposal challenges
Systems and storage gear contain hazardous materials, and organizations are legally responsible for how they dispose of old storage devices. End-of-life disposal of systems and storage will eventually fall under regulations like ISO 14000, a set of international standards for environmental management that guides organizations in developing both an environmental management system and a corresponding audit program.

DOs
  • Understand and include cost of disposal in your TCO analysis
  • Identify all of the costs associated with safe disposal, including eliminating data from hard drives
  • Insist on an audit trail
  • Expect more green regulations in the years ahead
DON'Ts
  • Don't try to dodge responsibility; almost every component contains a traceable serial number
  • Don't try to dispose on the cheap; use a reputable operator
  • Don't try to dump overseas; rules are even more stringent in Europe and are quickly being adopted in Asia
  • Don't delay in the hope that green requirements will ease

Unsustainable cost increases

"The ugly secret of smaller, faster, cheaper is that just because we can make it smaller and buy more of it, doesn't mean it is any more energy efficient," says Bob Gill, chief research officer at TheInfoPro Inc. To the contrary, smaller and cheaper means companies are buying more devices and packing them more densely into the data center. Even if the individual devices use less power, the aggregate number drives up energy consumption.

Gartner projects that more than 50% of data centers will exceed 6 kW per rack within two years; Bell expects that number to rise to 70% to 80% within four years as IT equipment grows denser, and the ratio of power-to-cooling costs to hit 1:1, up from 0.5:1 previously. In addition, electrical costs per rack will increase by a factor of four, he calculates. "The cost is basically unsustainable," concludes Bell.
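To see why, put rough numbers on a single rack. The Python sketch below assumes an illustrative electricity rate of $0.10 per kWh and round-the-clock operation; neither figure comes from Gartner.

    # Back-of-envelope annual electricity cost for one rack,
    # counting the IT load plus cooling at a given ratio.
    HOURS_PER_YEAR = 24 * 365
    RATE_PER_KWH = 0.10  # assumed illustrative rate; check your utility bill

    def annual_rack_cost(it_load_kw, cooling_ratio=1.0, rate=RATE_PER_KWH):
        total_kw = it_load_kw * (1 + cooling_ratio)
        return total_kw * HOURS_PER_YEAR * rate

    # A 6 kW rack at the new 1:1 cooling ratio draws 12 kW around the clock.
    print(f"6 kW rack, 1:1 ratio:   ${annual_rack_cost(6):,.0f}/year")       # ~$10,500
    print(f"4 kW rack, 0.5:1 ratio: ${annual_rack_cost(4, 0.5):,.0f}/year")  # ~$5,300

Going from a 4 kW rack at the old cooling ratio to a 6 kW rack at the new one roughly doubles the annual bill, and denser racks compound it further.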

IT must also begin to factor in costs for getting rid of end-of-life equipment. "Disposal now has to be part of the TCO analysis," says Adam Braunstein, senior research analyst at Robert Frances Group, Westport, Conn.

The price tag includes not only the cost of safe disposal but the cost of ensuring that data is effectively removed from disk drives. "A three-times overwrite is Department of Defense compliant, but you need at least a seven-times overwrite to be completely safe and 10 times is even better," says Braunstein.
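For illustration, here is a minimal sketch of that kind of multi-pass overwrite, alternating fixed and random patterns. The target path is hypothetical, the sketch sizes a plain file rather than a raw block device, and real disposal should rely on a certified erasure tool with an audit trail.

    # Minimal n-pass overwrite sketch; not a certified erasure tool.
    import os

    def overwrite_file(path, passes=7, block_size=1024 * 1024):
        size = os.path.getsize(path)  # plain files only; block devices need ioctl sizing
        with open(path, "r+b") as f:
            for p in range(passes):
                # Alternate a fixed zero pattern with random data across passes.
                pattern = b"\x00" * block_size if p % 2 == 0 else None
                f.seek(0)
                written = 0
                while written < size:
                    n = min(block_size, size - written)
                    f.write(pattern[:n] if pattern else os.urandom(n))
                    written += n
                f.flush()
                os.fsync(f.fileno())  # flush each pass out of the OS cache

    # overwrite_file("/tmp/retired-volume.img", passes=7)  # hypothetical path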

The cheapest option is to increase storage utilization. "You want to increase the utilization of the spinning motors and platters that you already have," says Jonathan Eunice, founder and principal IT advisor at Illuminata Inc., Nashua, N.H. Once the drive is spinning, additional utilization essentially costs nothing from an energy standpoint.
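The arithmetic behind that point: a drive draws roughly the same power whether it is 30% or 80% full, so every extra point of utilization lowers the energy cost per stored terabyte. A quick sketch with assumed drive figures:

    # Watts per stored terabyte at different utilization levels,
    # assuming an illustrative 500 GB drive drawing a constant 12 W.
    DRIVE_WATTS = 12.0
    DRIVE_TB = 0.5

    for utilization in (0.3, 0.5, 0.8):
        watts_per_tb = DRIVE_WATTS / (DRIVE_TB * utilization)
        print(f"{utilization:.0%} full: {watts_per_tb:.0f} W per stored TB")
    # 30% full: 80 W/TB; 50% full: 48 W/TB; 80% full: 30 W/TB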

Energy tradeoffs

Erie 1 Board of Cooperative Education Services (BOCES) is a longtime mainframe shop in West Seneca, N.Y., that provides applications and IT services to more than 100 public school districts in western New York. Chief information officer Carol Troskosky has moved the organization to new mainframes with the latest channel-attached storage. (Channel-attached describes the high-speed, direct interconnect between the mainframe and shared peripherals; in this case, shared IBM storage arrays.) She then boosted utilization by consolidating open systems using Linux on the mainframe, while capitalizing on the increased energy efficiency of big iron. Erie 1 BOCES has also joined with other agencies in New York to buy energy cooperatively. But Troskosky still expects energy consumption to increase. "We try to keep our energy costs as low as possible," she says, but the organization must still meet increased demand for its services.

Beyond consolidation, storage managers can deploy storage in more energy-efficient ways. If you don't need high performance, deploy 7,200 rpm or 10,000 rpm disks rather than 15,000 rpm models, as the slower speeds use less energy. Similarly, smaller form-factor (2.5-inch) disk drives require only 5 volts vs. 12 volts for standard 3.5-inch form-factor drives. Small form factors, however, usually have smaller capacity (see "Energy tradeoffs").
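One way to weigh those tradeoffs is to multiply per-drive wattage by the spindle count needed to reach a capacity target. The capacities and wattages below are assumed, illustrative figures (within the 5 W to 15 W per-drive range cited later in this article), not vendor specs:

    # Spindles and total watts needed to reach a 10 TB capacity target.
    # Capacities and wattages are illustrative assumptions, not vendor specs.
    import math

    drives = {
        "15,000 rpm 3.5-in": {"watts": 15, "tb": 0.30},
        "10,000 rpm 3.5-in": {"watts": 12, "tb": 0.40},
        "7,200 rpm 3.5-in":  {"watts": 9,  "tb": 0.75},
        "10,000 rpm 2.5-in": {"watts": 6,  "tb": 0.15},
    }

    TARGET_TB = 10
    for name, d in drives.items():
        count = math.ceil(TARGET_TB / d["tb"])
        print(f"{name}: {count} drives, {count * d['watts']} W")

With these inputs, the slower 3.5-inch drives win on watts per terabyte, while the small-form-factor drives save power per spindle but need far more spindles to hit the same capacity.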

Direct current (DC) can also be an energy-saving alternative. According to IDC, DC-powered equipment allows a portion of the heat load to move from the servers to the rectifiers, reducing heat at the system level by 20% to 40% versus a traditional alternating current (AC)-powered rack. "DC offers some efficiency, but you're mainly moving the problem someplace else," says TheInfoPro's Gill.

Rearranging the data center

Another option is to rearrange the data center for better cooling efficiency. Bloomsburg Hospital is an open-systems shop that just built a new data center that will eventually house 70 servers, each with as many as six direct-attached disk drives. Robert Theiss, chief information officer at the Bloomsburg, Pa., organization, planned the new data center with energy and cooling in mind. "We were worried about putting in a greater [energy] load," he says.

The hospital turned to American Power Conversion (APC) Corp., West Kingston, R.I., to engineer a new power and cooling system. "Right now, we're running at about 40% of our maximum power," says Theiss, which leaves room for expansion. For maximum cooling, Theiss spread the servers and storage over racks set up in two rows separated by three aisles. AC units push cool air over the front of each row to cool the entire system.

The cooling rule of thumb for raised-floor data centers has jumped from 4 kW to 6 kW per rack. "Beyond 6 kW, you can't cool with just a raised floor. Today, a lot of gear is running over 4 kW per rack, which is getting close to the threshold," says Gartner's Bell.

In response, large organizations are creating hot and cool aisles, and using blanking panels within racks to assist with air flow. Cool air is pushed into the bottom of the rack from the cool aisle and exits as hot air from the top of the rack into the hot aisle (see "Data center design and air flow," below).

[Figure: Data center design and air flow]

Offline savings

Another option is to move data offline. Tape not only costs less than disk but uses less energy and requires less cooling. In her analysis of SATA disk and LTO tape, "the cost to acquire, power and cool a disk system is almost eight times that of a tape library," says Dianne McAdam, director of enterprise information assurance at The Clipper Group Inc., Wellesley, Mass. Of course, this means giving up the performance of disk.
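A simple three-year model shows how such a comparison is built; the acquisition prices and wattages below are assumptions for illustration, not Clipper Group figures, so plug in your own quotes.

    # Toy three-year disk-vs-tape cost model: acquisition plus
    # power and cooling. All inputs are illustrative assumptions.
    HOURS_PER_YEAR = 24 * 365
    RATE = 0.10  # $/kWh, assumed

    def three_year_cost(acquire, watts, cooling_ratio=1.0):
        kwh = watts / 1000 * (1 + cooling_ratio) * HOURS_PER_YEAR * 3
        return acquire + kwh * RATE

    disk = three_year_cost(acquire=120_000, watts=3_000)  # SATA array
    tape = three_year_cost(acquire=25_000, watts=300)     # LTO library
    print(f"disk ${disk:,.0f} vs. tape ${tape:,.0f} ({disk / tape:.1f}x)")

With these toy inputs the ratio comes out near 5x; real quotes and duty cycles determine how close a given shop gets to McAdam's figure.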

Vendors of online archival storage systems, such as Copan Systems Inc., offer disk systems that spin down drives holding infrequently accessed data. Copan has recently begun touting its energy efficiency, claiming to be five times more energy efficient per terabyte than conventional storage.

"Copan could deliver interesting energy savings," says McAdam. However, "whenever you power down disks, there are potential problems bringing back individual drives," she warns. Some data may not come back. Copan automatically powers up each idle drive at least once a month to check for data errors and rebuilds the drive if necessary.

At his Midwest financial organization, Thomas uses some Copan arrays -- but not because of any promised energy savings. "We use Copan in our biggest data centers to replace tape because of floor space issues," he says. The smaller Copan footprint was quite attractive. "When we look at all of our data center costs, real estate is still a bigger headache than power," notes Thomas.

After boosting utilization, rearranging the data center and moving data offline, storage managers are left with replacing storage devices with more efficient ones. Healthy Directions LLC, a large newsletter publisher in Potomac, Md., reduced its power consumption by 50% over the last few years by replacing old servers and consolidating DAS storage onto a 10 terabyte (TB) StoneFly Inc. iSCSI SAN, says Edward Brookhouse, principal engineer, network operations. However, he fears energy consumption will go up as the organization migrates to densely packed blade servers.

New tools and metrics

Some vendors, including EMC Corp., Hewlett-Packard Co. (HP), IBM Corp. and Sun Microsystems Inc., are starting to provide tools that measure power consumption at the device level, letting storage managers manage energy the way they manage other aspects of storage. New energy metrics are also entering the storage lexicon. Kilowatts and kilowatt-hours are the standard power and energy metrics; applied to storage, they yield kilowatts per terabyte. A more common metric at this point is kilowatts per rack. Due to increased density, data centers today are pushing beyond 4 kW per rack; at 6 kW per rack, they're entering a heat danger zone.

An individual drive uses 5 W to 15 W of power depending on its capacity, rotation speed, form factor and operating state, but "you can't just multiply the number of drives in an array by some average power rating to get a total," says Mark Greenlaw, senior director of storage marketing at EMC. Because controllers and other components draw power too, the array consumes more than the sum of its drives. Copan Systems proposes two metrics for archival data storage: storage density measured in terabytes per square foot, and terabytes per kilowatt.
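Greenlaw's caveat can still be approximated with an overhead multiplier on the drive total; the 1.4x factor and drive figures below are assumptions for illustration.

    # Estimate array power as drive power plus a controller/fan
    # overhead factor, then derive the kW/TB and TB/kW metrics above.
    # The 1.4x overhead and drive figures are assumptions.
    def array_power_kw(drive_count, watts_per_drive=10.0, overhead=1.4):
        return drive_count * watts_per_drive * overhead / 1000

    drive_count, tb_per_drive = 240, 0.5
    kw = array_power_kw(drive_count)
    tb = drive_count * tb_per_drive
    print(f"{kw:.2f} kW, {kw / tb:.3f} kW/TB, {tb / kw:.0f} TB/kW")
    # 3.36 kW, 0.028 kW/TB, 36 TB/kW for this assumed configuration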

Storage managers also need to consider SAN switch power and cooling. Switches consume less power in the data center than servers or storage, mainly because there are relatively few of them. Still, the power consumption of a switch is significant. "A large switch will use 1,000 W [1 kW] or more," says Ardeshir Mohammadian, senior power systems engineer at Brocade Communications Systems Inc. Higher port density and performance increase a switch's power and cooling demands. Don't be surprised to see kilowatts-per-port and kilowatts-per-gigabyte-per-second metrics soon.
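Per-port figures fall straight out of the nameplate numbers. Using Mohammadian's 1 kW example and an assumed 256-port director-class switch:

    # Watts per port for a director-class switch; the 256-port
    # count is an assumed example configuration.
    switch_kw, ports = 1.0, 256
    print(f"{switch_kw * 1000 / ports:.1f} W per port")  # ~3.9 W/port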

Energy bills -- now running at $60 per square foot for the data center, according to Gartner's Bell -- currently go to the facility manager or chief financial officer, not to the storage manager. Data center space is handled by the real estate department. To lower energy costs, there needs to be more coordination among the disparate departments.

As energy costs and consumption rise, new tools -- from low-power chips to digitally addressable power supplies that can regulate power to the device's changing requirements -- are being developed to more effectively manage energy. Power, cooling, space and disposal are becoming integral, closely watched parts of the TCO analysis for every storage device the organization buys.

This article first appeared in the March 2007 issue of Storage magazine.
