Just as the power crunch that began in the server industry has spread to storage, the fixes may originate there too. According to some experts, recent server-side research into reducing the number of power conversions required to get energy from a wall socket into a computer could, if widely adopted, drastically cut power consumption in both servers and storage.
In many data centers, converting between alternating current (AC, or wall power) and direct current (DC, or battery power) takes multiple steps, according to Don Beaty, an IT consultant with DLB Associates and a member of the ASHRAE TC 9.9 committee, which focuses on data center power and cooling issues.
Bill Tschudi, principal investigator for the applications team in the environmental energy technologies division at Lawrence Berkeley National Laboratory (LBNL), said he has been working with server vendors, the U.S. Department of Energy (DoE) and the Environmental Protection Agency (EPA) to settle on a standard voltage and redesign servers to take "direct" power, meaning power fed straight from a DC-based uninterruptible power supply (UPS) at that standard voltage, in order to eliminate the extra conversion steps. Tschudi said his group has yet to begin working with storage manufacturers, but the conversion problem applies to virtually any computer, both outside the box and inside it: today's motherboards also perform multistep, inefficient conversions between voltage levels, many of them based on outdated designs from the 1970s and 1980s, when chips needed multiple voltage levels.
Moreover, cutting out conversions could be the closest thing there is to a magic bullet for reducing power consumption in the data center as a whole. "Even against the most efficient of today's power supplies, direct power improved efficiency 10% to 15%," Tschudi said. "Against the average conversion system, you could be talking more like 20% to 30%, if this is adopted on a wide scale."
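Tschudi's numbers follow from the way conversion losses compound: power passes through each AC/DC or DC/DC stage in series, so the stage efficiencies multiply. The sketch below uses illustrative stage efficiencies (assumptions for demonstration, not LBNL measurements) to show how a multi-stage chain compares with a shorter direct-DC path.

```python
# Illustrative sketch: end-to-end efficiency of chained power conversions.
# The stage efficiencies below are assumptions for demonstration,
# not measured values from LBNL or any vendor.

def chain_efficiency(stages):
    """Overall efficiency of conversion stages in series (losses multiply)."""
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# Conventional path: UPS rectifier, UPS inverter, server power supply,
# on-board voltage regulators.
conventional = chain_efficiency([0.90, 0.90, 0.85, 0.90])

# Direct-DC path: one rectification at the UPS, then on-board regulators.
direct_dc = chain_efficiency([0.92, 0.90])

print(f"conventional: {conventional:.1%}")  # 62.0% of wall power reaches the load
print(f"direct DC:    {direct_dc:.1%}")     # 82.8%
print(f"improvement:  {direct_dc - conventional:.1%} of input power")  # 20.8%
```

With these assumed numbers, trimming two conversion stages recovers roughly a fifth of the input power, which is the same order of magnitude as the 20% to 30% gain Tschudi cites against average conversion systems.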
The upshot for the storage market
Right now, it remains unclear which direction the storage industry will move. Some vendors, like Sun Microsystems Inc., are gung-ho about standardizing voltage levels within arrays as soon as 2.5-inch small form factor drives arrive.
"Right now, typically you have 5 volt and 12 volt conversions within enterprise systems," said Chris Wood, chief technology officer (CTO) of Sun's storage group. Wood said the higher voltage is necessary to spin 3.5-inch drives at high rpm, but that once small form factor drives hit the market, their smaller motors will require less voltage, which could make the 12 volt conversion obsolete.
"The whole industry is moving to small form factor drives," Wood said. "This is one of the reasons why."
One storage vendor that's been playing an "observational" role with LBNL's efforts, according to Tschudi, has been Network Appliance Inc. (NetApp). "They've been observing this [conversion] project and have done some work with us when it comes to improving efficiency in their own data centers through improved cooling designs."
According to Brett Battles, director of storage product marketing, "NetApp hasn't been heavily involved in LBNL efforts to date. We are continuing to evaluate new power supply offerings, but at this time we don't have any specific details to discuss, due to the early stages of the evaluation process."
Hitachi Data Systems (HDS) is another major storage vendor eyeing the direct-power option. "While it is early in the discovery process, we are investigating the use of DC power and other energy-saving alternatives," according to Claus Mikkelsen, chief scientist.
The battle between reliability and efficiency
Not every storage vendor is sold on power conversion reductions as a panacea. Storage system manufacturers Xyratex and Dot Hill Systems Corp. already offer some systems designed for direct power. However, according to Ken Claffey, product lead for power supplies for Xyratex, reducing power in storage systems isn't as simple as cutting out conversions, or even building a better power supply.
"Storage systems are limited [in terms of power efficiency development] by two things -- the need to power disk drives at a high rpm and the need for redundancy and reliability."
Typically, in the interest of high availability, even the most efficient power supplies are overprovisioned for the highest possible load in storage systems, sometimes twice the level the system really needs. Xyratex is always looking for higher efficiency power supplies, Claffey said, "but you have this battle between redundancy and efficiency." Xyratex is currently looking into a kind of dynamic load balancing for power supplies, which would allow them to change voltage according to system workload.
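The trade-off Claffey describes can be illustrated with a toy model. Power supplies are least efficient at light load, so a redundant pair splitting the work runs each unit well below its sweet spot. The efficiency curve and wattages below are hypothetical, not Xyratex data; they only sketch why dynamically steering load between supplies could recover efficiency.

```python
# Toy model of the redundancy-vs-efficiency trade-off in storage power supplies.
# The efficiency curve below is invented for illustration; real supplies
# publish measured curves at standard test points.

def psu_efficiency(load_fraction):
    """Interpolated toy curve: poor at light load, peaking around half load."""
    points = [(0.0, 0.50), (0.1, 0.70), (0.2, 0.82), (0.5, 0.90), (1.0, 0.87)]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if load_fraction <= x1:
            t = (load_fraction - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

def wall_power(output_watts, supplies, rating_watts):
    """Wall power drawn when `supplies` units share the load equally."""
    eff = psu_efficiency(output_watts / supplies / rating_watts)
    return output_watts / eff

load = 400.0    # watts the storage shelf actually needs (hypothetical)
rating = 800.0  # each supply sized to carry the full load alone (1+1 redundancy)

shared = wall_power(load, supplies=2, rating_watts=rating)  # each unit at 25% load
single = wall_power(load, supplies=1, rating_watts=rating)  # one unit at 50% load

print(f"load split across 2 supplies: {shared:.0f} W from the wall")  # 480 W
print(f"load steered to 1 supply:     {single:.0f} W from the wall")  # 444 W
```

In this toy model, steering the full load onto one supply saves about 7% at the wall. That is the kind of gain dynamic balancing targets; the hard part is capturing it without giving up the instant failover that the redundant supply exists to provide.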
Dot Hill also said there are better methods for reducing storage power consumption than focusing on conversions. Among them is MAID (massive array of idle disks), which Dot Hill said it has in the works.
Whatever the method, it's clear storage vendors are scrambling behind the scenes to catch up with their server counterparts, as users, as well as the government, are beginning to take notice. The EPA submitted an energy efficiency report to Congress in May focusing mainly on servers, but breaking out what it called "first-order estimates" for storage devices. The preliminary report estimates that the energy costs of complete enterprise storage systems (the networking, controllers and switches that surround the disk drives themselves) would be 50% higher than the energy costs of the hard drives alone. In other words, for every watt the drives draw, the surrounding infrastructure adds roughly another half watt.
"The IT industry is a great beachhead for the DoE," he said. "It's a relatively small group of companies that make up 1.2%, and climbing, of the country's overall energy usage."