Joe Meyer, senior storage architect at Level 3 Communications LLC, manages close to 1
"The downside to the DMX-3 series is that each rack requires two 220V power whips," he said. The older arrays "would run on two or four plugs for the whole box, now you need two per bay," meaning that fully loaded, the box is drawing more power, he explained. Meyer said he thinks it is "over-engineered a little bit," and that each rack doesn't need all that power. However, it's designed this way so that each bay can support the rest in a failover configuration.
Meyer noted that EMC is managing Level 3's hardware for the company, and therefore he doesn't need to worry about watching the boxes too closely for power consumption issues. "There are other more important things for us to track, like utilization," he said.
EMC doesn't break down individual power and cooling metrics for its users, as the products all have a "call home" feature that reports directly to EMC when resources inside the box are breaking down. In general, Wambach said people should be thinking of power consumption in a larger context than just hardware. "How do you consolidate platforms and drive up efficiency? Are you on the correct storage tier for a certain application, and are you getting the best power per capacity rates?" he asked. "We've seen customers at a point where they are completely maxed out of power and literally have to unplug something before they can roll something else in."
To help gauge power requirements before this kind of crisis occurs, EMC's sales team uses a tool called the Symmetrix Power Calculator, which shows how much power a specific configuration will draw and what the cooling requirements will be. The company is also considering a service offering that would evaluate users' existing resources against future requirements. "We look at what kind of trajectory the customer is on and how soon they will run out of power," Wambach said.
Level 3's Meyer noted that workloads change so often that the tool can only really provide a "good estimate."
For businesses in Europe, where energy costs are almost double U.S. rates, the situation is a little more serious.
Stefan Schneider, storage manager at Helsana Versicherungen AG, said better power management in the disk subsystems would be "perfect," as the number of disks his company uses is increasing enormously. Helsana has a Hitachi Data Systems Inc. (HDS) USP600 with 32 TB and a 9585V with 30 TB, double its capacity of a year ago. Neither system is equipped with power management functions today. "It would definitively make sense having a power and heat meter in the arrays so we can see the actual figures with history and trends," Schneider said.
Like EMC, HDS collects alarms and alerts from its systems directly, but its arrays offer no diagnostic reporting to the users themselves.
"Power requirements are top of mind among our customers. But where's the best place for a reporting mechanism … Maybe it should be in a central data center power and cooling unit," said Steve Smith, product marketing manager at HDS.
Meanwhile, both EMC and HDS are promoting the idea of using cheaper, slower hard drives that use less power for applications that do not require immediate access as a way to manage this problem.
There are a couple of emerging technologies that address power and cooling for storage in a more radical way. Thin provisioning, pioneered by 3PARdata Inc., is one. This allows users to install fewer disks in their arrays by provisioning capacity on an as-needed basis. Another term for it is overallocation. It allows applications to be allocated more storage capacity than has been physically reserved on the storage array itself. Physical storage capacity on the array is only dedicated when data is actually written by the application, not when the storage volume is initially allocated. This saves on the operating costs -- electricity and floor space -- associated with keeping unused disks spinning. "If a 10 TB volume only needs 1 TB of physical disk, that's one-tenth the power and processing," HDS' Smith said.
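The overallocation idea can be sketched as a toy model. This is purely illustrative; the class, names, and block sizes are hypothetical and do not represent any vendor's implementation:

```python
# Toy model of thin provisioning: a volume reports a large logical size,
# but physical blocks are dedicated only when data is actually written.

class ThinVolume:
    def __init__(self, logical_gb):
        self.logical_gb = logical_gb  # capacity promised to the application
        self.blocks = {}              # physical blocks, allocated on write

    def write(self, block_id, data):
        self.blocks[block_id] = data  # physical capacity is consumed here

    def physical_gb(self, gb_per_block=1):
        return len(self.blocks) * gb_per_block

# A "10 TB" volume that has only seen 1 TB of writes:
vol = ThinVolume(logical_gb=10_000)
for block in range(1_000):            # 1,000 x 1 GB blocks written
    vol.write(block, b"...")

print(vol.logical_gb)     # 10000 -- what the application sees
print(vol.physical_gb())  # 1000  -- disks (and watts) actually committed
```

The gap between the two numbers is exactly Smith's point: only the written tenth of the volume needs spinning disks behind it.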
The second technology, pioneered by Copan Systems Inc., is a massive array of idle disks (MAID) architecture, which spins up disks only when they are needed, cutting energy consumption and extending disk life.
"EMC [and others] are taking traditional storage and putting cheaper drives in them, but this does not solve the problem, it's just a patch," said Aloke Guha, founder and chief technology officer (CTO) of Copan. "It's like saying I'm going to put incandescent light bulbs throughout my house when you don't have to have all the lights on in the first place … If you're not accessing data, why keep the power consumption and mechanics on?"
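Guha's light-bulb analogy can be illustrated with a toy MAID model in which a disk draws full power only while spun up for access. The wattages and class below are assumptions for illustration, not Copan's design:

```python
# Toy MAID model: disks stay spun down until data on them is accessed,
# so only actively used disks draw full power.

ACTIVE_W, IDLE_W = 10.0, 0.5  # hypothetical per-disk power draw

class MaidArray:
    def __init__(self, num_disks):
        self.num_disks = num_disks
        self.spun_up = set()

    def read(self, disk_id):
        self.spun_up.add(disk_id)  # spin up on demand

    def power_draw(self):
        active = len(self.spun_up)
        return active * ACTIVE_W + (self.num_disks - active) * IDLE_W

array = MaidArray(num_disks=100)
print(array.power_draw())  # 50.0 -- everything spun down
array.read(7)
array.read(42)
print(array.power_draw())  # 69.0 -- only two disks at full power
```

A conventional array, by contrast, would hold all 100 disks at full draw (1,000 W in this toy model) regardless of access patterns.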
Copan's storage product has attracted 70 customers so far, many in the last three to four months as the energy crisis sets in. "Customers are doing the math, and they are afraid … they have to provide access to their data, but it's expensive to keep traditional systems constantly running," Guha said. He claims the Copan system uses 11W of power per 1,000 GB, versus a standard array that sucks up 51W. One of Copan's users in the Fortune 1000 bracket boasts energy savings of between $280,000 and $800,000, depending on the type of disks used.
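Guha's per-terabyte figures can be turned into a rough annual cost comparison. The electricity rate and array size below are assumptions for illustration; only the 11 W and 51 W per 1,000 GB figures come from the article:

```python
# Rough annual energy cost using the article's figures: 11 W per TB for
# MAID versus 51 W per TB for a conventional array.

capacity_tb = 500        # assumed array size
rate_per_kwh = 0.10      # assumed U.S. electricity rate, $/kWh
hours_per_year = 24 * 365

def annual_cost(watts_per_tb):
    kwh = watts_per_tb * capacity_tb * hours_per_year / 1000
    return kwh * rate_per_kwh

maid = annual_cost(11)          # about $4,818 per year
conventional = annual_cost(51)  # about $22,338 per year
print(round(conventional - maid))  # 17520 -- savings before cooling overhead
```

Cooling typically adds a further multiple of this, since every watt dissipated by disks must also be removed by the data center's air handling.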
HDS said it's taking a look at the MAID architecture but has made no decisions about how it may use it at this stage.
In the coming months, John Webster, principal analyst at Illuminata Inc., expects all the major storage array vendors to start making noise about how their systems handle power consumption. "It could be a differentiator down the line," he said.