SPC Benchmark 1C/Energy (SPC-1C/E) is an extension of the SPC Benchmark 1 (SPC-1) specification that adds power consumption measurements to the existing performance workload.
The benchmark first measures the SPC-1 IOPS performance of submitted products under various load conditions, including idle, as well as 100%, 95%, 90%, 80%, 50% and 10% of the full SPC-1 workload. SPC-1C/E also defines three levels of daily usage -- low, moderate and heavy -- according to the number of hours in an average day a system can be expected to be idle, moderately used or heavily used. Those hour allocations are currently set by the vendor.
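The composite idea can be sketched as a weighted average: power draw measured at each usage level, weighted by the hours per day spent at that level. This is an illustrative sketch only; the hour splits, wattage readings and function name below are hypothetical examples, not the actual SPC-1C/E formula.

```python
# Illustrative sketch of a daily-usage-weighted power average.
# The hour splits and wattage figures are hypothetical, not SPC-published data.

def average_nominal_power(hours_by_state, watts_by_state):
    """Weight measured power draw by the hours per day spent in each state."""
    total_hours = sum(hours_by_state.values())
    assert total_hours == 24, "daily usage profile must cover 24 hours"
    weighted = sum(hours_by_state[s] * watts_by_state[s] for s in hours_by_state)
    return weighted / total_hours

# Hypothetical "moderate" daily usage profile (hours are vendor-set in SPC-1C/E).
hours = {"idle": 8, "moderate": 12, "heavy": 4}
watts = {"idle": 150.0, "moderate": 165.0, "heavy": 180.0}  # example readings

print(average_nominal_power(hours, watts))  # 162.5
```

The weighting is why an idle-heavy profile can still yield a high average if, as Baker notes below, the system draws more power while idle doing housekeeping.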
Although the value of benchmarks such as those from the Storage Performance Council has been questioned because tests may not reflect real-world use of storage systems, Brian Garrett, technical director of ESG Lab at Milford, Mass.-based Enterprise Strategy Group (ESG), said SPC-1C/E is helpful because of today's emphasis on power consumption. "Economic pressures combined with the power and space crunch in the data center has data center managers looking to make energy-smart decisions," he said. "The storage industry doesn't have Energy Star ratings for devices -- this is the first real-world workload for measuring power usage in the industry."
Walter Baker, SPC administrator and auditor, said it's important to create a composite benchmark because the power draw of a system may vary under different workload conditions.
"Sometimes the power can increase on an idle system, which is very counterintuitive," he said. This is because some storage systems use the time they're not responding to application I/O to do housekeeping, which can sometimes keep the system busier than an application workload.
"With that knowledge," he added, "the end user can specify whether an existing system should do housekeeping every day, or understand whether or not that process is configurable before a purchasing decision is even made."
Calculating yearly energy costs
Another component of SPC-1C/E calculates the number of kilowatt hours the system can be expected to draw in a year based on the average nominal power draw. That number is multiplied by the cost per kilowatt-hour (kWh) to find the annual energy cost of the system.
The first published benchmarks are for IBM's System Storage EXP 12S with solid-state drives (SSDs) and Seagate Technology LLC's Savvio 10K.3 drive. The EXP 12S with eight 69 GB SSDs, combined with an IBM Power 570 9117-MMA server with eight CPUs and 16 GB of RAM attached to the EXP 12S through a PCI-X SAS RAID Adapter, produced an average nominal power metric of 162.72 watts and 121.31 IOPS/watt. Based on an estimated cost of 12 cents per kWh, the total annual power cost of the tested configuration was calculated to be $171.05. For the Seagate Savvio hard drive, average nominal power was 201.53 watts and 17.43 IOPS/watt, for an annual energy cost of $211.85.
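The arithmetic behind those dollar figures can be reproduced directly from the method described above: the average nominal power draw is extrapolated over a year (8,760 hours) and multiplied by the price per kWh. A minimal sketch:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_energy_cost(avg_watts, dollars_per_kwh=0.12):
    """Annual kWh at the average nominal power draw, times the price per kWh."""
    kwh_per_year = avg_watts * HOURS_PER_YEAR / 1000.0
    return kwh_per_year * dollars_per_kwh

# Figures from the first published results, at the estimated 12 cents/kWh:
print(round(annual_energy_cost(162.72), 2))  # 171.05 (IBM EXP 12S with SSDs)
print(round(annual_energy_cost(201.53), 2))  # 211.85 (Seagate Savvio 10K.3)
```

Substituting a local electricity rate for the default 12 cents per kWh gives the site-specific estimate the SPC's planned end-user tool is meant to provide.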
SPC-2 and end-user benchmarks to follow
SPC-1C was chosen for the first power consumption benchmark because reliably measuring the power draw of larger and more complex systems is often difficult, Baker said. An SPC Benchmark 2C (SPC-2C) spec comparing energy draw against sequential throughput workloads in small systems will be released in approximately a month. Benchmarks for larger enterprise storage subsystems will take much longer to develop.
In the next few months, the Storage Performance Council will add a tool that lets customers specify the daily usage workloads in their environment and apply their own actual price per kWh of electricity. The energy analysis, however, would still be based on vendors' published average nominal power draw results.
The fact that a single 10K.3 hard disk drive from Seagate drew more power, and cost more to power, than the array of eight SSDs tested by IBM raises the question of what the IBM results would be with hard drives rather than flash. But Bruce McNutt, a senior scientist and engineer at IBM and a member of the SPC-1C/E subcommittee, declined to say whether IBM had plans to submit a comparable benchmark based on hard disks.
The SPC's Baker made it sound unlikely that the Storage Performance Council will offer public comparative benchmark results. "We encourage end users, if the comparison is of interest, to encourage their vendor to produce those benchmark numbers, share internal benchmark results or conduct an audited benchmark test with SPC under NDA" if the vendor isn't comfortable publishing the results, he said.
But according to ESG's Garrett, a truly effective industry standard benchmark requires as much public participation from as many vendors as possible. "The only way we'll get more results to compare is if end users ask these questions as part of their requests for quotes from vendors," he said.
Still, Garrett expects customers to push vendors for these numbers. "It's a hot issue for end users right now, and we have these great new technologies like SSDs and drive spin down or MAID, with great potential to save energy," he said. "Until we know how much energy they save, we won't be able to effectively include them in purchasing decisions."