This article can also be found in the Premium Editorial Download "Storage magazine: Storage Products of the Year 2005."
WHEN QUANTIFYING the performance of a storage array, there are two main metrics to consider: IOPS, or the number of I/O requests the array can process per second; and raw throughput, which is measured in MB/sec. The Storage Performance Council (SPC), a non-profit consortium of 26 storage vendors, has long supplied the SPC-1 benchmark for measuring IOPS. This past December, the council released the SPC-2 benchmark, which looks at the large-scale, sequential movement of data.
Specifically, SPC-2 consists of three separate workloads: processing large files (like those used by CAD/CAM and scientific applications), large database queries (for data warehousing and data mining), and video on-demand. The results of those three workloads, measured in MB/sec, are then averaged to produce the SPC-2 MB/sec metric; the total cost of the array is then divided by the MB/sec figure to derive the SPC-2 Price-Performance result.
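The arithmetic described above can be sketched in a few lines. This is an illustrative reconstruction of the two derived metrics, not SPC's official calculation code, and the workload throughput and cost figures below are made up for the example:

```python
# Hedged sketch of the two SPC-2 metrics as defined above.
# All numbers here are illustrative, not published results.

def spc2_metrics(workload_mbps, total_cost_usd):
    """Return (SPC-2 MB/sec, SPC-2 Price-Performance in $ per MB/sec)."""
    # The SPC-2 MB/sec metric is the average of the three workload results.
    mbps = sum(workload_mbps) / len(workload_mbps)
    # Price-Performance divides the array's total cost by that average.
    return mbps, total_cost_usd / mbps

# Hypothetical large-file, database-query and video-on-demand results:
mbps, price_perf = spc2_metrics([1200.0, 900.0, 1500.0], 360_000.0)
print(mbps, price_perf)  # 1200.0 MB/sec, $300.00 per MB/sec
```

With these invented inputs, a $360,000 array averaging 1,200 MB/sec across the three workloads scores $300 per MB/sec, which is how a figure like the $137.52 or $563.92 results cited below is produced.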
So far, four vendors have submitted SPC-2 results: Fujitsu Computer Systems with its Eternus6000 Model 900; Hewlett-Packard for its StorageWorks Enterprise Virtual Array 8000; IBM for its TotalStorage SAN Volume Controller (SVC) 3.1 and DS8300; and Sun Microsystems with its StorEdge 6130 and 3510.
The SPC-2 results submitted thus far have delivered few surprises. IBM's SVC 3.1 posted the best SPC-2 MB/sec result at 3,517.75, but it also had the highest (that is, worst) SPC-2 Price-Performance ratio at $563.92 per MB/sec. The lowest SPC-2 MB/sec number came from Sun's 6130 array, which also had the lowest (best) Price-Performance result, $137.52 per MB/sec.
But that was probably SPC-2 members' intent, says Craig Parris, competitive analysis engineer at Seagate, which participated in the SPC-2 committee. "Wave 1 [of the SPC-2 benchmarks] wasn't intended to be high performance; it was to show the potential of SPC-2," he says. Vendors, he predicts, "will take the gloves off in 2006, and marketers will start saying 'Mine is the fastest.'"
Even so, Tony Asaro, senior analyst at Enterprise Strategy Group in Milford, MA, says that as far as benchmarks are concerned, SPC doesn't have much clout among end users. "I asked about 20 end users what they thought of SPC; 18 hadn't heard of them and the other two didn't think very highly of it," he says. SPC's problem, Asaro says, is that it's "easy to manipulate" and "it's still the vendors themselves that are performing the tests."
Geoff Hough, director of product marketing at SPC member 3PAR, disputes that idea. "The results are heavily audited," he says, and full disclosure requires vendors to publish the details of their configurations. "We try hard to create benchmarks that can be reproduced in customer environments."
One thing hampering end users' acceptance of the SPC benchmarks may be the refusal of key vendors like EMC, Hitachi Data Systems and Network Appliance (NetApp) to benchmark their arrays, even as they tout those arrays' performance. "They're in a bit of a contradictory position," Hough says. NetApp, for example, has a white paper comparing the transactional block performance of its FAS3020 array to EMC's CX500, based on a test that Hough says closely resembles SPC-1.
"By publishing those results, they're suggesting that performance is a key purchasing criterion, yet only if the tests are performed on their terms," says Hough. Similarly, he says, EMC happily publishes SPEC SFS results, but dismisses SPC as not being "real-world."
This was first published in February 2006