This article can also be found in the Premium Editorial Download "Storage magazine: Continuous data protection (CDP) and the future of backup."
Despite the attention they get, storage benchmarks can be manipulated to unfairly compare products with vastly different configurations.
Although storage performance is one of many considerations when selecting a storage system, performance benchmarking results get the most headlines. IBM Corp.'s July news release that touted the record-breaking Storage Performance Council (SPC) result for its System Storage SAN Volume Controller (SVC) 4.2 is a prime example of how companies play up their benchmarking news.
It's no secret that storage vendors are eager to cite performance improvements of their latest arrays, often without any reference to the configuration tested, the conditions under which the performance boost can be expected or how the testing was conducted. For example, EMC claimed earlier this year that "The new EMC Symmetrix DMX-4 series will improve performance by up to 30%," but failed to say under what conditions and in what configuration it tested the DMX-4. If performance benchmarking is mostly a marketing tool for storage vendors to pump up their products, are benchmarking numbers of any value to users?
The benchmarking challenge
At first glance, measuring the performance of a storage system doesn't appear to be too difficult a task. But the benchmarking process can be easily manipulated because of the large number of variables that influence performance results. With everything else unchanged, performance greatly depends on factors such as the number and type of disk drives, the amount of cache, the RAID level and the characteristics of the IO workload, including block size, read/write mix and queue depth.
To make matters worse, the IO profiles of real-world applications vary widely. Benchmarks will always have a limited number of workloads and IO requests, which might not be completely representative of a specific application. Testing the raw performance of storage systems irrespective of apps is a valid way of looking at storage performance as long as it's understood that the hardware performance may not be representative of an application's performance.
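How much a single test parameter can swing the numbers is easy to demonstrate. The following is a minimal sketch, not a real benchmark tool like those used in SPC testing; the function names (`measure_read_throughput`, `run_profiles`) and the choice of block sizes are ours, chosen only to show that the same storage device reports different "results" depending on the workload profile the tester picks:

```python
import os
import tempfile
import time


def measure_read_throughput(path, block_size, total_bytes):
    """Time sequential reads of total_bytes from path in block_size
    chunks and return the apparent throughput in MB/s."""
    start = time.perf_counter()
    bytes_read = 0
    with open(path, "rb") as f:
        while bytes_read < total_bytes:
            chunk = f.read(block_size)
            if not chunk:
                f.seek(0)  # wrap around so a small file can be reread
                continue
            bytes_read += len(chunk)
    elapsed = time.perf_counter() - start
    return (bytes_read / (1024 * 1024)) / max(elapsed, 1e-9)


def run_profiles(path, total_bytes=4 * 1024 * 1024):
    """Run the same 'benchmark' under three different block sizes.
    Each entry maps a block size in bytes to measured MB/s."""
    return {bs: measure_read_throughput(path, bs, total_bytes)
            for bs in (512, 4096, 65536)}


if __name__ == "__main__":
    # Create a 1 MiB scratch file to read back.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(os.urandom(1024 * 1024))
        scratch = tmp.name
    try:
        for bs, mbps in run_profiles(scratch).items():
            print(f"block size {bs:>6} B: {mbps:10.1f} MB/s")
    finally:
        os.unlink(scratch)
```

Even this toy run typically shows large-block reads reporting far higher MB/s than 512-byte reads against the same file, which is exactly why a headline number is meaningless without the block size, read/write mix and configuration that produced it.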
This was first published in October 2007