Does performance rule in mainframe storage?
Faster, faster, faster has always been the mantra for selecting computers and computer peripherals. Make it faster. Buy the fastest one you can get, because performance is king. That has been especially true of storage products. But a new research note from Gartner Research says this is no longer the case for mainframe storage.
Is performance still a key factor in purchasing mainframe storage? Perhaps not, says Gartner Research in a recent research note.
In a note titled "EMC Enhances the Symmetrix 8000 Family," Gartner says that "in the large majority of procurements [of mainframe-class storage], performance should not be a major factor in decisions."
There are two reasons for this, the note says. "One is that the current high-end storage boxes configured with adequate capacity from all the vendors are powerful enough to operate below saturation and therefore are adequate to do the job. The second is that the differences in software, SAN solutions, service and support, functionality and price are far greater and therefore more deserving of the user's attention during the evaluation process."
For one thing, the note points out, all the storage vendors use the same few models of disk drives. The major factor in determining application performance is latency, and since the vendors are using the same disks, disk latency is going to be quite similar across products. The other component of that performance is cache latency, and the note says that "All the vendors have a very low latency to cache."
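The arithmetic behind that argument can be sketched as a weighted average of cache and disk service times. The function and all of the figures below are hypothetical illustrations, not numbers from the Gartner note: if two arrays are built on the same disk models and both have low cache latency, their blended latencies end up close together.

```python
# Illustrative sketch (not from the Gartner note): the effective latency
# a request sees is a weighted average of cache and disk service times.
# All figures below are hypothetical.

def effective_latency_ms(cache_hit_rate: float,
                         cache_latency_ms: float,
                         disk_latency_ms: float) -> float:
    """Blend cache and disk service times by the cache hit rate."""
    return (cache_hit_rate * cache_latency_ms
            + (1.0 - cache_hit_rate) * disk_latency_ms)

# Two hypothetical arrays built on the same ~8 ms disks, with similar
# cache latency, differ little at a 90% cache hit rate.
vendor_a = effective_latency_ms(0.90, 0.5, 8.0)  # ~1.25 ms
vendor_b = effective_latency_ms(0.90, 0.6, 8.0)  # ~1.34 ms
print(f"{vendor_a:.2f} ms vs {vendor_b:.2f} ms")
```

On these assumed numbers, the disk term dominates both totals, which is why identical disks push the vendors' results together.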
If performance is an issue, which the note estimates happens in "a small, single-digit percentage of the procurements," the purchaser faces a major difficulty in getting enough information to make an informed decision. The only really satisfactory way, the note says, is to benchmark the products using actual application data, which is expensive and time-consuming.
"Finally," the report goes on, "a gentle reminder that the typical data sheet claims from vendors based on sums of theoretical bandwidths, measurements of single-block repetitive reads, cache sizes and other marketing methodology are absolutely unreliable as indicators of actual application performance. Let the buyer beware."
About the author:
Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.