Several factors, including type of load, bus bandwidth and cache, influence the performance of modern RAID systems.
Because of the large number of performance factors, it's often hard to disentangle the options enough to understand which ones will be most cost effective in your application when you're installing or upgrading a storage system.
The bad news is, of course, that storage system performance is generally highly application-specific. Different options produce different performance effects, depending on block sizes, file sizes, system loading and other factors. If you want the absolute best performance out of a storage system, you have to analyze your load and carefully tune the various parameters.
The good news is that there is, at the very least, a lot of help available from vendors and others on what works best for which kinds of loads. Recently, for example, Alliance Systems of Plano, TX, a VAR and manufacturer of computer equipment specializing in the communications industry, released the results of a study of performance factors in RAID arrays.
Alliance concentrated on the RAID controller and the paths to and from the storage array. Specifically, the company tested the effects of increasing the SCSI bus speed by going from U160 to U320 SCSI, increasing the size of the controller cache from 32 MB to 128 MB, and increasing the PCI bus width and speed from 32-bit/33 MHz to 64-bit/66 MHz. With the exception of the controller cache, all of these parameters are fixed by the installed hardware.
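The peak rates of these interconnects explain why the PCI bus matters so much. The short sketch below computes them from the published interface specs (the rates are standard spec numbers, not figures from the Alliance report) and shows which side of the controller is the theoretical bottleneck in each pairing:

```python
# Theoretical peak transfer rates for the buses in the study.
# These are standard interface-spec numbers, not Alliance's measurements.

def pci_bandwidth_mb_s(width_bits: int, clock_mhz: float) -> float:
    """Peak PCI bandwidth: bus width in bytes times clock rate."""
    return (width_bits / 8) * clock_mhz

scsi_rates = {"U160": 160.0, "U320": 320.0}  # MB/s per the Ultra160/Ultra320 specs

for width, clock in [(32, 33.0), (64, 66.0)]:
    pci = pci_bandwidth_mb_s(width, clock)
    for name, rate in scsi_rates.items():
        bottleneck = "PCI" if pci < rate else "SCSI"
        print(f"PCI {width}-bit/{clock:.0f} MHz ({pci:.0f} MB/s) + {name}: "
              f"limited by {bottleneck}")
```

Note that a 32-bit/33 MHz PCI bus peaks at roughly 132 MB/s, below even U160's 160 MB/s, so on the narrow bus PCI itself caps the transfer rate; only the 64-bit/66 MHz bus leaves the SCSI channel as the limiting link.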
As might be expected, each of these changes affected different types of load differently. For example, moving from a U160 to a U320 SCSI controller on a 32-bit bus produced a 165 percent speed increase in sequential writes of large files, but an 8 percent speed decrease in random reads of small files. A 64-bit bus reduced the positive differences between the U160 and U320 SCSI controllers and increased the negative differences in some applications, although the 64-bit controllers were naturally faster overall. Moving from a 64-bit U160 to a 64-bit U320 SCSI controller produced a smaller gain on large sequential files (47 percent) and a larger performance decrease (23 percent) on a mix of sequential and random reads and writes. This is probably because, as the bus becomes wider and the bandwidth increases, other factors such as disk seek time and cache behavior become more important.
Increasing the speed and width of the PCI bus also produced major improvements in random writes of large files and sequential reads of large files; here the wider, faster buses produced the best results. Reads and writes of small files, by contrast, showed slight decreases in performance as bus speed and width increased.
While SCSI type and SCSI and PCI bus width made major differences in performance, changing the amount of cache had less impact. As might be expected, writing small files improved considerably (123 percent) because more complete files could be held in cache. However, none of the other tests produced more than a 19 percent increase, and a number of them, notably those with large file sizes, actually showed a decrease.
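A back-of-envelope model shows why a bigger write-back cache helps small files far more than large ones. This is my own illustration, not Alliance's methodology, and the cache size, cache rate and disk rate below are assumed round numbers chosen only to make the shape of the effect visible:

```python
# Toy model: a write that fits entirely in controller cache is acknowledged
# at cache speed; anything larger spills over and runs at disk speed.
# All three constants are assumptions for illustration, not measured values.

CACHE_MB = 128          # assumed controller cache size
CACHE_RATE = 1000.0     # MB/s, assumed rate for writes absorbed by cache
DISK_RATE = 50.0        # MB/s, assumed sustained disk write rate

def ack_time_s(file_mb: float) -> float:
    """Seconds until the host sees the write as complete."""
    if file_mb <= CACHE_MB:
        return file_mb / CACHE_RATE          # fully absorbed by the cache
    # The cache fills, and the remainder is throttled to disk speed.
    return CACHE_MB / CACHE_RATE + (file_mb - CACHE_MB) / DISK_RATE

print(ack_time_s(1))     # small file: finishes at cache speed
print(ack_time_s(512))   # large file: dominated by the disk rate
```

Under these assumptions a 1 MB file completes in about a millisecond, while a 512 MB file is almost entirely disk-bound, so growing the cache shifts only the small-file numbers, which matches the pattern Alliance reported.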
According to the Alliance report, increasing the PCI bus width and speed produced a larger performance increase than varying any other single factor. The largest percentage increases came from raising both the PCI bus width and speed and going from the U160 to the U320 SCSI bus. However, the increases varied strongly by type of load. For example, one of the biggest increases, 121 percent in randomly writing small files, came from increasing the cache size.
It is also worth noting that in most of the tests, the effects on reading and writing files of the same size were not symmetrical. A change that produced a large increase in performance when sequentially writing a large file, for example, might produce a much smaller change when sequentially reading the same file.
A close examination of the data produced by the Alliance study implies something else as well. Most enterprises probably don't need to fine-tune their storage arrays, because beyond a certain, fairly general point, the results don't repay the effort. Especially with multiple systems all running a mix of applications, it is better to select a configuration and a set of parameters that work well and use them on all the storage arrays; the savings in time and effort will probably outweigh the benefits of precision tuning.
The full report is available on Alliance Systems' Web site.
Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.