
Want better performance? Try a data plaid

A data plaid aims to improve disk access performance by spreading I/O more evenly, and more randomly, across the physical disk drives.

It is well known that sequential and random access patterns call for different optimizations. Whether you tune with small block sizes for random operations or large block sizes for sequential transfers, those optimizations work best when the workload is close to completely random or completely sequential. A mixed workload gets the full benefit of neither.

Ideally, you want the activity spread out evenly over all the physical disks in the storage system because this produces the best performance with random access. The problem is that much of the time operations on databases aren't all that random. There is a tendency to develop hot spots of high disk accesses in certain areas of the database. Over time these hot spots will average out and cool down, but they can still affect database performance.

One way to improve performance and cool down hot spots is to use a data plaid. Plaids are stripes on stripes -- software striping overlaid on the Level 5 or Level 10 hardware striping of the RAID array. Because they add another layer of striping (RAID Level 0), they are sometimes called "Level 100" or "Level 50," depending on whether the technique is applied over Level 10 or Level 5 RAID. Call them what you will, the aim is the same: to make disk accesses to the physical disk drives more truly random.
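The stripe-on-stripe address arithmetic can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: the function name, stripe sizes, and LUN counts are all made-up parameters chosen for clarity.

```python
# Hypothetical sketch of two-level ("plaid") striping: a software RAID 0
# stripe laid over several hardware RAID arrays (LUNs), each of which
# stripes internally across its own physical disks. All names and
# parameters are illustrative.

STRIPE_KB = 8        # software (plaid) stripe unit
LUNS = 4             # hardware arrays visible to the host
DISKS_PER_LUN = 5    # physical drives behind each array
HW_STRIPE_KB = 64    # hardware stripe unit inside each array

def plaid_location(logical_kb):
    """Map a logical offset (in KB) to (lun, disk, offset_kb_on_disk)."""
    # Software RAID 0: round-robin 8 KB chunks across the LUNs.
    sw_stripe = logical_kb // STRIPE_KB
    lun = sw_stripe % LUNS
    lun_offset = (sw_stripe // LUNS) * STRIPE_KB + logical_kb % STRIPE_KB
    # Hardware striping inside the chosen LUN.
    hw_stripe = lun_offset // HW_STRIPE_KB
    disk = hw_stripe % DISKS_PER_LUN
    disk_offset = (hw_stripe // DISKS_PER_LUN) * HW_STRIPE_KB \
        + lun_offset % HW_STRIPE_KB
    return lun, disk, disk_offset
```

With these numbers, a contiguous 256 KB region -- which on a single array would queue up on a handful of spindles -- is chopped into thirty-two 8 KB chunks and dealt round-robin across all four arrays, which is exactly the hot-spot dispersal the plaid is after.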

Plaids are not a panacea, and like many performance enhancements they require care in choosing the relevant parameters. The technique works best with small I/O blocks, often on the order of 8 KB. That suits many transactional database applications, because the records being handled are small. EMC discusses plaids in a white paper on best practices for its CLARiiON storage products in relation to Microsoft's SQL Server.
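A quick simulation shows why a small stripe unit suits transactional loads. Assuming a made-up configuration of four arrays and an 8 KB plaid stripe, random 8 KB I/Os that would all hammer one array without a plaid instead land nearly evenly across all four:

```python
# Hypothetical demo: random 8 KB I/Os inside a "hot" 1 MB region of a
# database. Without a plaid they all hit whichever array holds that
# region; with an 8 KB software stripe over 4 arrays, the traffic
# spreads out. Parameters are illustrative.
import random
from collections import Counter

LUNS = 4              # hardware arrays under the software stripe
STRIPE_KB = 8         # plaid stripe unit
HOT_REGION_KB = 1024  # a 1 MB hot spot

random.seed(1)
hits = Counter()
for _ in range(10_000):
    offset = random.randrange(0, HOT_REGION_KB, STRIPE_KB)  # aligned 8 KB I/O
    lun = (offset // STRIPE_KB) % LUNS  # plaid: round-robin stripes over LUNs
    hits[lun] += 1

print(dict(hits))  # each LUN sees roughly a quarter of the traffic
```

With larger I/O blocks the picture changes: a single request spans several stripe units, ties up multiple arrays at once, and the plaid's randomizing effect fades.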

Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.
