RAID striping or concatenation
Designing storage for performance is an esoteric effort by nature, and there are quite a few variables that need to be taken into account. As you will see, there is a bit more to think about than just which RAID type to use.
Using a volume manager or other type of virtualization on top of standard RAID-5 LUNs (which is what you are doing) is one good way to improve I/O performance. One thing to consider is what happens when the application runs out of disk space and you need to add storage for capacity. If you are striping across a few LUNs in, say, a Veritas disk group and you need to expand the volume, application performance may be affected during the expand operation, since existing data typically has to be re-striped across the new set of LUNs.
If you are using LUN concatenation, you would simply add a new LUN to the disk group and extend the volume onto it, which has far less impact on the application. As long as the database is designed to use all of the LUNs in parallel (something like one file system per LUN), the concatenation method offers a reasonable trade-off between performance and ease of management for the application.
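To make the difference concrete, here is a minimal Python sketch (not any vendor's tool; the 64 KB stripe unit and 100 GB LUN size are made-up numbers) showing where consecutive blocks land under each layout:

# A minimal sketch contrasting how a logical block address maps to
# LUNs under RAID-0 striping vs. concatenation. All sizes assumed.

STRIPE_UNIT = 64 * 1024      # bytes per stripe unit (assumed)
LUN_SIZE = 100 * 1024**3     # bytes per LUN (assumed)
LUNS = ["lun0", "lun1", "lun2", "lun3"]

def striped_lun(offset: int) -> str:
    """Striping rotates consecutive stripe units across all LUNs."""
    return LUNS[(offset // STRIPE_UNIT) % len(LUNS)]

def concatenated_lun(offset: int) -> str:
    """Concatenation fills one LUN completely before touching the next."""
    return LUNS[offset // LUN_SIZE]

# Four sequential 64 KB writes land on four different spindles when
# striped, but all on lun0 when concatenated. That is why a
# concatenated layout relies on the database spreading its own files
# (e.g., one file system per LUN) to get any parallelism.
for i in range(4):
    off = i * STRIPE_UNIT
    print(striped_lun(off), concatenated_lun(off))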
When using RAID-0 striping on top of RAID-5 LUNs, make sure your LUNs are all provisioned from different parity groups. If you are using meta-volumes in the array, there is a chance you could end up striping within the same parity group, which would hurt performance and defeat the reason to stripe in the first place.
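If your array management tool can tell you which parity group each LUN lives in, a quick check like this can catch the problem before you build the stripe (the LUN-to-parity-group mapping below is invented for illustration):

# Sanity check that every LUN in a planned RAID-0 stripe set comes
# from a different parity group on the array. The mapping would come
# from your array management tool; these values are hypothetical.

lun_to_parity_group = {
    "lun0": "PG-01",
    "lun1": "PG-02",
    "lun2": "PG-02",   # problem: shares a parity group with lun1
    "lun3": "PG-04",
}

stripe_set = ["lun0", "lun1", "lun2", "lun3"]
groups = [lun_to_parity_group[lun] for lun in stripe_set]

if len(set(groups)) != len(groups):
    print("Warning: stripe members share a parity group:", groups)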
Database guys always like to go with larger numbers of smaller disks for performance. The problem is that you can't buy 36 GB drives anymore, and 146 GB will soon be the smallest drive available. Lots of smaller drives are good for random I/O, but fewer larger disks can work fine for sequential transaction log I/O.
A good methodology is to use partitioning within the RAID groups on the array to minimize seek time from the outermost disk cylinder to the end of the partition, which in effect creates those smaller drives for you. You can then use the outermost partitions on multiple RAID-5 groups to create your RAID-0 stripe within the volume manager. When assigning LUNs, use as many storage ports as possible and spread the load across all of the HBAs in the server. This will not only increase the available queues for the operating system, but also maximize the available bandwidth to the disks.
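As a rough illustration, assuming 146 GB drives and using only the outermost quarter of each, the arithmetic looks something like this (real seek time is not linear in distance, so treat these numbers strictly as ballpark):

# Back-of-the-envelope arithmetic for the "partition the outer
# cylinders" approach, using assumed numbers. Because outer zones
# hold more sectors per track, the outermost 25% of capacity spans
# fewer than 25% of the cylinders, so the seek bound is conservative.

DRIVE_GB = 146            # raw drive size (assumed)
OUTER_FRACTION = 0.25     # use only the outermost 25% of capacity

usable_gb = DRIVE_GB * OUTER_FRACTION
print(f"Short-stroke partition: ~{usable_gb:.0f} GB per drive")
print(f"Worst-case seek span under ~{OUTER_FRACTION:.0%} of full stroke")

# Eight such drives striped together still give a useful amount of
# fast-seeking capacity while behaving like the small drives
# database folks prefer.
print(f"8-drive stripe: ~{8 * usable_gb:.0f} GB")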
As far as queue depth is concerned, most HBA drivers default to 32 per LUN and 256 per port. Storage array ports typically support 256 or 512 queues per physical port (some support more, some less; check with your vendor). The trick is to use as many queues as you can without running into "queue full" conditions. You can change a driver's queue settings via its bundled management application (such as Emulex's HBAnyware) and try increasing the depth per LUN to 64 or 128 to see what happens.
Using more LUNs per volume manager disk group is better than using fewer, since this increases the available queues (at least three should be used per group for availability reasons).
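Putting those numbers together, a simple budget like the one below (using the default figures mentioned above; your vendor's limits may differ) shows when a given LUN count and queue depth combination risks queue-full conditions:

# Queue budgeting with the common defaults from the text. Verify the
# actual limits against your HBA driver and array documentation.

PORT_QUEUES = 256          # queues per array port (256 or 512 typical)
HBA_PORT_LIMIT = 256       # HBA driver default per port

def check(luns: int, depth: int) -> None:
    demand = luns * depth
    limit = min(PORT_QUEUES, HBA_PORT_LIMIT)
    verdict = "OK" if demand <= limit else "risk of QUEUE FULL"
    print(f"{luns} LUNs x depth {depth} = {demand} -> {verdict}")

check(8, 32)    # 256: right at the limit
check(8, 64)    # 512: over a 256-queue port; spread LUNs across more ports
check(4, 64)    # 256: deeper per-LUN queues work with fewer LUNs per port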
It is also advisable to keep random and sequential workloads separate by using different RAID groups for each workload type. Likewise, it is best practice to keep database log volumes on different physical spindles than the database itself.
Designing storage for performance is a valuable subject, and one that can be confusing to someone starting out. One way to gain insight into what the pros do is to visit the Storage Performance Council or Transaction Processing Performance Council Web sites to see how they provisioned storage for the benchmarks they ran.