What you will learn from this tip: Designing storage for performance can be a very esoteric effort. Learn what variables you should take into account during the design process.
A SearchStorage.com reader recently asked: We have a system running with data disks as RAID-5 LUNs. Which is better for performance in read/write operations, RAID-0 striped over small RAID-5 LUNs or RAID-0 concatenated over big RAID-5 LUNs?
For the sake of this discussion, I will assume you are using a host-based volume manager on top of a storage array that provides RAID-5 based LUNs to the server. Designing storage for performance is an inherently esoteric effort. There are quite a few variables that need to be taken into account:
- What is the nature of the I/O? (Random or sequential?)
- What is the read/write ratio? (80/20, 60/40?)
- What type of storage is being used? (Monolithic with global cache? Modular with dual controllers?)
- What type of disks are being used? (ATA, SATA, FC, solid state?)
- What size are the disks? (73 GB, 146 GB, 300 GB, larger?)
- What is the rotational speed of the disks? (5,400, 10K, 15K RPM?)
- How much cache is available?
- What OS? (Windows, Solaris, VMS, Tru64, HP-UX, Linux, AIX?)
- Is it a database application? (SQL Server, Oracle, DB2, Sybase, etc.?)
- What is the stripe size used by the application?
- What is the stripe size used by the OS?
- What is the stripe size used by the volume manager?
- What is the chunk size used by the parity group?
- Can we match the stripe size to the chunk size?
- Which file system is being used? (raw, UFS, QFS, NTFS, GFS, VxFS, cluster?)
- How many drives per RAID group? (4, 8, 16?)
- What is the RAID type? (0, 1, 1+0, 0+1, 5, 5+0, 10+0)
- Software- or hardware-based RAID?
- How many spindles are available?
- How many HBAs in the server?
- How many ports on the storage array?
- How many switch hops?
- Director or edge switches? (140-port, 64-port, 32-port?)
- Is it a 1 Gb, 2 Gb, or 4 Gb fabric?
- What is the fabric design? (Core-Edge, Star, Pure Core, Meshed?)
- Who is the HBA vendor?
- What is the current queue depth being used?
- What is the best queue depth for the app?
- How many servers per storage port (fan-in ratio)?
- How many ISLs between switches? (Trunked?)
As you can see, there is a bit more to think about than just which RAID type to use.
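One of the more actionable items in the list above is matching the stripe size to the parity-group chunk size. As a minimal sketch of the arithmetic involved (the 8-drive group and 64 KB chunk size below are illustrative assumptions, not recommendations):

```python
# Hedged sketch: compute the full-stripe write size of a RAID-5 group so
# the volume manager or application stripe size can be matched to it.
# Drive count and chunk size are illustrative assumptions.

def full_stripe_bytes(drives_in_group: int, chunk_bytes: int) -> int:
    """RAID-5 dedicates one drive's worth of each stripe to parity,
    so only (drives - 1) chunks per stripe hold data."""
    data_drives = drives_in_group - 1
    return data_drives * chunk_bytes

CHUNK = 64 * 1024   # assumed array chunk (element) size: 64 KB
GROUP = 8           # assumed 7+1 RAID-5 group

stripe = full_stripe_bytes(GROUP, CHUNK)
print(f"Full-stripe write size: {stripe // 1024} KB")  # -> 448 KB
```

A host write sized and aligned to this full-stripe value can fill a whole parity stripe in one pass, avoiding the RAID-5 read-modify-write penalty on large sequential I/O.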
Using a volume manager or other type of virtualization on top of standard RAID-5 based LUNs (which is what you are doing) is one good way to improve I/O performance. One thing to consider is what happens when the application runs out of disk space and you need to add capacity. If you are striping across a few LUNs in, say, a Veritas disk group, and you need to expand the volume, application performance may be affected during the expand operation.
If you are using LUN concatenation, you would simply add a new LUN to the group and grow the volume to include it, which has less impact on the application. Provided the database is designed to use all LUNs in parallel (something like one file system per LUN), the concatenation method trades some performance for ease of management. When using RAID-0 striping on top of RAID-5 LUNs, make sure your LUNs are all provisioned from different parity groups.
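The performance difference between the two layouts comes down to how a logical block address maps to a LUN. A minimal sketch of the two mappings (the stripe unit and LUN sizes are illustrative assumptions):

```python
# Hedged sketch of how a volume manager maps a logical block to a LUN
# under RAID-0 striping versus concatenation. Stripe unit and LUN sizes
# are illustrative assumptions.

STRIPE_UNIT = 128  # assumed blocks per stripe column

def striped_lun(block: int, num_luns: int) -> int:
    """Striping: consecutive stripe units rotate across all LUNs."""
    return (block // STRIPE_UNIT) % num_luns

def concatenated_lun(block: int, lun_sizes: list) -> int:
    """Concatenation: LUNs fill in order; a block lands on the first
    LUN whose cumulative size covers its address."""
    for i, size in enumerate(lun_sizes):
        if block < size:
            return i
        block -= size
    raise ValueError("block beyond end of volume")

# Four consecutive 128-block I/Os:
blocks = [0, 128, 256, 384]
print([striped_lun(b, 4) for b in blocks])                    # [0, 1, 2, 3]
print([concatenated_lun(b, [100_000] * 4) for b in blocks])   # [0, 0, 0, 0]
```

Under striping the four I/Os land on four different LUNs (and so on four different sets of spindles); under concatenation they all queue behind the same LUN until it fills, which is why a concatenated layout needs the application itself to spread work across LUNs.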
If you are using meta-volumes in the array, you could end up striping within the same parity group on the array, which would hurt performance and defeat the purpose of striping in the first place.
Database administrators always like to go with larger numbers of smaller disks for performance. The problem is that you can't buy 36 GB drives anymore, and 146 GB will soon be the smallest drive available. Many smaller drives are good for random I/O, but fewer larger disks can work fine for sequential transaction log I/O.
A good methodology is to use partitioning within the RAID groups on the array to minimize seek time from the outermost disk cylinder to the end of the partition, which in effect creates your smaller drives for you. You could then use the outermost partitions on multiple RAID-5 groups to create your RAID-0 stripe within the volume manager. When assigning LUNs, use as many storage ports as possible, and spread the load across all of the HBAs within the server. This will not only increase the available queues for the operating system, but also maximize available bandwidth to the disks.
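A back-of-the-envelope sketch of why confining I/O to the outer cylinders (sometimes called "short-stroking") helps: worst-case head travel shrinks in proportion to the fraction of cylinders in use, and the outer zone also holds more sectors per track. The cylinder count below is an illustrative assumption:

```python
# Hedged sketch: restricting I/O to a partition covering a fraction of
# the disk's cylinders shrinks the worst-case seek span proportionally.
# The cylinder count is an illustrative assumption.

TOTAL_CYLINDERS = 65_536  # assumed cylinders on the drive

def max_seek_span(partition_fraction: float) -> int:
    """Worst-case cylinder-to-cylinder travel inside the partition."""
    return int(TOTAL_CYLINDERS * partition_fraction)

for frac in (1.0, 0.25):
    print(f"outer {frac:.0%} of disk -> max seek span "
          f"{max_seek_span(frac):,} cylinders")
```

Actual seek time is not perfectly linear in cylinder distance, but the proportional reduction in head travel is the intuition behind carving LUNs from the outer partitions of each RAID group.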
As far as queue depth is concerned, most HBA drivers default to 32 per LUN and 256 per port. Storage array ports typically support 256 or 512 queues per physical port (some support more, some less; check with your vendor). The trick is to use as many queues as you can without running into "queue full" conditions. You can change a driver's queue settings via the vendor's management application (such as Emulex's HBAnyware); try increasing the depth per LUN to 64 or 128, and see what happens.
Using more LUNs per volume manager disk group is better than fewer, since this increases the available queues (at least three should be used per group, for availability reasons).
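The queue-depth tuning above is simple multiplication: the host's outstanding commands per storage port are roughly the per-LUN depth times the number of LUNs behind that port, and that product should stay at or below the array port's queue limit. A minimal sketch, using the typical figures quoted above (the 8-LUN layout and 512-queue port limit are assumptions; check your vendor's documentation):

```python
# Hedged sketch of queue-depth budgeting: (per-LUN depth x LUNs behind a
# storage port) should not exceed the array port's queue limit, or the
# port will return "queue full". Figures are the typical defaults
# discussed above, not vendor specifications.

ARRAY_PORT_QUEUES = 512  # assumed array port limit (often 256 or 512)
LUNS_ON_PORT = 8         # assumed LUNs presented behind one port

def host_outstanding(lun_queue_depth: int, luns_on_port: int) -> int:
    """Worst-case commands the host can have in flight to one port."""
    return lun_queue_depth * luns_on_port

for depth in (32, 64, 128):
    total = host_outstanding(depth, LUNS_ON_PORT)
    verdict = "OK" if total <= ARRAY_PORT_QUEUES else "risk of QUEUE FULL"
    print(f"depth {depth:3d} x {LUNS_ON_PORT} LUNs = {total:4d} -> {verdict}")
```

With these assumed numbers, raising the per-LUN depth from 32 to 64 still fits within a 512-queue port, but 128 would oversubscribe it; this is the arithmetic to redo with your own LUN counts and vendor limits before changing driver settings.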
It is also advisable to keep random and sequential workloads on separate RAID groups, and it is best practice to keep database log volumes on different physical spindles than the database itself.
Designing storage for performance is a valuable discipline, and one that can be confusing to someone starting out. One way to gain insight into what the pros do is to visit the Storage Performance Council or Transaction Processing Performance Council Web sites and see how storage was provisioned for the benchmark tests they ran.
About the author: Christopher Poelker is a storage architect at Hitachi Data Systems, and SearchStorage.com's storage networking expert.