By Marc Staimer
Storage vendors like IBM and EMC incorporate RAID striping on the back-end of their respective storage arrays. As a storage admin, you hand out the LUNs as needed to the SAN-attached servers.
On the other end, my distributed counterparts feel that they should also stripe the LUNs at the OS level because they have always done this.
What is the relative performance impact with this second layer of RAID? With the cache front ends of the storage devices being so large, will this affect me near-term or long-term as the storage workload increases?
It depends. Server-side striping can improve performance over and above the back-end striping of the array, but it can also decrease performance. There are quite a few issues. How oversubscribed is each target port on the array? Oversubscription is the ratio of servers accessing the storage controller on a specific target port (4:1, 7:1, 12:1). Are the server-based striped LUNs spread across multiple ports? Multiple controllers? Multiple arrays? Are the applications and servers really striping, or load balancing? Striped performance is always brought down to the lowest common denominator: if the stripe spans multiple ports on the array and one of those ports is extremely busy, the whole stripe will be limited by the performance of that busy port.
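The lowest-common-denominator effect can be sketched with a toy calculation. This is a hypothetical illustration, not a model of any specific array; the port throughput figures are made up.

```python
# Hypothetical sketch: aggregate throughput of a server-side stripe
# spread across several array target ports. Throughput figures are
# illustrative only.
def striped_throughput(port_throughputs_mbps):
    """A stripe issues I/O evenly across its members, so the whole
    stripe runs at the pace of its slowest member. The aggregate is
    therefore slowest member * member count, not the sum of all
    members."""
    slowest = min(port_throughputs_mbps)
    return slowest * len(port_throughputs_mbps)

# Four ports, one heavily oversubscribed and delivering only 40 MB/s:
print(striped_throughput([200, 200, 200, 40]))  # 160, not 640
```

One busy port drags the whole stripe down to four times its pace, which is why spreading a stripe across ports only helps when all of those ports have comparable headroom.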
Cache will have zero impact on throughput performance. In fact, if you turn off the back-end disk write cache, you can even improve throughput. Cache is primarily for transactions, and it only improves performance when you have a high hit ratio, which is a rarity when multiple servers and applications share the array. The best way to use cache is to lock down part of it for database indexes and hot files; this has real value for improving DBMS performance.
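Why a low hit ratio makes the cache nearly irrelevant can be seen from the standard average-latency formula. The cache and disk latency numbers below are illustrative assumptions, not figures from any particular array.

```python
# Hypothetical sketch: average read latency as a function of cache hit
# ratio. The 0.5 ms cache and 8 ms disk latencies are assumed values
# for illustration.
def avg_latency_ms(hit_ratio, cache_ms=0.5, disk_ms=8.0):
    """Effective latency = hit_ratio * cache + (1 - hit_ratio) * disk.
    With a low hit ratio -- typical when many servers and applications
    share the cache -- the average stays close to raw disk latency."""
    return hit_ratio * cache_ms + (1 - hit_ratio) * disk_ms

print(avg_latency_ms(0.9))  # 1.25 ms at a 90% hit ratio
print(avg_latency_ms(0.1))  # 7.25 ms at a 10% hit ratio
```

At a 10% hit ratio the cache shaves less than a millisecond off raw disk latency, which is why locking part of the cache for known-hot data (indexes, hot files) is the one use that reliably pays off.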
In general, the cache has nominal value for server-based striping.
As I stated in the beginning, there is no absolute answer to your question. It depends.
20 Oct 2005