These questions are specific to the Hitachi 9960 disk array.
1. What is the emulation that allows you to create a maximum-sized LUN? We have 18GB disks in the array. One RAID group uses 4 disks, and we are using RAID1, so out of the 4 disks, 2 are used for mirroring. Out of the 2 available disks (18 * 2 = 36GB), we want to create one LUN that occupies the entire 36GB (spanning both disks). Our basic aim is not to share a disk among different LUNs; we want to be sure that one disk belongs entirely to one LUN, basically for performance reasons. With OPEN-E emulation, we are able to create a LUN of only 13GB, thus wasting 23GB of disk space.
2. What is the stripe size used in striped LUNs? Can this be tuned in some way, or is it fixed?
3. Is there a way to statically partition the cache in the 9960 and allocate it to a set of LUNs? Suppose I have a 2GB cache; can I dedicate all 2GB to a specific LUN?
4. Is there some way of creating RAID0 volumes in the HDS 9960?
1. The size of the volume you can create depends on the operating system you are using and the size of the drives in the array group. Since both EMC and HDS arrays also support mainframe-type volume sizes, we use the "open" volume sizes to create LUNs for "open" systems. HDS calls these "open" volumes "LDEVs," and EMC calls them "meta volumes."
On a 9960, you can present a LUN to an operating system anywhere from 36MB to 1.7TB in size. You accomplish this by virtualizing the LDEVs in the array: you can string together up to 36 LDEVs using a "LUSE" (Logical Unit Size Expansion) volume, or you can slice an LDEV into smaller pieces that act just like a "partition within a partition," down to 36MB, using a "CVS" (Custom Volume Size) volume.
The reason you only have 2 drives' worth of capacity available out of 4 in an array group when using RAID1 is that HDS actually does RAID 0+1 (often called RAID10) when configuring RAID1. This is done for two reasons. First, performance is better on RAID10. Second, availability: you can actually lose 2 drives in an HDS RAID1 array group without data loss, as long as the drives are in separate mirrors.
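To make the mirror-then-stripe point concrete, here is a rough sketch (not HDS firmware; the disk numbering, pairing, and chunk layout are illustrative assumptions) of how a logical stripe chunk maps onto a 4-disk RAID10 group, and why losing one drive from each mirror pair costs no data while losing both drives of one pair would:

```python
# Illustrative sketch of a 4-disk mirror-then-stripe (RAID10) layout.
# Disk numbering and pairing are assumptions, not the 9960's actual map.

STRIPE_PAIRS = 2  # 4 disks = 2 mirrored pairs

def disks_for_chunk(chunk_index):
    """Return the (primary, mirror) disks holding a stripe chunk."""
    pair = chunk_index % STRIPE_PAIRS  # data is striped across pairs
    primary = pair * 2                 # pairs at disks (0,1) and (2,3)
    mirror = primary + 1               # each chunk lives on both disks
    return primary, mirror

def survives(failed_disks):
    """True if every mirror pair still has at least one working disk."""
    return all(
        p not in failed_disks or p + 1 not in failed_disks
        for p in (0, 2)
    )

# Losing disks 1 and 2 (one from each pair) loses no data;
# losing disks 0 and 1 (both copies of one pair) does.
print(survives({1, 2}), survives({0, 1}))
```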
Now let's take your situation. You are using 18GB drives as RAID1, which leaves 36GB usable out of a 4-disk array group (really configured as RAID10). Under NT and Solaris, using 18GB drives and RAID1, the largest LDEV size is an OPEN-E, which is approximately 14GB. So you can create 2 OPEN-E LDEVs from the array group, plus one CVS volume of 7GB. You can then use LUSE to virtually tie the OPEN-E LDEVs together, giving the single server 2 LUNs of 28GB and 7GB.
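The carve-up above can be sketched as simple arithmetic. This is not an HDS tool, just an illustration using the answer's approximate figures; the real OPEN-E LDEV is slightly under 14GB, and formatting overhead is why the actual leftover CVS volume comes out near the 7GB quoted above rather than a clean 8GB:

```python
# Illustrative capacity arithmetic for the questioner's array group.
# All sizes are approximate GB figures taken from the answer above.

DISK_GB = 18
DISKS_IN_GROUP = 4
MIRROR_COPIES = 2   # RAID1/10: half the disks hold mirror copies
OPEN_E_GB = 14      # approximate OPEN-E LDEV size (assumed)

# Usable capacity after mirroring: 18 * 4 / 2 = 36GB.
usable_gb = DISK_GB * DISKS_IN_GROUP // MIRROR_COPIES

# Fit as many full OPEN-E LDEVs as possible; the remainder
# becomes one smaller CVS volume.
n_open_e = usable_gb // OPEN_E_GB            # 2 LDEVs
cvs_gb = usable_gb - n_open_e * OPEN_E_GB    # leftover for CVS

# LUSE ties the OPEN-E LDEVs together into one larger LUN.
luse_gb = n_open_e * OPEN_E_GB               # 28GB LUN

print(usable_gb, n_open_e, luse_gb, cvs_gb)
```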
Just as an aside, you really can share an array group among a number of servers. The cache algorithms make it almost transparent to the servers, and if you use "cruise control," the array will automatically eliminate hot spots and tune itself for performance.
2. If I remember correctly, it's 64K, fixed, but tuned in cache. On a 9960 with cruise control you don't really need to worry about tuning at that level; let the box do it and free up your time! Cruise control will even dynamically move your data on the fly between RAID types in the box!
3. Yes. The software is called "Prioritized Port Control," and it enables cache preference.
4. RAID0 striping is automatically implemented when you create a RAID1 volume: the drives are mirrored first, then striped. RAID0 by itself does not provide data protection, since no redundancy (mirroring or parity) is generated.
This was first published in February 2002