This article can also be found in the Premium Editorial Download "Storage magazine: .NET server storage: Friendly or not?"
The second, and most commonly used, technique is setting up a zone based on the World Wide Port Name (WWPN) of the devices. The WWPN uniquely identifies every FC port in a SAN, so a device specified in a zone by WWPN remains a member of that zone even if the topology and configuration of the network change, or if the device is accidentally unplugged and moved to another port on the network.
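The behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's implementation; the zone name, port numbers and WWPN are made up. The point it demonstrates is that membership follows the WWPN, not the switch port, so recabling doesn't change who is in the zone.

```python
class Fabric:
    """Toy model of WWPN-based (soft) zoning in an FC fabric."""

    def __init__(self):
        self.port_to_wwpn = {}   # switch port -> WWPN currently plugged in
        self.zones = {}          # zone name -> set of member WWPNs

    def plug(self, port, wwpn):
        self.port_to_wwpn[port] = wwpn

    def unplug(self, port):
        del self.port_to_wwpn[port]

    def create_zone(self, name, wwpns):
        self.zones[name] = set(wwpns)

    def in_zone(self, zone, port):
        """A port is in the zone if the device plugged into it is a member."""
        return self.port_to_wwpn.get(port) in self.zones[zone]


fabric = Fabric()
fabric.create_zone("db_zone", ["10:00:00:05:1e:aa:bb:cc"])  # illustrative WWPN
fabric.plug(1, "10:00:00:05:1e:aa:bb:cc")
assert fabric.in_zone("db_zone", 1)

# Move the HBA to a different switch port: membership follows the WWPN.
fabric.unplug(1)
fabric.plug(7, "10:00:00:05:1e:aa:bb:cc")
assert fabric.in_zone("db_zone", 7)
```

With port-based zoning, by contrast, the move from port 1 to port 7 would have silently dropped the device out of the zone.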
When zoning and LUN masking are in use, managing network changes becomes critical, because zones and access lists must be kept up to date. For example, if you replace or add an HBA in a server, the WWN of that HBA must be added to every zone and storage LUN access control list before that HBA can see its storage. This is a significant amount of information that administrators must keep in sync whenever components are added, replaced or removed.
Another issue to consider is merging different fabrics. Because merging separate zoning definitions is complex, a condition called a segmented fabric can occur: a fabric that won't merge because of conflicts between zone definitions. This doesn't happen often, but when it does, administrators must carefully check which definitions are correct, remove erroneous information from the zone sets, or ensure that no zones are defined on one of the fabrics before the merge.
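The conflict check behind a segmented fabric can be sketched as follows. This is a simplified illustration under one common rule of thumb, not any switch vendor's actual merge logic: a zone defined in both fabrics with different members is a conflict, and the merge is refused. Fabric and WWN names are made up.

```python
def merge_zonesets(a, b):
    """Merge two fabrics' zone sets (zone name -> set of member WWNs).

    A zone defined in both fabrics with different members is a
    conflict: rather than guess, refuse the merge (segmented fabric).
    """
    conflicts = sorted(name for name in a.keys() & b.keys() if a[name] != b[name])
    if conflicts:
        raise ValueError(f"segmented fabric: conflicting zones {conflicts}")
    return {**a, **b}


fabric_a = {"zone_nt": {"wwn_1"}, "zone_aix": {"wwn_2"}}
fabric_b = {"zone_nt": {"wwn_1"}, "zone_tape": {"wwn_3"}}

# Identical definitions of zone_nt, so this merge succeeds.
merged = merge_zonesets(fabric_a, fabric_b)

# A fabric that redefines zone_nt with different members would raise
# ValueError instead of merging:
#   merge_zonesets(fabric_a, {"zone_nt": {"wwn_9"}})
```

This is why one of the administrator's remedies above is to empty the zone set on one fabric before the merge: a fabric with no zone definitions can't conflict with anything.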
Even so, zoning and LUN masking can be powerful tools for managing change. Implemented properly, they prevent inadvertent changes from propagating through a network. You can even create a zone for new hardware, then configure, test and prequalify the equipment there before merging that zone with the existing configuration and making it live.
Mark Antonaccio, a SAN designer working for an East Coast reseller, says, "I like to use port-level zoning, combined with a dual redundant fabric. The ability to move from one port to another using World Wide Name [WWN] zoning isn't as important in a resilient design." Antonaccio relies extensively on Brocade's Fabric Manager to handle the storage networks he works on.
Whitlow uses the zoning built into his Compaq switches (OEM'd from Brocade) as well. "I create a zone for my AIX boxes and a separate one for my NT boxes to prevent NT from clobbering drives," he says.
Zoning is limited, however, in its ability to manage at the LUN level. Most switch zones can't restrict access to individual LUNs on a large storage array, although some vendors are starting to add this capability to their hardware. As a result, arrays available in a SAN still end up inefficiently partitioned, particularly large ones. Nor can zoning separate two hosts that need access to the same array port but not the same logical unit. This leads to the use of LUN masking, either at the storage device or at the HBA.
Controlling access at the array
Storage-based LUN masking allows finer-grained control over storage allocation, down to the individual LUN. Using controls built into the storage array, it can prevent hosts from ever accessing the wrong storage, and thereby guard against accidental data corruption.
At the storage array, hardware or software checks that the WWPN of the requesting port appears in the access list the administrator has set up for that LUN. When a command arrives at the device, it's rejected unless the inquiring device matches an entry in the list.
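The check described above amounts to a lookup in an administrator-maintained table. The sketch below is illustrative only (the WWPNs and LUN layout are invented, and a real array does this in firmware), but it captures the logic: map each LUN to the set of WWPNs allowed to address it, and reject everything else.

```python
# LUN -> set of WWPNs allowed to address it (illustrative values).
ACCESS_LIST = {
    0: {"10:00:00:00:c9:11:11:11"},                             # one host only
    1: {"10:00:00:00:c9:11:11:11", "10:00:00:00:c9:22:22:22"},  # shared LUN
}

def handle_command(wwpn, lun):
    """Reject any command whose source WWPN isn't on the LUN's access list."""
    if wwpn in ACCESS_LIST.get(lun, set()):
        return "OK"
    return "REJECTED"


assert handle_command("10:00:00:00:c9:22:22:22", 1) == "OK"
assert handle_command("10:00:00:00:c9:22:22:22", 0) == "REJECTED"
```

Because the check runs at the array, it holds regardless of how the fabric is zoned or how the hosts are configured.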
Most of the high-end storage arrays offer LUN masking. Whitlow uses storage-based LUN masking on his Compaq StorageWorks arrays to control allocation of storage to different hosts. He says, "We use LUN masking to implicitly say a LUN can see a specific connection, specified by World Wide Name." His ESA10000 and EMA10000 boxes provide control over which hosts can see a specific storage LUN. Other high-end storage arrays - such as those from EMC and Hitachi - also offer similar control over LUN access.
Storage array-based LUN masking is controlled from either a GUI or a command line. Generally, LUNs are configured, exported and made available to the specific WWNs representing the HBAs in a host. Other hosts that try to access those volumes can't get to the data. In many cases, LUN masking also allows different LUN numbers to be presented to different hosts - for example, presenting a volume as LUN 0 so a host can boot from the SAN.
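Per-host LUN presentation can be pictured as a two-level lookup: the host's WWN selects a presentation map, and the presented LUN number selects the internal volume. The sketch below is a made-up illustration (the WWN keys and volume names are not from any real array); it shows how the same internal volume can appear as LUN 0 to a boot host and as LUN 1 to another host.

```python
# Host WWN -> {presented LUN number: internal array volume}. All names
# are invented for illustration.
PRESENTATION = {
    "wwn_boot_host": {0: "internal_vol_17"},                   # boots from LUN 0
    "wwn_db_host":   {0: "internal_vol_23", 1: "internal_vol_17"},
}

def resolve(wwpn, lun):
    """Map (host WWN, presented LUN number) to the internal volume,
    or None if this host has no such LUN presented to it."""
    return PRESENTATION.get(wwpn, {}).get(lun)


# The same internal volume under two different LUN numbers:
assert resolve("wwn_boot_host", 0) == "internal_vol_17"
assert resolve("wwn_db_host", 1) == "internal_vol_17"
# An unknown host sees nothing at all:
assert resolve("wwn_rogue_host", 0) is None
```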
The main limitation of storage-based LUN masking is that it's available only on high-end arrays, which typically cost a minimum of several hundred thousand dollars, and not on low-end JBOD systems. These solutions also tend to be difficult to manage - frequently requiring a visit from a local service representative to change settings - which is hardly ideal for easy allocation and management of storage.
Access management at the host
HBA-based LUN masking is another option for controlling access to storage in the SAN. By using the HBA's persistent binding or LUN masking features, you can bind a specific LUN to a host and mask off other volumes in the SAN, preventing access to volumes that belong to other hosts. As with switch-based zoning and storage-based LUN masking, this helps with the allocation of storage in the SAN, particularly where JBODs or other storage without built-in LUN masking is used. The HBA's driver software restricts which volumes are exposed (bound) to the operating system: only LUNs that have been unmasked and bound to a host are reported to the OS, so other LUNs can never be claimed or written to.
Through software or configuration files, specific WWN and LUN combinations are listed as the storage the host may access. These settings usually take effect on reboot, although some cards allow LUN masking settings to be changed in real time.
The main limitation of HBA-based LUN masking is that it requires the participation of every server on the SAN, so a rogue or incorrectly configured host can easily corrupt data or cause other problems. As the number of hosts attached to the SAN grows, managing them without some central control becomes increasingly difficult.
With all of these techniques required to keep a SAN up and running, you'd think users would jump at any solution that simplifies management of the many components in their networks. But once users learn these techniques, they like them. Quad/Graphics' Whitlow says, "To tell you the truth, the switch's built-in zoning is great, and command line-based control of the storage works fine. Until a tool comes along that is both affordable and performs as well as the built-in tools, we're not looking at other options." He says he's looked at a couple of management solutions to help out with SAN management, but so far they're too expensive, and "We're not willing to bet our business on it."
In fact, resellers and vendors report that most storage is still bought and sold today based on the features of the hardware. Software and management are usually thought of after the fact, if at all. "We find that users only think about management as an afterthought," says Antonaccio.
This was first published in August 2002