In an effort to put storage area network (SAN) zoning in a common light, I've extracted a definition of zoning from the city planning offices of New York City. Because of its population density, NYC's zoning office offers sophisticated methods for managing space on its land mass. And in principle, the concept of zoning land is much the same as zoning in a SAN, according to the following definition: "Through zoning, a city regulates building size, population density and the way the land is used and accessed. Zoning recognizes the changing demographic and economic conditions of the city and is a key tool for carrying out its planning policies."
Although similar in principle and in practice, zoning in the SAN is different from city planning. SAN zoning is the act of partitioning Fibre Channel (FC) devices into management realms for the purpose of secured communication between an initiator and a target on a public SAN. Through zoning, each FC device becomes part of a community of devices that only respond to each other and the management interfaces of the fabric.
Zoning keeps initiators in your SAN honest by allowing access only to authorized targets. Although Windows-based servers are likely to put their stamp on every target they see, you still wouldn't want to implement a homogeneous SAN populated with Unix-based servers without zoning. Clustering requires other considerations, because multiple initiators (hosts) need access to the same targets--or LUNs--in case of failover.
LUN masking is a form of zoning usually implemented in an enterprise storage array, but it can also be implemented at the host level or in a third-party virtualization product. Your storage array will likely have more than one disk drive sitting behind one or more storage ports. As a result, you can provision disk drives or LUNs to more than one host through a single storage port on an array with multiLUN support. Therefore, controls must be in place to govern access to the LUNs from multiple host sources. LUN masking provides this control by hiding LUNs from the host connections not meant to access the specified LUN. Depending on the capabilities of your storage array--more specifically, if the disk drives are native FC devices and thus have WWNs--you may be able to simply use the software zoning features in your switch to provide this service.
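To make the mechanism concrete, here is a minimal sketch of a masking table a storage port might consult: initiator WWNs map to the set of LUN numbers they are allowed to see, and any host not in the table sees nothing. All WWNs and LUN numbers here are hypothetical, and a real array implements this in firmware, not Python.

```python
# Sketch of LUN masking at a single array port with multiLUN support.
# The masking table maps initiator WWNs to the LUNs they may access;
# every WWN and LUN number below is made up for illustration.

def visible_luns(masking_table, initiator_wwn):
    """Return the LUNs exposed to an initiator; unknown hosts see nothing."""
    return masking_table.get(initiator_wwn, frozenset())

masking_table = {
    "10:00:00:00:c9:2b:11:01": {0, 1},   # hypothetical Unix host
    "10:00:00:00:c9:2b:11:02": {2},      # hypothetical Windows host
}

print(visible_luns(masking_table, "10:00:00:00:c9:2b:11:01"))  # {0, 1}
print(visible_luns(masking_table, "10:00:00:00:c9:2b:11:99"))  # frozenset()
```

The design point is simply that the filter keys on the initiator's identity at the storage port, which is why every host sharing that port can be given a different view of the same set of LUNs.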
With software zoning, zone member WWNs are usually aliased to an easily referenced name. This way, the SAN admin only needs to recall this name when referencing the zone member, instead of having to recall the member's 64-bit WWN. Also, ensure your naming convention is adaptable across applications and operating systems. This helps trainees move between SAN applications and processes with a better understanding of host-to-device relations.
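The aliasing idea can be sketched as a simple lookup table. The naming convention shown (host, application, role) and all WWNs are hypothetical; the point is only that the convention carries meaning across platforms, so anyone reading the alias can infer the host-to-device relation.

```python
# Sketch of WWN aliasing with a host_app_role naming convention.
# All alias names and WWN values are hypothetical.

aliases = {
    "ora1_db_hba0": "20:00:00:25:b5:aa:00:01",  # host ora1, database app, first HBA
    "ora1_db_tgt0": "50:06:01:60:3c:e0:12:ab",  # storage target for the same instance
}

def wwn_for(alias):
    """Resolve an easily remembered alias to its 64-bit WWN."""
    return aliases[alias]
```

In practice the alias database lives in the switch or fabric management software; the sketch only illustrates why a consistent convention makes zone definitions self-documenting.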
In addition to keeping sanity in your SAN, partitioning servers (initiators) with their related storage targets enhances discovery during bootstrap by limiting protocol communication to only the devices in the zone, instead of all the devices in the SAN. Security is another benefit of zoning, as will be detailed later. Establishing a defensive perimeter around communicating entities enhances data integrity, thereby providing your applications with a higher service level.
There are two types of zoning available in your SAN--software and hardware. Software zoning is the practice of grouping and identifying end nodes in the nameserver by entering their WWNs in its database. Only these nodes will be permitted to gain access to the group or zone and initiate communication with other zone members. This gives the admin greater flexibility in the movement of end nodes between ports and/or switches in the fabric, because the nameserver database is synchronized and distributed across the fabric to each switch. Therefore, no matter what port a host bus adapter (HBA) is plugged into in the fabric, it can query the nameserver to inquire about the devices in its community.
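The membership check behind software zoning can be sketched as follows: the fabric holds zone definitions as sets of member WWNs, and two devices may talk only if both WWNs share at least one zone. Zone names and WWNs are hypothetical; real fabrics evaluate this in the switch, not in application code.

```python
# Sketch of software (WWN-based) zoning. Because membership follows the
# WWN rather than the port, the check is the same no matter where in the
# fabric the HBA is plugged in. All zone names and WWNs are made up.

zones = {
    "zone_ora1": {"20:00:00:25:b5:aa:00:01", "50:06:01:60:3c:e0:12:ab"},
    "zone_ora2": {"20:00:00:25:b5:aa:00:02", "50:06:01:60:3c:e0:12:ab"},
}

def can_communicate(wwn_a, wwn_b):
    """Two devices may communicate only if some zone contains both WWNs."""
    return any(wwn_a in members and wwn_b in members
               for members in zones.values())
```

Note that the shared target WWN appears in both zones, which is how a single storage port can serve several zoned communities at once.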
But, using this method, someone could ascertain the WWN of a legitimate host in the nameserver database and masquerade as that host from a port other than the one the compromised HBA is connected to, gaining access to the legitimate host's community, as well as to other communities if a particular device is shared across zones.
With hardware zoning, nodes are identified by their domain/port number pair, and access to the SAN is based only on location. To circumvent hardware zoning, an intruder needs physical access to the port, which implies a certain amount of inherent security. However, with this increase in security comes more management overhead, because changes to the nameserver database are necessary to mark the change in the port and possibly domain numbers. Yet this process is so simple that there isn't any real reason not to strive toward hardware zoning for its inherent security benefits, especially because port relocations are less likely to occur in a SAN than in a LAN environment, where moves are common practice.
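The contrast with software zoning is that membership keys on the (domain, port) pair where a device is attached, not on its WWN, so a device moved to another port silently drops out of its zone until the definition is updated. The sketch below uses hypothetical domain and port numbers.

```python
# Sketch of hardware (port-based) zoning: a zone is a set of
# (switch domain, port) pairs, and identity plays no part in the check.
# Domain and port values below are hypothetical.

hard_zones = {
    "zone_ora1": {(1, 4), (1, 12)},   # (domain, port) of initiator and target
}

def port_in_zone(zone, domain, port):
    """A device is in the zone only if it enters the fabric at a listed port."""
    return (domain, port) in hard_zones.get(zone, set())
```

Spoofing a WWN buys an attacker nothing here; the trade-off, as noted above, is that every cable move requires a zone update.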
Although the domain/port pair can identify the incoming node, the WWN is still a referenced characteristic of the end node. Surely an API developer can use this information to update the record in the nameserver database to reflect the location change of the end node, and notify the administrative community through a raised dialogue box in their network monitoring application.
Zoning is essential to ensuring that only authorized, predetermined nodes gain access to the management realm that makes up their community. Nodes are authorized by being entered into a DNS-like database, using their WWN or domain/port pair to identify themselves to the nameserver for acceptance into the fabric. Once accepted, the node becomes part of the larger community and communication can begin. However, because zoning is for the most part implemented in software, treat its circumvention as inevitable in your overall security practices, and don't rely completely on zoning for security in your SAN.
In terms of location in the security layers, zoning fits above physical access and between any access lists situated at the port level and/or at the service layer of your fabric. Zoning isn't an all-encompassing security measure; however, the residual effect of partitioning nodes on your SAN is one of heightened security and protection from unauthorized nodes.
Zones should be as small as possible, including only the hosts and storage that are related by an instance of an application. For example, if you're running three Oracle instances on an enterprise-class server, you should configure three zones that identify the initiator(s) and targets associated with each Oracle instance. In this way, SAN events occurring in one zone--and ultimately one Oracle instance--won't propagate to the other Oracle instances. State change notifications (SCNs) are sent to each member of a zone that has registered to receive such notifications. And depending on the node's configuration and error recovery mechanics, the node will respond to the SCN in a manner consistent with these variables, possibly resetting the end device and causing alarm to the upper-layer application.
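The per-instance layout above can be sketched to show why small zones confine SCN fan-out: a notification reaches only the WWNs sharing a zone with the changed member. Instance names, zone names, and WWNs are all hypothetical.

```python
# Sketch: one zone per application instance, built from its initiator and
# target WWNs, plus the SCN fan-out that results. All names are made up.

def zones_for_instances(instances):
    """Build one zone per instance from its (initiators, targets) WWN lists."""
    return {
        f"zone_{name}": set(initiators) | set(targets)
        for name, (initiators, targets) in instances.items()
    }

instances = {
    "ora1": (["20:00:00:25:b5:aa:00:01"], ["50:06:01:60:3c:e0:12:ab"]),
    "ora2": (["20:00:00:25:b5:aa:00:02"], ["50:06:01:60:3c:e0:12:ac"]),
    "ora3": (["20:00:00:25:b5:aa:00:03"], ["50:06:01:60:3c:e0:12:ad"]),
}

def scn_recipients(zone_db, member_wwn):
    """An SCN fans out only to WWNs sharing a zone with the changed member."""
    affected = set()
    for members in zone_db.values():
        if member_wwn in members:
            affected |= members
    return affected - {member_wwn}
```

With three separate zones, a state change on ora1's HBA disturbs only ora1's target; had all three instances shared one zone, every member would have to process the notification.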
In this same configuration, each Oracle instance will need to be protected by backup during the course of the day. However, each instance may have different availability requirements, requiring sliding schedules in your backup routine. If you change zoning configurations on the fly to accommodate backup strategies, each zone member must process the SCN and log into the nameserver to find out what changed. Therefore, to ensure that unrelated SAN devices aren't affected by the management of the others, segregate zone members based on their application relations and your administrative processes.
Application correlation in your network management applications is another beneficiary of good zone planning. Wouldn't it be nice to have an association between an intermittent SAN error, an application write error and a pop-up dialogue box in your management application? If you plan your zones with an eye toward the management processes (well-known processes) of your SAN as well as your organization, it's possible.