If you read our previous series on SAN topology and switches, you have probably turned your attention to SAN zoning. Zoning becomes quite important once your SAN has more than a couple dozen devices. Strangely enough, in the early days of SANs there was even some debate about whether zoning was important enough to be a Fibre Channel standard. Even now it is very much an evolving area in both standards and implementation. So let's take a closer look.
What is SAN zoning?
The basic premise of zoning is to control who can see what in a SAN. There are a number of approaches, which break down according to where the control sits: the server, the storage or the switch. I will also talk about initiators and targets. On any server -- even NT -- there are various mechanisms to control which devices an application can see and whether the application can talk to another device. At the lowest level, an HBA's firmware and/or driver has a masking capability that controls whether the server can see other devices. In addition, the operating system can be configured to control which devices it tries to mount as storage volumes. Finally, many people use additional layered software for volume management, clustering and file system sharing, which can also control application access.
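To illustrate the idea, here is a minimal sketch of host-side masking. The function and WWN values are hypothetical, not any vendor's HBA or OS interface; the point is simply that the host filters the devices it discovers against a configured allow-list before anything is mounted:

```python
# Hypothetical sketch of host-side masking: the HBA driver or OS
# filters discovered devices against a configured allow-list, so only
# approved devices are ever presented for mounting.

def mask_devices(discovered_wwns, allowed_wwns):
    """Return only the devices this host is configured to see."""
    allowed = set(allowed_wwns)
    return [wwn for wwn in discovered_wwns if wwn in allowed]

discovered = ["50:06:01:60:3b:20:19:0a",   # array port (allowed)
              "50:06:01:61:3b:20:19:0a",   # array port (not allowed)
              "21:00:00:e0:8b:05:05:04"]   # another server's HBA
allow_list = ["50:06:01:60:3b:20:19:0a"]

visible = mask_devices(discovered, allow_list)
print(visible)  # only the allow-listed array port remains
```

Real implementations do this in the driver or OS device layer, but the filtering logic is essentially the same set-membership test.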
For storage zoning -- ignoring JBODs and the earliest RAID subsystems -- most disk arrays offer a form of selective presentation, often called LUN masking. The array is configured with a list of which servers can access which LUNs on which ports, and it simply ignores or rejects access requests from devices that are not in those lists. In terms of switch zoning, most if not all Fibre Channel switches support some form of zoning to control which devices on which ports can access other devices or ports (more on this below). One other category that controls access is virtualization, but I will save that discussion for another day.
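The array-side check can be sketched as a lookup table. Again, the data structures and names here are hypothetical, not a real array's configuration interface; the array grants access only when the combination of array port, initiator WWN and LUN appears in its presentation table:

```python
# Hypothetical sketch of selective presentation (LUN masking) on a
# disk array: access is granted only if the (array port, initiator
# WWN, LUN) combination appears in the configured presentation table.

presentation = {
    # array port -> {initiator WWN -> set of LUNs it may access}
    "port0": {"21:00:00:e0:8b:05:05:04": {0, 1}},
}

def array_allows(port, initiator_wwn, lun):
    """True if the array is configured to present this LUN to this initiator."""
    return lun in presentation.get(port, {}).get(initiator_wwn, set())

ok = array_allows("port0", "21:00:00:e0:8b:05:05:04", 1)       # presented
bad_lun = array_allows("port0", "21:00:00:e0:8b:05:05:04", 2)  # LUN not in list
stranger = array_allows("port0", "21:00:00:e0:8b:ff:ff:ff", 0) # unknown initiator
print(ok, bad_lun, stranger)
```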
What type of SAN zoning should you use?
My simple advice is, broadly speaking, to use a little of each of these approaches. Control which devices/LUNs are mounted on the server using an operating system or software capability (i.e., do not use a mount-all approach). Use selective presentation on the storage array, and use zoning in the fabric. Why do I say this? Consider a network analogy: you do not want a rogue PC to get at the files on your corporate systems. To prevent that, you have access control lists on the files in the file system, permissions on the shares, and firewalls, security gateways, packet filtering and so on in the network. Each of these elements does a complementary and slightly different job in protecting your data.
How exactly does zoning work?
I have answered the question in its broadest sense; now to be a bit more technically precise. In very simple terms, when a node comes up and connects to a fabric, the first really useful thing it does is a fabric logon. This is how the device gets its 24-bit address, which will be used for routing in the fabric (SID and DID usually refer to the source and destination addresses of this form). The device already has its World Wide Name -- or several, as each port on a node or device has a unique port WWN, usually programmed in hardware. There is also a node WWN that identifies the node or device itself and should show up the same on each port. The next step occurs when the device logs on to the name server service in the SAN and registers itself. The name server builds up a database of all the devices in the fabric, mapping node and port WWNs to 24-bit addresses along with the capabilities of each device. This includes whether the device is an FCP device -- one that talks SCSI commands over Fibre Channel.
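To make the addressing concrete: the 24-bit fabric address is conventionally split into three 8-bit fields -- Domain (identifying the switch), Area and Port. A small sketch (the sample address value is made up for illustration):

```python
# Decompose a Fibre Channel 24-bit fabric address (assigned at fabric
# logon) into its conventional Domain / Area / Port fields, each 8
# bits wide.

def split_fc_address(addr):
    """addr is a 24-bit integer; returns (domain, area, port)."""
    domain = (addr >> 16) & 0xFF   # identifies the switch
    area   = (addr >> 8) & 0xFF
    port   = addr & 0xFF
    return domain, area, port

# Example: a made-up address 0x010200 -> domain 1, area 2, port 0
print(split_fc_address(0x010200))
```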
Finally, a server will ask the name server to send back a list of what other FCP devices it can see in the fabric. This is where zoning kicks in. The name server only returns a list of those FCP devices that are in the same zone (or a common zone). In other words, I only find out about the devices I am supposed to know about.
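The zone-filtered name server query described above can be sketched like this. It is a deliberately simplified model -- real fabrics configure zoning through zone sets, aliases and so on, none of which is modeled here, and the device names are invented -- but it captures the key behavior: a requester only learns about FCP devices that share a zone with it:

```python
# Simplified model of a zone-filtered name server query: a device
# asking for FCP devices gets back only those that share at least one
# zone with it (and that registered as FCP-capable).

zones = {
    "zone_db":   {"server_a", "array_1"},
    "zone_mail": {"server_b", "array_1"},
}
fcp_devices = {"server_a", "server_b", "array_1", "array_2"}

def name_server_query(requester):
    """Return the FCP devices visible to 'requester' under zoning."""
    visible = set()
    for members in zones.values():
        if requester in members:
            visible |= members & fcp_devices
    visible.discard(requester)   # don't report the requester to itself
    return sorted(visible)

print(name_server_query("server_a"))  # sees array_1, but not server_b or array_2
```

Note that array_2, which is in no zone, is invisible to everyone -- which is exactly why a misconfigured zone set so often shows up as a device "missing" from the fabric.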
The server, therefore, has a list of the 24-bit addresses of all the devices it is supposed to be able to see. It will then typically do a port logon to each one in turn to try to find out what sort of FCP/SCSI device it is. This is similar to parallel SCSI, where the SCSI controller scans the bus and queries the properties of each device it can see.
That, in a nutshell, is zoning.
About the author: Simon Gordon is a senior solution architect for McDATA, based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design, and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.