
Zoning a large-scale SAN

I work in an environment where storage devices greatly outnumber hosts: 80+ Sharks, 80+ SAN Data Gateways for SCSI-attached tape drives, and approximately 50 RS/6000 hosts with two HBAs apiece. Since each host is required to be able to see each Shark (whether it uses it or not), and each host must have four paths through the fabric(s) to each Shark, you can see how this has created an unwieldy zone set -- currently 9,000+ port assignments and growing.

We currently use what I call a "host-centric" zoning methodology, where each zone contains a host HBA port and all the storage ports attached to it. Would there be a problem with switching to a "storage device-centric" methodology, where each zone would contain the domain/ports of the storage device and the domain/ports of all hosts that need to see it?

Sounds like a pretty large SAN. 80+ Sharks? My large-scale SAN customers use some pretty ingenious methods of management when it comes to zoning. One method is to use port zoning, instead of WWN zoning. The port information for every server and every storage port is kept in an Oracle database. When it comes time to change zone information for things like modifying who has access to shared tape libraries at backup time, they kick off a script that queries the zone data in the database.

The script then uses that information to build the commands needed to rezone all the affected switches and zaps the switches with the new zone information. Since zone changes do not require a switch reboot, everything just happens automatically. This prevents other backup software from accessing the shared libraries by mistake during the preferred backup jobs. (In this environment, NT servers use NT backup software and Unix servers use Unix backup software.)

You can also build something similar to this using a Perl script, although I thought the Oracle method was kind of cool.
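To make the approach concrete, here is a minimal Python sketch of that database-driven rezoning idea. Everything specific in it is an assumption: the row layout stands in for the results of an Oracle query, and the `zonecreate`/`cfgenable` command strings follow Brocade-style syntax, which may not match your switch vendor. A real script would pull rows from the database and push the commands to each switch over its management interface rather than printing them.

```python
# Sketch of a database-driven rezoning script (hypothetical schema and
# switch syntax). ZONE_ROWS stands in for rows returned by a query
# against the zone database: (zone_name, switch_address, member_ports).
ZONE_ROWS = [
    ("backup_tape1", "switch01", ["1,4", "1,5", "2,8"]),
    ("backup_tape2", "switch02", ["3,1", "3,2", "4,9"]),
]

def build_rezone_commands(rows):
    """Group zone definitions by switch and build the command batches."""
    commands = {}
    for zone, switch, ports in rows:
        batch = commands.setdefault(switch, [])
        # Brocade-style port zoning: zone members are "domain,port" pairs.
        batch.append(f'zonecreate "{zone}", "{"; ".join(ports)}"')
    for batch in commands.values():
        # Activating the new config does not require a switch reboot.
        batch.append("cfgenable backup_cfg")
    return commands

if __name__ == "__main__":
    for switch, cmds in build_rezone_commands(ZONE_ROWS).items():
        print(f"--- {switch} ---")
        for cmd in cmds:
            print(cmd)
```

In the real deployment described above, the interesting part is not the command building but the trigger: the backup scheduler kicks the script off, so the tape-library zones open and close around the backup window with no operator involved.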

Using a storage-centric rather than a host-centric zoning method requires that the storage subsystem be able to provide LUN-level access security (LUN masking). If the subsystem provides LUN security, then zoning is not even strictly needed! Once you plug a host into a fabric, it can only see the devices in the subsystem that have been allocated to that host's WWNs.

This is a more granular approach to SAN security than zoning alone. Used in conjunction with zoning, subsystem-based LUN security can add an extra layer of protection that prevents "accidents" from happening. By accidents, I mean a server getting access to a LUN that does not belong to it. This is especially important when Windows operating systems are in the SAN. Windows wants to own everything it sees and can make your Unix admins crazy by writing a disk signature that wipes out a Unix volume.
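The masking logic itself is simple, which is why it fits in subsystem firmware. The sketch below is purely illustrative (the WWNs and LUN names are made up, and a real array such as the Shark implements this in its management software, not in host code): each LUN maps to the set of initiator WWNs allowed to see it, and any other initiator gets nothing, even if zoning lets it reach the storage port.

```python
# Illustrative model of subsystem-based LUN masking. All identifiers
# here are hypothetical; real arrays enforce this internally.
LUN_MASKS = {
    # LUN id -> set of host HBA WWNs permitted to access it
    "LUN_0001": {"10:00:00:00:c9:2a:11:22"},
    "LUN_0002": {"10:00:00:00:c9:2a:11:22", "10:00:00:00:c9:2a:33:44"},
}

def visible_luns(host_wwn):
    """Return the LUNs the subsystem would present to this initiator."""
    return sorted(lun for lun, wwns in LUN_MASKS.items() if host_wwn in wwns)
```

A host whose WWN appears in no mask sees an empty device list -- which is exactly what keeps a Windows server from stamping a signature on a Unix volume it was never meant to touch.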

Either a host-centric or a storage-centric approach can be used for zoning. It's really just a matter of preference, as long as your storage subsystems support security at the LUN level. If you use a storage-centric approach without LUN-level security in the array and have more than one server per storage port, your servers may be able to see each other's LUNs on that port. Unless you're using clusters, this is a BAD thing.
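The "matter of preference" point can be checked mechanically: both layouts grant exactly the same host-to-storage connectivity, they just slice the zone membership differently. The toy model below (hypothetical names, small counts) builds both layouts and compares the host/storage pairs each one allows.

```python
# Toy comparison of the two zoning styles from the question.
# Names and counts are illustrative only.
hosts = [f"host{h}_hba{n}" for h in range(3) for n in range(2)]
storage_ports = [f"shark{s}_port{p}" for s in range(2) for p in range(2)]

def host_centric(hosts, storage_ports):
    # One zone per host HBA, containing all the storage ports it needs.
    return {f"z_{h}": [h] + storage_ports for h in hosts}

def storage_centric(hosts, storage_ports):
    # One zone per storage port, containing all host HBAs that need it.
    return {f"z_{sp}": [sp] + hosts for sp in storage_ports}

def pairs(zones):
    """The host<->storage pairs a zoning layout actually allows."""
    allowed = set()
    for members in zones.values():
        hs = [m for m in members if m.startswith("host")]
        sps = [m for m in members if m.startswith("shark")]
        allowed.update((h, sp) for h in hs for sp in sps)
    return allowed
```

With full any-to-any access, as in the question, `pairs(host_centric(...))` equals `pairs(storage_centric(...))`; the practical difference is which set of zones you have to touch when a host or a subsystem changes.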

With that many subsystems, I would check with IBM before you go around messing with all your zones. See what they believe is "best practice" for the Shark.


Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.
