This article can also be found in the Premium Editorial Download "Storage magazine: .NET server storage: Friendly or not?"
Novice users exploring SANs for the first time often believe they can simply buy off-the-shelf components, plug them in and instantly have a network on which storage can be easily shared and reallocated. In this ideal world, users could automatically share files and directories between servers with no conflicts, and could easily share files across platforms. They would also be able to create new storage and share it among servers connected to the SAN - all through their operating system's disk management tools.
Unfortunately, that's not the case. In real life, trying to plug and play a storage network is inviting data disaster. The reality of storage networks is that careful administration is required for everything to work smoothly. You can't just plug in an array, for example, and expect it to be available to the correct hosts or to interoperate properly.
Windows reigns supreme
Because storage networking technology was developed only recently, few operating systems are aware of the storage network; most assume that all the storage they see is theirs alone. SANs evolved from parallel SCSI roots, so most operating systems continue to use the outdated model of dedicated storage, treating the storage network as just a long SCSI bus. This leads to problems as those systems bump up against assumptions that no longer apply to shared storage.
One of the issues administrators cite most often is the behavior of the Windows operating system. Windows systems write an identification label to every disk discovered by Disk Administrator - and on a SAN, that means every disk on the storage network. With shared storage, this label will be written to disks that may be in use by another operating system - Solaris, for example - often corrupting the shared volume.
Don Whitlow, a storage administrator at Sussex, WI-based Quad/Graphics, manages two SAN islands of about 2TB each. He says, "Windows NT will clobber everything it sees on the SAN because it wants to own everything it can see." As a result, he says, "We have to carefully manage our network to prevent this from happening."
It's in the box
Luckily for users, today's hardware offers a wealth of features that ease the management of a SAN. From storage arrays to HBAs and switches, SAN hardware gives users plenty of help in managing their storage.
Whitlow says, "We use LUN-based masking at the HBA, along with switch zoning to control our SAN," which allows him to fully manage equipment with a minimal amount of trouble. "That's worked out really well for us." His site has three Compaq subsystems, several Compaq-branded 16-port switches and a variety of HBAs in IBM-AIX and NT boxes on the same SAN. "Either one of the techniques would work; however, we're using both together very successfully," he says.
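The LUN masking Whitlow describes amounts to a per-host allow list enforced at the HBA: discovery reports only the LUNs a host is permitted to see. The sketch below is a hypothetical model for illustration only - the host names, LUN numbers and function are invented, not Compaq's or any vendor's actual driver interface:

```python
# Toy model of HBA-based LUN masking. Each host's HBA keeps an allow
# list of LUNs, and discovery returns only those. All names and
# numbers here are hypothetical.

LUN_MASKS = {
    "nt-host1":  {0, 1},      # the NT box may see LUNs 0 and 1
    "aix-host1": {2, 3, 4},   # the AIX box may see LUNs 2 through 4
}

def discover_luns(host, all_luns):
    """Return only the LUNs this host's mask permits it to see."""
    allowed = LUN_MASKS.get(host, set())
    return sorted(lun for lun in all_luns if lun in allowed)

# The NT host never discovers the AIX LUNs, so Windows cannot
# write its identification label to them.
print(discover_luns("nt-host1", range(5)))   # [0, 1]
```

Because the mask is applied at discovery time, disks outside the allow list simply never appear in the host's Disk Administrator - which is exactly the behavior needed to keep NT from "clobbering" other platforms' volumes.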
Switch-based zoning is the ability of a Fibre Channel (FC) switch to limit access to storage devices from selected hosts. Through software or hardware, the switch prevents devices from seeing each other if they aren't part of the same zone. For example, you could create a zone that only allows Solaris hosts, or a zone which is limited to your production database cluster. Other devices won't be able to see your storage or your hosts.
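The access control a zone provides boils down to a membership check: two devices can communicate only if at least one zone contains them both. The following sketch models that rule in a few lines of Python - the zone names and device names are hypothetical, and this is an illustration of the concept, not any switch vendor's firmware:

```python
# Toy model of Fibre Channel switch zoning: a device can see another
# device only if the two share at least one zone. Zone and device
# names below are hypothetical.

def can_see(zones, device_a, device_b):
    """Return True if any zone contains both devices."""
    return any(device_a in members and device_b in members
               for members in zones.values())

zones = {
    "solaris_zone": {"sol-host1", "sol-host2", "array-lun0"},
    "prod_db_zone": {"db-node1", "db-node2", "array-lun1"},
}

# A Solaris host can reach storage in its own zone...
print(can_see(zones, "sol-host1", "array-lun0"))   # True
# ...but not storage zoned to the database cluster.
print(can_see(zones, "sol-host1", "array-lun1"))   # False
```

A device may belong to several zones at once, which is how a shared tape library, for instance, could be made visible to otherwise isolated groups of hosts.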
Zones can be set up in one of three ways: The first - and simplest - is to restrict traffic between the physical ports of a switch. This is referred to as port zoning, and is used when users want to restrict physical connections to a switch, regardless of what host or piece of storage is attached in that location.
This was first published in August 2002