Networking questions about servers and switches in a SAN, Part 2
We have about 7.5TB of Dell/EMC FC4700 storage directly attached to a single W2K server with dual HBAs, with about 3-4TB being added per year. We are thinking about getting a switch (or two, for redundancy) so we can put the LTO tape library that is currently directly attached to the same server onto the SAN, to make the backup throughput rate tolerable. Here are several questions:
1. Is it a bad idea to have one server own all of the LUNs on the FC4700 -- for a variety of reasons: single point of failure, too much disk space for one server to manage, etc.?
2. Does implementing switches sound justified just to relieve the unacceptably slow backups? Right now the throughput rate is about 3-4MB/s with hardware compression; the LTO drives are rated at 15MB/s without compression.
3. If switches are implemented and more servers are added, can the same LUN be available to more than one server without using special software to create snapshots and clones? For example, if there is a file server assigned to LUN1 and a SQL server assigned to LUN2, can the SQL server access the files on LUN1? Can two nodes in an active cluster access the same LUN? What part of the SAN provides that capability -- zoning of the switches?
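The numbers in question 2 make the pain concrete. A back-of-envelope calculation (assuming binary terabytes and a sustained rate, both simplifications) shows the gap between the observed throughput and the drives' native rating:

```python
def backup_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to stream data_tb (binary TB) to tape at a sustained rate_mb_s MB/s."""
    return data_tb * 1024 * 1024 / rate_mb_s / 3600

# Full pass over the 7.5TB array at the observed ~3.5MB/s vs the LTO native 15MB/s:
print(round(backup_hours(7.5, 3.5)))   # -> 624 hours
print(round(backup_hours(7.5, 15.0)))  # -> 146 hours
```

Even at the drives' full native rate a complete backup of the array is a multi-day job, which is why most shops combine faster transport with incremental or differential backup schemes.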
"...if there is a file server assigned to LUN1 and an SQL server assigned to LUN2, can the SQL server access the files in LUN1?"
Yup, but over IP, not through the SAN, unless you use something like SANergy from IBM. A SAN file system -- in IBM's SANergy implementation, anyway -- allows CIFS or NFS metadata access to SAN-based files through a metadata server. Basically, requests for access to data go over an IP connection, while access to the data itself is redirected through the SAN connection at SAN speeds. Since each server in the SAN has both a SAN connection and an IP connection to the metadata server, any SAN client can use CIFS or NFS metadata calls to request access to, and share, files that live in the SAN.
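The split-path idea can be sketched in a few lines. This is only an illustration of the concept, not SANergy's actual API -- the class names, the layout table, and the file path are all made up:

```python
class MetadataServer:
    """Answers "where does this file live?" over the IP/LAN path."""
    def __init__(self):
        # path -> (LUN, byte offset, length); contents are illustrative
        self.layout = {"/vol/report.db": ("LUN1", 0, 5)}

    def locate(self, path):
        return self.layout[path]


class SanClient:
    """A server with both an IP link (to the MDS) and a SAN link (to the blocks)."""
    def __init__(self, mds, lun_blocks):
        self.mds = mds
        self.lun_blocks = lun_blocks  # stands in for direct FC block access

    def read(self, path):
        lun, offset, length = self.mds.locate(path)        # tiny request over IP
        return self.lun_blocks[lun][offset:offset + length]  # bulk data over the SAN


mds = MetadataServer()
client = SanClient(mds, {"LUN1": b"hello-world-data"})
print(client.read("/vol/report.db"))  # -> b'hello'
```

The point is the asymmetry: only the small locate() call crosses the LAN; the heavy I/O never leaves the Fibre Channel path.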
"Can two nodes in an active cluster access the same LUN? What part of SAN provides that capability - zoning of the switches?"
Yes, but not at the same time (at least in a W2K cluster, except for the quorum disk resource). LUN security in the storage array allows access to the same LUN in the SAN at a hardware level. This is done by assigning LUN access to the World Wide Names (WWNs) of the host bus adapters in each server. Say, for example, you have two servers with two HBAs in each server. Using LUN security, you would assign access for all four WWNs (two for each server) to the LUN in question. On NT (W2K) you need to be very careful when doing this, since each server will want to write its own signature on the same LUN. In an NT (W2K) cluster, you create a disk resource on the first node that is installed in the cluster. When another node is added to the cluster, it queries the shared quorum disk, where the registry for the cluster is located. As long as your LUN security is set up correctly in the SAN, ownership of the disk resource will be able to fail over among all the cluster nodes, with one node being the "primary" owner for that resource and the others being failover nodes for it. The MSCS cluster software provides the lock management needed for the shared LUNs in the SAN.
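Conceptually, array-based LUN security is just an access table the array consults before presenting a LUN to an initiator. A minimal model of the two-server, four-HBA example above (the WWNs here are invented placeholders, not real addresses):

```python
# Array-side masking table: which HBA WWNs may see which LUN.
# Two cluster nodes, two HBAs each, all four granted access to LUN1.
masking = {
    "LUN1": {
        "10:00:00:00:c9:aa:aa:01", "10:00:00:00:c9:aa:aa:02",  # node A's HBAs
        "10:00:00:00:c9:bb:bb:01", "10:00:00:00:c9:bb:bb:02",  # node B's HBAs
    },
}

def may_access(lun: str, wwn: str) -> bool:
    """The array presents a LUN only to initiators on its mask list."""
    return wwn in masking.get(lun, set())

print(may_access("LUN1", "10:00:00:00:c9:bb:bb:02"))  # -> True
print(may_access("LUN1", "10:00:00:00:c9:ff:ff:99"))  # -> False (unmasked HBA)
```

Note what this table does not do: it says nothing about coordinating simultaneous writes. That is exactly why the MSCS lock management described above is still required on top of LUN security.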
This was first published in October 2003