There is one aspect of SAN design that is in many ways vital when choosing a switch vendor -- SAN topology. When do you use big switches (directors or core switches), when do you use small switches (edge or departmental switches), and when should you use a mixture (core-edge design)? Part 1 of this series helped you determine what size switch you need. Part 2 discusses how to design your network using different SAN topologies.
So, you've determined how big a switch you need. The next step is deciding which topology to use when designing your SAN.
If you've chosen just small switches, the full mesh is the simplest topology to use, understand and manage. In this case, every switch is connected to every other switch.
In reality, while a network provides any-to-any connectivity, not every device in a network needs good connectivity to every other device. However, building a full mesh where there is good any-to-any connectivity does mean that you can pretty much ignore locality. You can just plug any device in anywhere knowing that there is enough bandwidth.
The downside is that a full mesh is only really practical for up to four or five switches; beyond that, too many ports are consumed by interswitch links (ISLs) and too few are left for devices. Taken to the extreme, with 16-port switches you could build a 17-switch full mesh that would look very impressive and have no user ports at all. In practice, a full mesh of four or five 16-port switches yields roughly 40 to 60 user ports, depending on how many ISLs you run between each pair of switches. If you need more ports than that, 32- and 64-port switches are available.
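To see why the mesh stops scaling, here is a back-of-the-envelope sketch of the port arithmetic (plain Python; the function name is my own, not part of any vendor tool):

```python
def full_mesh_user_ports(ports_per_switch, switches, isls_per_pair=1):
    """Ports left for devices in a full mesh: each switch spends
    isls_per_pair ports connecting to every other switch."""
    isl_ports_per_switch = isls_per_pair * (switches - 1)
    if isl_ports_per_switch >= ports_per_switch:
        return 0  # every port is an ISL; no room for devices
    return switches * (ports_per_switch - isl_ports_per_switch)

# Single ISLs between 16-port switches:
print(full_mesh_user_ports(16, 4))   # 52
print(full_mesh_user_ports(16, 5))   # 60
# Doubled ISLs for extra bandwidth:
print(full_mesh_user_ports(16, 4, isls_per_pair=2))  # 40
# The pathological 17-switch mesh: all ISLs, zero user ports.
print(full_mesh_user_ports(16, 17))  # 0
```

Note how quickly the ISL overhead grows: it scales with the square of the switch count, while user ports only scale linearly.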
If you want or need to build a larger SAN with small switches, then the most common design is core-edge. Typically, you start with two core switches and connect every edge switch to both cores. Depending on your bandwidth requirements, 16-port switches let you hang 16 edge switches off the two cores, yielding roughly 200 usable ports.
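The same sort of arithmetic applies to the core-edge design. This sketch (again plain Python, names my own) assumes the cores carry only ISLs, so all user ports live on the edge:

```python
def core_edge_user_ports(ports_per_switch, edge_switches,
                         isls_per_core=1, cores=2):
    """User ports in a core-edge fabric: every edge switch runs
    isls_per_core ISLs to each core, and the cores carry only ISLs."""
    # Each core must terminate one ISL group per edge switch.
    if edge_switches * isls_per_core > ports_per_switch:
        raise ValueError("core switch has too few ports")
    isl_ports_per_edge = isls_per_core * cores
    return edge_switches * (ports_per_switch - isl_ports_per_edge)

# 16 edge switches, one ISL to each of two 16-port cores:
print(core_edge_user_ports(16, 16))                  # 224
# Doubling the ISLs halves how many edge switches fit:
print(core_edge_user_ports(16, 8, isls_per_core=2))  # 96
```

This is where the "depending on your bandwidth requirements" caveat bites: adding ISLs for bandwidth directly trades away user ports and edge-switch count.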
The main design issue with this approach is that the core is purely used to interconnect the edge. This means that the servers and storage are all connected to the edge and, typically, we start to consider localization.
In a design like this, if we had a large disk array with, say, 16 ports, eight of those ports would be connected to this fabric and the other eight to a separate fabric. That means one array port on each of eight edge switches. So when allocating storage, we look at which disk array ports have spare bandwidth and I/O capacity, as well as which port is connected to the same switch as the server that needs the space. Similarly, if we are using smaller arrays with only a few ports each, we have to hope the array on the same switch as the server has bandwidth, I/O capacity and spare space.
I do not think we should focus too much on localization. In reality, this sort of core-edge design has, worst case, two ISLs and three switches between server and storage, and has fairly good end-to-end bandwidth.
However, if we build a core-edge SAN using large switches at the core and small switches at the edge, we get two big advantages over an all-small-switch core-edge design. First, we can easily build the SAN out to 500, 1,000 or more ports. Second, if we put servers at the edge and storage at the core, the environment becomes very easy to manage and understand: no matter how we allocate storage to the servers, the traffic always crosses exactly two switches and one ISL. Assuming a sensible number of ISLs from each edge switch (easy to do with 2-Gbps links), we have ample bandwidth. Allocating storage is therefore simple, and localization is not a consideration.
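As a rough sanity check on "ample bandwidth," you can compute the edge-to-core oversubscription ratio: worst-case server demand divided by ISL capacity. The numbers below are illustrative, assuming 1-Gbps server HBAs behind 2-Gbps ISLs (typical speeds when this was written):

```python
def oversubscription(server_ports, server_gbps, isls, isl_gbps):
    """Worst-case ratio of edge demand to ISL capacity toward the core."""
    return (server_ports * server_gbps) / (isls * isl_gbps)

# A 16-port edge switch with 12 server ports at 1 Gbps behind
# four 2-Gbps ISLs (two to each core) is only 1.5:1 oversubscribed.
print(oversubscription(12, 1, 4, 2))  # 1.5
```

Since servers rarely drive their HBAs at line rate simultaneously, a low single-digit ratio like this is generally comfortable.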
Looking at cable consolidation, as I discussed in Part 1, most data centers have racks of servers. A 42U rack might hold some 20 2U Windows NT servers, each with two Fibre Channel HBAs. Or it might hold just one or two high-end servers, each with five or 10 HBAs. Either way, an edge switch in the server rack consolidates the cabling back to the core and the storage.
But why, I hear you all ask, do we not just build a SAN from multiple large switches only? For one -- cabling. Most data centers have server racks and storage racks. This means that a core-edge design, using small switches for the servers connecting back to large switches, simply makes cabling easier in your average data center. Unless of course you already have massive amounts of structured optical cabling in place.
Another consideration is cost. Small switches cost less per port, so a core-edge design may well reduce the average cost per user port. Depending on your environment, this may be more or less critical. In a Wintel environment, the cost of a Fibre Channel port as a proportion of the cost of the server may be quite high, whereas in a Unix environment this may be less of an issue.
Then, of course, you could reuse small switches that have been purchased over the last few years. Even if you are starting to deploy SANs now, you may feel you want to start with smaller switches to dip your toe in the water.
That being said, there are still some cases where I see SANs constructed using only large switches.
There is no one topology or approach that applies to everyone. Always keep in mind that this is a network. It will grow over time. So when choosing a small or large switch, and a topology, think about the long-term implications. In any environment, you already have a lot of servers and storage, you probably know what types of servers you will typically purchase in the future and you can probably make a good guess as to what systems would make sense to incorporate in the SAN. Knowing these parts, you can fairly easily build out your SAN.
About the author: Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than five years. He specializes in distance solutions and business continuity. Simon has worked in the IT industry for more than 20 years, in a variety of technologies and business sectors including software development, systems integration, Unix and open systems, Microsoft infrastructure design, and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.
This was first published in January 2003