
# SAN topologies, part 1: Know your switch options

## What size switch to use when designing a SAN.

Storage area network (SAN) topologies are vital to choosing a switch vendor: When do you use big switches (directors or core switches), when do you use small switches (edge or departmental switches) and when should you use a mixture (core-edge design)? Part 1 of this series helps you determine what size switch is needed. Part 2 discusses how to design your network using different SAN topologies.

Most switch vendors sell switches with 8, 16, 32 and 64 ports. Sometimes you'll find other mid-size switches -- 24 or 48 ports -- and other times you'll find larger switches -- 128, 140 and 256 ports. So let's start by making some assumptions and posing some questions.

### Assumptions

• In most cases, switch manufacturers make small switches (32 ports and below) and large switches (64 ports and above).

• Small switches cost less per port than large switches, because it is harder to design a big switch than a small one. But keep in mind that you lose ports when connecting switches together.

• Most SAN switches today are 2 Gbit/sec.
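The port-loss assumption above is easy to make concrete. Here is a small Python sketch (the function name and the full-mesh topology are illustrative assumptions, not anything prescribed in this article) that counts how many ports remain for servers and storage once inter-switch links (ISLs) are cabled between a group of small switches:

```python
def usable_ports(num_switches, ports_per_switch, isls_per_pair):
    # Full mesh: every pair of switches is joined by isls_per_pair
    # inter-switch links; each ISL consumes one port on each end.
    pairs = num_switches * (num_switches - 1) // 2
    total = num_switches * ports_per_switch
    return total - 2 * pairs * isls_per_pair

# Four 16-port switches in a full mesh with 2 ISLs per switch pair:
# 64 raw ports, minus 2 * 6 * 2 = 24 ports consumed by ISLs.
print(usable_ports(4, 16, 2))  # -> 40
```

In this example, more than a third of the raw ports are spent on switch-to-switch connectivity, which is exactly why the per-port price advantage of small switches shrinks as the fabric grows.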

### Questions

• How large a switch makes sense?

• How large a SAN can or should I build from small switches?

• For a small SAN, should I use a single large switch rather than a collection of small switches?

• For a large SAN, why would I use a core-edge design rather than core switches only?

There are many different topologies that can be used for interconnecting switches. For this discussion, let's assume we have a SAN contained within one data center.

### How big a switch?

Let us remember that SAN stands for storage area networking. If SANs make sense, then bigger SANs (up to a point) make more sense. Therefore, no matter what size box we buy, at some point we will start networking boxes together. This poses a challenge: A fatter pipe is needed to connect the larger SAN boxes.

Let's do some math. One basic assumption when designing SANs is that we are consolidating storage: We are sharing a disk array among multiple servers, and sharing each port of the disk array among multiple servers. Typically, three to six servers per disk-array port is considered reasonable. For most servers, I/Os per second matter more than bandwidth, and it is not often that we exceed 20% to 30% of the bandwidth on a server HBA.
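That consolidation arithmetic can be sketched in a few lines of Python. This is a hypothetical illustration only; the function and the sample figures are my assumptions built from the article's "three to six servers per port" and "20% to 30% utilization" rules of thumb:

```python
def array_port_load(servers_per_port, hba_gbps, utilization):
    # Aggregate offered load onto one disk-array port, assuming each
    # server HBA averages the given fraction of its link rate.
    return servers_per_port * hba_gbps * utilization

# Six servers with 1 Gbit/sec HBAs at 30% average utilization
# sharing a single array port:
load = array_port_load(6, 1.0, 0.30)
print(round(load, 2))  # -> 1.8
```

Under these assumptions, even an aggressive six-to-one fan-out offers only about 1.8 Gbit/sec to the array port, which is why the oversubscription works in practice.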

When connecting 1 Gbit/sec switches together, the 8-, 16- and 32-port switches can be readily networked without hitting performance problems and without having to worry about which devices are talking to each other. When we get to 1 Gbit/sec 64-port switches, it becomes very hard to design a SAN with significant amounts of data moving switch-to-switch. So I would suggest that at 1 Gbit/sec, no more than 32 ports makes sense.

What about 2 Gbit/sec switches? We have to start from a simple fact: A server with a 2 Gbit/sec HBA does not give me two times the bandwidth of a 1 Gbit/sec HBA. It will be a bit faster and will deliver more I/Os per second, but that is largely because the HBA is newer and more intelligent. Having made this assumption, we now find that we can interconnect switches of 64 and maybe 128 ports in a reasonable fashion using 2 Gbit/sec ISLs, particularly if we have good load balancing or trunking. However, even at 128 ports, we have to start thinking about localizing traffic. Any larger and life gets very difficult.
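To see roughly why 2 Gbit/sec ISLs make 64- and 128-port fabrics workable, here is a back-of-envelope Python estimate. It is illustrative only; `isls_needed`, its parameters and the traffic split are my assumptions, not a formula from this article:

```python
import math

def isls_needed(edge_ports, port_gbps, utilization, cross_fraction, isl_gbps):
    # Traffic that must cross to another switch, divided by the
    # capacity of one inter-switch link, rounded up.
    cross_traffic = edge_ports * port_gbps * utilization * cross_fraction
    return math.ceil(cross_traffic / isl_gbps)

# 28 device ports on a 32-port switch, 2 Gbit/sec links at 25% average
# utilization, half the traffic leaving the switch, 2 Gbit/sec ISLs:
print(isls_needed(28, 2.0, 0.25, 0.5, 2.0))  # -> 4
```

Four ISLs out of 32 ports is tolerable; repeat the same estimate with more ports or a higher cross-switch fraction and the ISL count quickly eats into the fabric, which is the "localizing traffic" problem described above.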

Why not make my switches even bigger? Things start to get really tough. The bigger the switch, the more it will cost per port, assuming that it is a true single switch with non-congesting performance for any-to-any port connectivity with all the ports running at full speed. You can certainly argue that we do not need such a switch design. After all, servers do not, and cannot, actually use all the bandwidth of a 1 Gbit/sec port, let alone a 2 Gbit/sec one. By definition, we are over-subscribing connections on the disk arrays, and so on.

If you look at the IP network world, we know not all switches are equal: We choose whether or not to pay more for a switch that delivers more I/Os per second. In addition, there is a limit to how large most people are comfortable making a single fabric. Without some way of splitting a SAN into separate subnets for manageability, big SANs can be challenging: They are difficult to manage with the current state of management tools, they raise scalability concerns, and so on. Is the limit 500 ports? 100 ports? It's hard to say.

My favorite point, though, is cabling. Unless you are lucky enough to have lots of structured optical cabling throughout your data center, having one big switch in the middle of the room can be a cabling nightmare. By contrast, with a number of switches in different locations around the data center, we can consolidate the cabling, reduce cable complexity and create a more usable physical environment. And while we talk about heterogeneous SANs, there can be advantages to some level of homogeneous design, such as connecting all the Microsoft servers to one switch and all the UNIX servers to another.

In my next tip, I will discuss how to build a network using different SAN design topologies.

About the author: Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than five years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.

This was last published in January 2003
