# SAN topologies, part 1: Know your switch options

## What size switch to use when designing a SAN.

Storage area network (SAN) topologies are vital when choosing a switch vendor: When do you use big switches (directors or core switches), when do you use small switches (edge or departmental switches), and when should you use a mixture (core-edge design)? Part 1 of this series helps you determine what size switch is needed. Part 2 discusses how to design your network using different SAN topologies.

Most switch vendors can sell you switches with 8, 16, 32 and 64 ports. Sometimes you'll find other mid-size switches -- 24 or 48 ports -- and other times you'll find larger switches -- 128, 140 and 256 ports. So let's start by making some assumptions and posing some questions.

### Assumptions

• In most cases, switch manufacturers make small switches (32 ports and below) and large switches (64 ports and above).

• Small switches cost less per port than large switches, because it is harder to design a big switch than a small one. But keep in mind that you lose ports when connecting switches together.

• Most SAN switches today run at 2 Gbit/sec.

### Questions

• How large a switch makes sense?

• How large a SAN can or should I build from small switches?

• For a small SAN, should I use a single large switch rather than a collection of small switches?

• For a large SAN, why would I use a core-edge design rather than core switches only?

There are many different topologies for interconnecting switches. For this discussion, let's assume the SAN is contained within one data center.

### How big a switch?

Let us remember that SAN stands for storage area network. If SANs make sense, then bigger SANs (up to a point) make more sense. Therefore, no matter what size box we buy, at some point we will start networking the boxes together. This poses a challenge: a fatter pipe is needed to connect large SAN boxes.

Let's do some math. One basic assumption when designing SANs is that we are consolidating storage: we are sharing a disk array among multiple servers, and sharing each port of the disk array among multiple servers. Typically, three to six servers per array port is considered reasonable. For most servers, I/Os per second matter more than bandwidth, and it is not often that a server HBA exceeds 20% to 30% of its bandwidth.
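To see why that fan-out works, here is a minimal sketch of the arithmetic, using illustrative numbers from the figures above (the function name and the 100 MB/sec payload figure for a 1 Gbit/sec Fibre Channel link are my assumptions, not measurements):

```python
# Hedged sketch: aggregate demand that several oversubscribed servers
# place on one shared disk-array port. All figures are illustrative.

def array_port_demand_mb(servers_per_port, hba_speed_mb, hba_utilization):
    """Rough aggregate load (MB/sec) on one array port shared by
    servers_per_port servers, each using a fraction of its HBA."""
    return servers_per_port * hba_speed_mb * hba_utilization

# A 1 Gbit/sec Fibre Channel HBA carries roughly 100 MB/sec of payload.
# Four servers at 25% utilization just fill one 1 Gbit/sec array port.
demand = array_port_demand_mb(servers_per_port=4,
                              hba_speed_mb=100,
                              hba_utilization=0.25)
print(f"Aggregate demand on one array port: {demand:.0f} MB/sec")
```

With four servers at 25% utilization, the shared port is, on average, exactly full -- which is why three to six servers per port is a sensible range rather than a hard rule.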

When connecting 1 Gbit/sec switches together, the 8-, 16- and 32-port switches can be readily networked without hitting performance problems and without having to worry about which devices are talking to each other. When we get to 1 Gbit/sec 64-port switches, it becomes very hard to design a SAN that moves significant amounts of data switch-to-switch. So I would suggest that at 1 Gbit/sec, no more than 32 ports makes sense.

What about 2 Gbit/sec switches? We have to start from a simple fact: a server with a 2 Gbit/sec HBA does not give you twice the bandwidth of a 1 Gbit/sec HBA. It will be a bit faster and will deliver more I/Os per second, but much of that can be attributed to the HBA being newer and more intelligent. Having made this assumption, we find that we can interconnect switches of 64 and maybe 128 ports in a reasonable fashion using 2 Gbit/sec ISLs, particularly if we have good load balancing or trunking. However, even at 128 ports, we have to start thinking about localizing traffic. Any larger and life gets very difficult.
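The ISL-sizing logic above can be sketched the same way. This is a back-of-the-envelope estimate, not a design tool; the function, the 50% cross-switch traffic fraction and the utilization figure are illustrative assumptions:

```python
import math

# Hedged sketch: how many ISLs a switch needs so that the traffic
# leaving it for other switches fits. All ratios are illustrative.

def isls_needed(edge_ports, hba_utilization, cross_switch_fraction,
                isl_speed_ratio):
    """ISL count needed when edge_ports server ports each run at
    hba_utilization, cross_switch_fraction of that traffic crosses
    to another switch, and each ISL is isl_speed_ratio times the
    speed of a server port."""
    cross_traffic = edge_ports * hba_utilization * cross_switch_fraction
    return math.ceil(cross_traffic / isl_speed_ratio)

# 28 server ports at 25% utilization, half the traffic leaving the
# switch, over 2 Gbit/sec ISLs (twice the server-port speed): 2 ISLs.
print(isls_needed(edge_ports=28, hba_utilization=0.25,
                  cross_switch_fraction=0.5, isl_speed_ratio=2.0))
```

Run the same numbers with 1 Gbit/sec ISLs (`isl_speed_ratio=1.0`) and the count doubles, which is the arithmetic behind preferring faster links, trunking and localized traffic as switch counts grow.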

Why not make the switches even bigger? Things start to get really tough. The bigger the switch, the more it will cost per port -- assuming it is a true single switch with non-congesting, any-to-any port connectivity with all ports running at full speed. You can certainly argue that we do not need such a switch design. After all, servers cannot actually use all the bandwidth of a 1 Gbit/sec port, let alone a 2 Gbit/sec one, and by definition we are over-subscribing the connections on the disk arrays anyway.

If you look at the IP network world, not all switches are equal, and we choose whether or not to pay more for a switch that handles more I/Os per second. In addition, there is a limit to how large most people are comfortable making a single fabric. Without some way of splitting a SAN into separate subnets for manageability, big SANs can be challenging: they are difficult to manage with the current state of management tools, they raise scalability concerns, and so on. Is the limit 500 ports? 100 ports? It's hard to say.

My favorite point, though, is cabling. Unless you are lucky enough to have lots of structured optical cabling throughout your data center, one big switch in the middle of the room can be a cabling nightmare. With a number of switches in different locations in the data center, we can consolidate the cabling, reduce cable complexity and have a more usable physical environment. And while we talk about heterogeneous SANs, there can be advantages to some level of homogeneous design, such as connecting all the Microsoft servers to one switch and all the UNIX servers to another.

In my next tip, I will discuss how to build a network using different SAN design topologies.

About the author: Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.

This was last published in January 2003
