
# SAN topologies, part 1: Know your switch options

## What size switch to use when designing a SAN.

Storage area network (SAN) topologies are vital to choosing a switch vendor: When do you use big switches (directors or core switches), when do you use small switches (edge or departmental switches), and when should you use a mixture (core-edge design)? Part 1 of this series helps you determine what size switch is needed. Part 2 discusses how to design your network using different SAN topologies.

Most typical switch vendors can sell you switches with 8, 16, 32 and 64 ports. Sometimes you'll find other mid-size switches -- 24 or 48 ports -- and other times you'll find larger switches -- 128, 140 and 256 ports. So let's start by making some assumptions and posing some questions.

### Assumptions

• In most cases, the switch manufacturer makes small switches (32 ports and below) and large switches (64 ports and above).

• Small switches cost less per port than large switches; it is harder to design a big switch than a small switch. But keep in mind that you lose ports when connecting switches together.

• Most SAN switches today are 2 Gbit/sec.

### Questions

• How large a switch makes sense?

• How large a SAN can or should I build from small switches?

• For a small SAN, should I use a single large switch rather than a collection of small switches?

• For a large SAN, why would I use a core-edge design rather than core switches only?

There are many different topologies that can be used for interconnecting switches. For this discussion, let's assume we have a SAN contained within one data center.

### How big a switch?

Let us remember that SAN stands for storage area networking. If SANs make sense, then bigger SANs (up to a point) make more sense. Therefore, no matter what size box we buy, at some point we will start networking them together. This poses a challenge: a fatter pipe is needed to connect large SAN boxes.

Let's do some math. One basic assumption when designing SANs is that we are consolidating storage: we are sharing a disk array with multiple servers and sharing each port of the disk array among multiple servers. Typically, three to six servers per disk-array port is considered reasonable. For most servers, I/O's per second matter more than bandwidth, and it is not often that we exceed 20% to 30% of the bandwidth on a server HBA.
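The consolidation arithmetic above can be sketched in a few lines. This is just the article's rules of thumb (three to six servers per array port, HBAs running at 20% to 30% of link speed) turned into a formula; the function name and figures are illustrative, not a vendor sizing tool.

```python
# Rough consolidation math using the article's rules of thumb:
# 3 to 6 servers share each disk-array port, and a server HBA
# rarely exceeds 20-30% of its link bandwidth.

def array_port_demand_gbps(servers_per_port, hba_speed_gbps, utilization):
    """Aggregate bandwidth the servers behind one array port may drive."""
    return servers_per_port * hba_speed_gbps * utilization

# Worst of the suggested range: 6 servers each driving 30% of a 1G HBA.
demand = array_port_demand_gbps(6, 1.0, 0.30)
print(f"{demand:.1f} Gbit/s offered to a 1 Gbit/s array port")  # 1.8
```

Even at the aggressive end of the range, the offered load is under twice the array port's speed, which is why this level of over-subscription is generally considered acceptable.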

When connecting 1G switches together, the 8-, 16- and 32-port switches can be readily networked without hitting performance problems and without having to worry about which devices are talking to each other. When we get to 1G 64-port switches, it becomes very hard to design a SAN with significant amounts of data moving switch-to-switch. So I would suggest that at 1G, no more than 32 ports makes sense.

What about 2G switches? We have to start from a simple fact: a server with a 2G HBA does not give me twice the bandwidth of a 1G HBA. It will be a bit faster and will give more I/O's per second, but this can be attributed to the HBA being newer and more intelligent. Having made this assumption, we now find that we can interconnect switches of 64 and maybe 128 ports in a reasonable fashion using 2G ISLs, particularly if we have good load balancing or trunking. However, even at 128 ports, we have to start thinking about localizing traffic. Any larger and life gets very difficult.
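The ISL sizing argument in the last two paragraphs can be made concrete with a back-of-envelope calculation: take the ports whose traffic crosses between switches, apply the article's 20% to 30% HBA utilization assumption, and divide by the ISL speed. The function and the ceiling-based sizing below are an illustrative sketch, not a vendor formula, and they ignore load-balancing efficiency.

```python
import math

# Back-of-envelope ISL sizing between two switches, assuming each HBA
# averages a given fraction of its link speed (the article suggests
# 20-30%). Illustrative only; real designs must consider trunking
# efficiency and traffic locality.

def isls_needed(cross_switch_ports, hba_speed_gbps, utilization, isl_speed_gbps):
    """Smallest number of ISLs that can carry the offered cross-switch load."""
    offered_gbps = cross_switch_ports * hba_speed_gbps * utilization
    return max(1, math.ceil(offered_gbps / isl_speed_gbps))

# 16 ports on a 1G switch sending traffic across at 30% utilization
# offer 4.8 Gbit/s of switch-to-switch load.
print(isls_needed(16, 1.0, 0.30, 1.0))  # 5 ISLs at 1G
print(isls_needed(16, 1.0, 0.30, 2.0))  # 3 ISLs at 2G
```

This is why 2G ISLs (and trunking) let larger switches be networked sensibly: each ISL consumed is a port you cannot sell to a server, so halving the ISL count matters.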

Why not make my switches even bigger? Things start to get really tough. The bigger the switch, the more it will cost per port, assuming that it is a true single switch with non-congesting performance for any-to-any port connectivity with all the ports running at full speed. You can certainly argue that we do not need such a switch design. After all, servers do not -- and cannot -- actually use all the bandwidth of a 1G port, let alone a 2G one. By definition, we are over-subscribing connections on the disk arrays, and so on.

If you look at the IP network world, we know not all switches are equal. We choose whether or not to pay more for a switch that runs more I/O's per second. In addition, there is a limit to how large most people are comfortable taking a single fabric. Without some way of splitting a SAN into separate subnets for manageability, we find that big SANs can be challenging: They can be difficult to manage with the current state of management tools, they can lead to scalability concerns, etc. Is the limit 500 ports? 100 ports? It's hard to say.

My favorite point, though, is cabling. Unless you are lucky and have lots of structured optical cabling throughout your data center, having one big switch in the middle of the room can be a cabling nightmare. By contrast, with a number of switches in different locations in the data center, we can consolidate the cabling, reduce cable complexity and have a more usable physical environment. While we talk about heterogeneous SANs, there can be advantages to some level of homogeneous design, such as having all the Microsoft servers connected to one switch and all the UNIX servers to another.

In my next tip, I will discuss how to build a network using different SAN design topologies.

About the author: Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors including software development, systems integration, Unix and open systems, Microsoft infrastructure design as well as storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.

This was last published in January 2003

