The best way to expand a SAN

SAN architecture
Besides redundancy, the determining factors for your SAN architecture are performance, cost and scalability. From an acquisition standpoint, the least-expensive SAN design combines multiple, interconnected, smaller port-count switches. Unfortunately, the Google model of using a large number of low-cost components doesn't work well for SANs, for two reasons. First, the links connecting the switches, known as inter-switch links (ISLs), are prone to congestion; as more switches are interconnected, performance becomes less predictable and bottlenecks more likely. Second, a higher number of switches means more complex storage management, which translates into higher ongoing maintenance costs for the SAN.
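
To put rough numbers on that tradeoff, the Python sketch below tallies how many ports are actually left for servers and storage once ISLs are reserved in a cascaded design. The switch sizes, ISL counts and chain topology are illustrative assumptions for the example, not vendor figures.

    # Hypothetical example: usable ports when a fabric is built from many small,
    # cascaded switches versus a single large director. All numbers are
    # assumptions chosen for illustration.
    def usable_ports(num_switches, ports_per_switch, isls_per_hop=2):
        """Ports left for servers and storage after reserving ports for ISLs.

        Assumes the switches are cascaded in a chain, with `isls_per_hop`
        ISLs between each adjacent pair; every ISL consumes a port on
        both switches it connects.
        """
        isl_ports = 2 * isls_per_hop * (num_switches - 1)
        return num_switches * ports_per_switch - isl_ports

    # Eight 16-port switches vs. one 128-port director (same raw port count):
    print(usable_ports(8, 16))   # 100 -- 28 of 128 ports are burned on ISLs
    print(usable_ports(1, 128))  # 128 -- no ISLs, no ISL choke points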

[Sidebar: SAN architecture choices]

Consequently, one SAN design goal is to minimize the number of switches to eliminate ISL performance choke points. "A small to midsized SAN with more than four FC [Fibre Channel] switches in a single data center is a clear sign of a wrong SAN architecture," says James Opfer, research vice president at Gartner Inc.'s storage research group. With switch/director port counts ranging from eight to 512 FC ports in a single device, most small to midsized SANs can be based on a single dual-core architecture that eliminates ISL bottlenecks.
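
One way to sanity-check whether a SAN still fits that model is a simple port-count exercise. The sketch below assumes a reading of "dual-core" as two directors with every server HBA port and storage port attached to both for redundancy; the director size and device counts are hypothetical.

    # Minimal sizing sketch, assuming "dual-core" means two directors with
    # every server HBA port and storage array port attached to both for
    # redundancy. The 256-port director and the device counts are hypothetical.
    def fits_dual_core(server_ports, storage_ports, director_ports=256):
        """True if each of the two core directors can host one connection
        from every server HBA and every storage array port."""
        return server_ports + storage_ports <= director_ports

    print(fits_dual_core(180, 40))  # True  -- fits with headroom for growth
    print(fits_dual_core(300, 60))  # False -- time to tier the SAN or upgrade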

An important concept for both growing a SAN and SAN performance is locality. As a rule of thumb, the closer a server is to its storage array, the better the performance. For instance, connecting a server and storage array to the same 16-port group on a 4Gb/sec blade in a Brocade Communications Systems Inc. SilkWorm 48000 director yields optimal data throughput, because the traffic is switched locally to the destination port without ever leaving the blade. If the server and array are on different blades in the same switch, all data needs to traverse the backplane. To make matters worse, if the server and storage array are on different switches, the data also has to traverse an ISL.

In small to midsized SANs, optimizing performance by harnessing locality is manageable, but it's impossible to control in very large SANs with tens of switches and thousands of ports. If the example above described a SAN with five daisy-chained switches, with the server and array connected at opposite ends of the chain, the worst case could mean traffic between the server and array traversing four ISLs.
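
The locality rules above can be captured in a few lines of code. The sketch below uses a simplified (switch, blade) address for each fabric port -- it is not the SilkWorm 48000's actual port-numbering scheme -- and also shows the daisy-chain worst case, where a chain of N switches puts its two ends N - 1 ISL hops apart.

    # Sketch of the locality rules described above. The (switch, blade)
    # addressing is a simplification for illustration, not the SilkWorm
    # 48000's actual port-numbering scheme.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FabricPort:
        switch: str   # switch or director the port lives on
        blade: int    # port blade within that switch

    def locality(a, b):
        if a.switch != b.switch:
            return "traffic crosses one or more ISLs"        # worst case
        if a.blade != b.blade:
            return "traffic crosses the director backplane"
        return "traffic is switched locally on the blade"    # best case

    def worst_case_isl_hops(daisy_chained_switches):
        """A chain of N switches puts its two ends N - 1 ISL hops apart."""
        return max(daisy_chained_switches - 1, 0)

    server = FabricPort("core-1", blade=2)
    array = FabricPort("core-1", blade=5)
    print(locality(server, array))   # traffic crosses the director backplane
    print(worst_case_isl_hops(5))    # 4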

Hence, the design strategy for large and very large SANs is to limit the number of switch hops by tiering the SAN: dedicated server switches (the server tier) connect back to core switches (the core tier). While storage arrays and tape drives connect directly to the core switches in a two-tier architecture, a three-tier architecture introduces a dedicated switch tier for storage arrays and tape drives. The benefits of a tiered SAN architecture are scalability, simplicity and predictable performance (see the sidebar "SAN architecture choices").
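
A quick way to reason about a two-tier design is the oversubscription (fan-in) ratio of each edge switch: how many server-facing ports share the ISL uplinks to the core. The sketch below illustrates the arithmetic; the port and uplink counts are assumptions chosen for the example, not recommendations.

    # Back-of-the-envelope fan-in for a two-tier (edge/core) design. The port
    # and uplink counts are illustrative assumptions.
    def edge_oversubscription(edge_ports, uplinks_to_core):
        """Ratio of server-facing edge ports to ISL uplinks (lower is better)."""
        server_facing = edge_ports - uplinks_to_core
        return server_facing / uplinks_to_core

    # A 32-port edge switch with 4 ISLs to the core has a 7:1 fan-in, so the
    # ISLs -- not the server links -- set the performance ceiling.
    print(edge_oversubscription(32, 4))   # 7.0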

This was first published in July 2006
