Not just a big switch

Fibre Channel directors don't just provide lots of ports, they also offer ways to connect disparate SANs, isolate data and devices within a fabric, and configure throughput for specific applications. We look at how the big three directors match up.

NO LONGER JUST A BIG BOX with lots of ports, the Fibre Channel (FC) director has become the cornerstone around which next-generation SANs will be built. As more organizations are faced with managing petabytes of storage, director-class switches are easing management tasks by isolating SANs within a single fabric, delivering a higher level of data protection, and parceling throughput to individual ports depending on changing application demands.

Of course, some things never change: First and foremost, companies look to directors to provide rock-solid stability with high levels of availability, throughput and port count. In this vein, the passive backplanes that are used in Brocade Communications Systems Inc.'s SilkWorm 48000, Cisco Systems Inc.'s MDS 9509 and McData Corp.'s Intrepid 10000 (i10K) nearly eliminate the possibility of failures. Each of these models also supports at least 1Tb/sec of internal bandwidth in a single chassis and 384 FC ports in a single rack; Brocade and Cisco offer configurations that support up to 768 FC ports in a rack. But as some vendors pack more ports into their line cards to meet growing user capacity demands, they're using port oversubscription to do so.

Port oversubscription occurs when the amount of internal switching fabric bandwidth allocated to a switch port is less than the device connection speed at that port. For example, if a port on an FC switch has a connection speed of 2Gb/sec, but is unable to achieve a wire-rate of 2Gb/sec, then the port is said to be oversubscribed. As a result, administrators need to plan how and under what circumstances to deploy these high port-count line cards.
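
To make the tradeoff concrete, here's a minimal Python sketch of an oversubscription ratio. The 32-port, 4Gb/sec line card and the 64Gb/sec of slot bandwidth are hypothetical figures used only for illustration, not any vendor's published specs.

```python
# Minimal sketch: computing a port oversubscription ratio for a line card.
# The figures below are hypothetical, not taken from any vendor's data sheet.

def oversubscription_ratio(ports: int, port_speed_gbps: float,
                           slot_bandwidth_gbps: float) -> float:
    """Ratio of total port bandwidth to the backplane bandwidth the slot gets.

    A ratio of 1.0 or less means every port can run at full wire rate
    simultaneously; anything above 1.0 means the ports are oversubscribed.
    """
    return (ports * port_speed_gbps) / slot_bandwidth_gbps

# Hypothetical 32-port, 4Gb/sec line card in a slot allocated 64Gb/sec
# of internal fabric bandwidth.
ratio = oversubscription_ratio(ports=32, port_speed_gbps=4.0,
                               slot_bandwidth_gbps=64.0)
print(f"Oversubscription ratio: {ratio:.1f}:1")   # 2.0:1 -> oversubscribed
```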

Core components
With core components such as passive backplanes, concurrent microcode upgrades, purpose-built ASICs and redundant hardware components essentially equal among FC directors, vendors are finding other ways to differentiate their products. And with the growing need for higher port counts, distance replication and connecting SAN islands, vendors are adding functionality to FC directors in the following key areas:

  • Line cards
  • 1Gb/sec, 2Gb/sec, 4Gb/sec and 10Gb/sec FC ports
  • FC port buffer credits
  • Inter-switch link (ISL) aggregation and connectivity options
Vendors offer FC director line cards that let users configure directors for a variety of port speeds and counts. For instance, Brocade's SilkWorm 48000 offers three line cards that differ in port count and port speed. The FC4-16 and FC2-16 line cards each provide 16 FC ports, with the FC2-16 running at 2Gb/sec and the FC4-16 at 4Gb/sec. To reach the maximum 768-port count on the SilkWorm 48000, users need Brocade's FC4-32 line cards.

There are tradeoffs when maximizing port capacity and using faster FC speeds. McData's i10K supports 10Gb/sec FC ports, but they can only be connected to other i10K directors with the same 10Gb/sec ports because 10Gb/sec FC is based on a different technology than the 1Gb/sec, 2Gb/sec and 4Gb/sec speeds.

Users of Brocade's SilkWorm 48000 will encounter similar issues. A SilkWorm 48000 fully populated with FC4-16 line cards is the only configuration in which all of its FC ports can operate at 4Gb/sec without any blocking of bandwidth. Brocade's FC4-32 line cards allow scaling up to the 48000's maximum port count, but that configuration can only operate at a maximum of 2Gb/sec without blocking.

Despite these concerns, lower per-port prices are driving the move to line cards with higher port counts. Vendors report that users should generally expect to pay a 10% to 25% premium for line cards that support a higher number of ports. And despite the potential for back-end bottlenecks with the higher port-count cards, most users aren't at risk, say vendors, because few production environments are reaching throughput limits on their FC directors. Cisco recently checked the utilization rates in its own production environment and found that most of its FC ports were averaging only 13MB/sec, which prompted Cisco to see if it could lower its own internal costs by increasing the number of ports on its blades.
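
Some quick arithmetic shows why 13MB/sec leaves so much headroom. The sketch below assumes a 2Gb/sec port and FC's 8b/10b encoding overhead, which works out to roughly 200MB/sec of usable payload per direction; the port speed and encoding assumption are ours, not Cisco's.

```python
# Minimal sketch: why low average utilization makes oversubscription tolerable.
# 13MB/sec is the average cited above for Cisco's own environment; the
# 2Gb/sec port speed and 8b/10b encoding overhead are our assumptions.

def utilization(avg_mb_per_sec: float, port_speed_gbps: float) -> float:
    # 8b/10b encoding means a 2Gb/sec link carries roughly
    # 2 * 0.8 / 8 = 0.2GB/sec (~200MB/sec) of payload per direction.
    usable_mb_per_sec = port_speed_gbps * 0.8 / 8 * 1000
    return avg_mb_per_sec / usable_mb_per_sec

print(f"{utilization(13, 2.0):.1%}")   # ~6.5% of a 2Gb/sec port
```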

To find the right balance between low and high port-count line cards, you need to identify the specific configurations and applications that require high-throughput FC ports. Applications such as backup/recovery and data replication, as well as FC ports dedicated to ISLs, require high port throughput. By taking advantage of the port buffer credit and ISL aggregation features on the director ports, and by balancing which application or configuration uses which FC ports, you may not need to purchase lower port-count line cards at all.

Distance replication
The primary benefit of port buffer credits is to keep data flowing across distances. The size of the buffer credit needed on each FC port will depend on four factors:

  • The amount of data going through the port
  • The speed of the port
  • The distance between the FC ports
  • Whether the WAN gateway devices used provide additional buffering
Default port buffer settings on most directors work fine without adjustment. Defaults range from eight buffer credits on Brocade's SilkWorm 48000 to 16 on McData's i10K, which is enough for most locally attached AIX, Hewlett-Packard (HP) Co., Sun Microsystems Inc. and Windows servers, and most storage arrays. When FC ports are used for distance replication, however, more buffer credits are generally required.

For distance replication, vendors generally recommend approximately one port buffer credit for every kilometer over a 1Gb/sec link. In most situations, you'll only need to devote a few FC ports to long-distance replication, with the rest reserved for local connectivity. To provide as much flexibility as possible, vendors offer choices for how buffer credits can be configured and reallocated among ports.
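
As a rough planning aid, that rule of thumb can be expressed as a small calculation. The linear scaling with link speed is our extrapolation of the 1Gb/sec guideline; actual requirements also depend on average frame size and on any buffering provided by the WAN gateway.

```python
# Minimal sketch of the rule of thumb above: roughly one buffer credit per
# kilometer on a 1Gb/sec link, scaled linearly with link speed (our
# extrapolation). Treat the result as a starting point, not a hard number.
import math

def buffer_credits(distance_km: float, link_speed_gbps: float) -> int:
    credits_per_km_at_1g = 1.0          # the article's rule of thumb
    return math.ceil(distance_km * credits_per_km_at_1g * link_speed_gbps)

print(buffer_credits(100, 1.0))   # 100 credits for 100km at 1Gb/sec
print(buffer_credits(100, 4.0))   # 400 credits for 100km at 4Gb/sec
```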

The ability to allocate buffer credits to FC ports lets users install higher port-count line cards and still meet both high-throughput replication and normal FC connectivity requirements. For example, each line card on McData's i10K has a pool of 2,746 buffer credits to draw from, with 1,373 buffer credits available per line-card processor. Those 1,373 buffer credits may be redistributed among any of the FC ports associated with that processor. So when data needs to be replicated to a site 190km away, McData recommends assigning 1,125 buffer credits to each 10Gb/sec link. Using the i10K line card that offers 24 2Gb/sec FC ports and two 10Gb/sec FC ports, each of the two 10Gb/sec ports can be allocated 1,125 buffer credits from its processor's pool; that leaves 248 buffer credits per processor, or roughly 20 for each of its remaining 2Gb/sec FC ports, which is more than enough for local FC connectivity.
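
The arithmetic behind that allocation is simple enough to sketch. The credit counts below come from McData's figures above; the assumption that each of the card's two processors handles 12 of the 24 2Gb/sec ports plus one 10Gb/sec port is ours.

```python
# Minimal sketch of the i10K allocation described above. The split of the
# 24 2Gb/sec ports across the two line-card processors (12 each) is our
# assumption; the buffer credit counts come from the text.
POOL_PER_PROCESSOR = 1_373          # buffer credits per line-card processor
TEN_GIG_ALLOCATION = 1_125          # credits assigned to the 10Gb/sec ISL
TWO_GIG_PORTS_PER_PROCESSOR = 12    # assumed half of the card's 24 ports

remaining = POOL_PER_PROCESSOR - TEN_GIG_ALLOCATION
per_port = remaining // TWO_GIG_PORTS_PER_PROCESSOR

print(remaining)   # 248 credits left on the processor
print(per_port)    # ~20 credits per remaining 2Gb/sec port
```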

Cisco is the only director vendor that allows users to increase the number of buffer credits, up to 3,500, with an optional license. The only other way to attain sufficient buffering for distance replication is to use a gateway device. Gateway appliances such as Ciena Corp.'s CN 2000 or Nortel Networks' Optical Metro 5000 series (formerly OPTera Metro 5000) provide additional buffering that lets directors do long-distance replication over FC when they're connected to SONET or dense wavelength division multiplexing (DWDM) networks.

ISL aggregation
ISLs connect switches within and between SAN fabrics, and they can be aggregated to increase throughput. Vendors offer different techniques beyond standard route balancing via fabric shortest path first (FSPF) to improve ISL throughput, including:

  • Open trunking (McData)
  • Frame-based striping or advanced ISL trunking (Brocade)
  • Load balancing based on source, destination and exchange IDs (Brocade/Cisco)
McData's optional open-trunking technology on its 6140s provides automatic, dynamic and statistical load balancing across ISLs in a fabric. The feature monitors the FSPF routing database and, when it detects a congested link, adjusts the database so traffic is rerouted to a less-trafficked link. Because McData allows 1Gb/sec, 2Gb/sec and 10Gb/sec ISLs to be aggregated between its directors, a 1Gb/sec link won't become overloaded simply because it's the shortest path between two ports. And if a 10Gb/sec ISL starts to fill up, users can add any free 1Gb/sec or 2Gb/sec FC port on any line card to the aggregated ISL without consuming another 10Gb/sec FC port.

Brocade offers two techniques that optimize throughput on ISLs between two of its FC directors. Its frame-based striping, or advanced ISL trunking, treats all ISLs between two FC directors as one logical ISL, allowing users to configure up to eight FC ports as one logical trunk between two SilkWorm 48000s. To overcome Fibre Channel's intolerance of out-of-order frames, Brocade uses special ASICs on its line cards that let frames arrive out of order and then restore them to their original sequence. The primary drawback to this approach is that all of the trunked FC ports must be on a single line card, which raises availability concerns should an entire line card fail or need to be taken offline for maintenance.

The other technique, load balancing based on source, destination and exchange IDs, is supported by both Brocade and Cisco, but is implemented in a manner that doesn't support interoperability between their directors. On Brocade's SilkWorm 48000, it's offered only on line cards that handle the 4Gb/sec protocol, while Cisco supports it on all of its 1Gb/sec and 2Gb/sec line cards.

The main difference between the two is that Cisco lets users aggregate any 16 ports on any line card in a chassis to form this logical ISL, while Brocade supports up to eight FC ports that must be on a single line card. Both configurations support a maximum throughput of 32Gb/sec, with Cisco using 2Gb/sec links and Brocade using 4Gb/sec links.
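
The throughput math, and the basic idea behind exchange-based balancing, can be sketched as follows. The member counts and speeds come from the comparison above; the hash function is purely illustrative and isn't either vendor's actual algorithm.

```python
# Minimal sketch comparing the two aggregation approaches above, plus a toy
# illustration of source/destination/exchange-ID load balancing.

def aggregate_gbps(members: int, member_speed_gbps: float) -> float:
    """Total throughput of a logical ISL built from identical member links."""
    return members * member_speed_gbps

print(aggregate_gbps(16, 2.0))   # Cisco: 16 x 2Gb/sec  = 32Gb/sec
print(aggregate_gbps(8, 4.0))    # Brocade: 8 x 4Gb/sec = 32Gb/sec

def pick_isl(source_id: int, dest_id: int, exchange_id: int, members: int) -> int:
    """Toy hash of source, destination and exchange IDs onto one member link.

    All frames of a given exchange hash to the same member, so they arrive
    in order without any re-sequencing hardware.
    """
    return hash((source_id, dest_id, exchange_id)) % members

print(pick_isl(0x010203, 0x040506, 0x1234, members=16))
```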

As organizations connect independent and often rogue SANs through WAN links and ISLs, new challenges arise. On the positive side, unused FC ports and additional storage capacity can often be harnessed in ways not previously considered. On the flip side, these connections bring the often unmanaged chaos of smaller SANs into the central data center: out-of-date microcode, lack of central supervision, little or no change control, personnel with varying degrees of expertise, and departments with their own ideas about how the SAN should be managed. These issues require a new set of services from the SAN fabric, and FC directors are where many organizations are initially turning for answers.

SAN isolation
FC director vendors offer a number of techniques to mitigate those risks by isolating SANs logically while connecting them physically:

  • Cisco's Virtual SANs (VSANs) and InterVSAN Routing
  • Brocade's Logical SANs (LSANs)
  • McData's hard partitioning
Cisco delivers VSAN and InterVSAN routing capabilities as part of the MDS 9506 and 9509 core SAN-OS; these technologies differ in at least three ways from other vendors' implementations.
  1. VSANs may be configured using any combination of ports on any line card in one of Cisco's directors. Because SAN growth and consolidation are more often haphazard than well planned, VSANs give users the flexibility to plug their servers and storage devices into whatever FC ports are available. Once connected, users can design their VSANs around the ports they're plugged into rather than re-architecting port connectivity on the director every time the environment is reorganized.


  2. Each VSAN can be set up with its own administrator. As SANs merge, internal and external policies and politics may dictate that certain administrators retain the right to control and manage their segment of the SAN.


  3. InterVSAN routing allows specific devices in one VSAN to be exposed to and used by devices in another VSAN. For example, if the administrator of the finance VSAN needs additional storage capacity and the engineering VSAN administrator has some capacity available, finance can access it without compromising the integrity of either VSAN. It's done by sharing specific ports via InterVSAN routing without allowing access to every device in the VSAN (see the sketch after this list).


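Here's a minimal sketch of that finance/engineering scenario: two isolated VSANs, with an InterVSAN routing rule that exposes just one engineering array to finance. The device names are invented for illustration.

```python
# Minimal sketch of the finance/engineering example above: two isolated
# VSANs, with inter-VSAN routing exposing only one engineering array to
# finance. Device names are invented for illustration.

vsans = {
    "finance":     {"finance_host", "finance_array"},
    "engineering": {"eng_host", "eng_array", "spare_array"},
}

# InterVSAN routing rule: expose only 'spare_array' from engineering
# into the finance VSAN; everything else stays isolated.
ivr_exports = {("engineering", "spare_array"): {"finance"}}

def visible_devices(vsan: str) -> set[str]:
    """Devices a member of this VSAN can reach, including IVR exports."""
    devices = set(vsans[vsan])
    for (_src_vsan, device), targets in ivr_exports.items():
        if vsan in targets:
            devices.add(device)
    return devices

print(sorted(visible_devices("finance")))      # finance devices + spare_array
print(sorted(visible_devices("engineering")))  # unchanged
```
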
For users who want to share resources among different SANs, but are reluctant to do a forklift upgrade or introduce a new FC vendor, Cisco is working to eliminate interoperability issues with the other two primary FC director vendors. Its MDS 9506 and 9509 FC directors support Brocade in native mode, and Cisco says it's working on supporting McData in native mode as well. With this approach, the only major features that will be lost are Brocade's and McData's advanced ISL aggregation features. This will only become a major issue when performance between the two different vendors' SANs is a significant concern.

Brocade's LSAN technology is similar to Cisco's VSAN and InterVSAN routing features, but Brocade takes a modular instead of a bladed approach using its SilkWorm AP7420 Multiprotocol Router to deliver this functionality. Like the VSAN and InterVSAN routing technology, the AP7420 logically isolates SANs while giving users the ability to share specific devices in one SAN with other logical SANs. The best fits for organizations considering Brocade's AP7420 are:

  • Users who plan to continue using either departmental Brocade or McData switches, but who need to access or share specific resources between those different SANs.


  • Companies that want to start isolating SANs with an appliance, but want to grow to a bladed FC director. The AP7420 appliance allows enterprises to isolate SANs inexpensively and to migrate to Brocade's upcoming intelligent blade for its SilkWorm 48000.


McData uses a couple of different methods to tackle SAN isolation. To connect isolated FC and iSCSI SAN fabrics to one central fabric, it takes a modular approach with its Eclipse 3300 and 4300 multiprotocol SAN routers. However, the Eclipse products support only 802.1Q Virtual LAN (VLAN) and metro SAN (mSAN) capabilities. While this allows SAN islands to be connected into a central SAN fabric, it lacks the ability to expose specific SAN ports, such as those from backup servers or virtual tape libraries, to certain devices on the other SAN.

McData's i10K is the only director that supports hard partitioning. Unlike logical SAN isolation techniques, hard partitioning is done at a physical rather than a software level on the director. Configurable on a per-line-card basis, each line card may be its own virtual SAN, or multiple i10K line cards may be combined into a larger virtual SAN. Each of these partitions can run a different version of microcode, which allows a storage administrator to test new microcode on a partition before rolling it into production. However, the i10K suffers from the same fundamental flaw as the Eclipse products: It doesn't allow partitions to share selected device ports and route traffic between them.
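
Conceptually, hard partitioning amounts to carving the chassis up by whole line cards, with each partition able to run its own microcode level. The slot assignments and version strings in the sketch below are invented for illustration.

```python
# Minimal sketch of hard partitioning as described above: partitions are
# built from whole line cards, and each partition can run its own microcode
# level. Slot numbers and version strings are invented for illustration.

partitions = {
    "production": {"line_cards": [1, 2, 3, 4], "microcode": "09.01.00"},
    "test":       {"line_cards": [5],          "microcode": "09.02.00-beta"},
}

def validate(partitions: dict) -> None:
    """A line card can belong to exactly one hard partition."""
    seen = set()
    for name, cfg in partitions.items():
        overlap = seen & set(cfg["line_cards"])
        if overlap:
            raise ValueError(f"{name}: line cards {overlap} already assigned")
        seen |= set(cfg["line_cards"])

validate(partitions)
for name, cfg in partitions.items():
    print(name, cfg["line_cards"], cfg["microcode"])
```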

Virtualization
Users aren't just looking to FC directors to isolate SAN fabrics; they also want them to virtualize storage and ports. After years of stops and starts, virtualization technologies are finally gaining some momentum and moving to FC directors.

So far, Cisco's MDS 9500 series and Maxxan Systems Inc.'s MXV500 are the only FC directors that support storage virtualization. Cisco's Cache Services Module (CSM) line card and Maxxan's MXV500 are similar in that they support virtualization software based on network caching. Cisco supports the director blade version of IBM Corp.'s SAN Volume Controller and Maxxan supports FalconStor Software Inc.'s IPStor. Cisco's other line card, the Storage Services Module (SSM), supports virtualization applications that don't use network cache like EMC Corp.'s Invista and Incipient Inc.'s Network Storage Platform (NSP).

Even though the SSM supports virtualization, it comes configured with 32 1Gb/sec and 2Gb/sec FC ports and can operate with or without virtualization enabled. Cisco prices it only slightly above the cost of its normal 32-port line card. The SSM also supports Cisco's SANTap protocol, which lets products from Kashya Inc. and Topio Inc. copy FC writes as packets pass through the director. This enables applications like data replication and continuous data protection at the fabric level without deploying host server agents or using array-based tools.
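
Conceptually, that fabric-level write splitting works like the sketch below: the director forwards each write along its normal path and hands a copy to the replication appliance. This is a toy model of the idea, not Cisco's SANTap interface.

```python
# Toy sketch of fabric-level write splitting along the lines described above:
# the director forwards each write to its original target and hands a copy
# to a replication appliance. A conceptual model, not a real protocol API.

def split_write(frame: dict, forward_to_target, copy_to_appliance) -> None:
    """Forward a write frame to its target and mirror it to an appliance."""
    forward_to_target(frame)       # primary I/O path is unchanged
    if frame.get("op") == "write":
        copy_to_appliance(frame)   # appliance builds its replica/CDP journal

target_log, appliance_log = [], []
split_write({"op": "write", "lun": 4, "lba": 1024, "data": b"\x00" * 512},
            target_log.append, appliance_log.append)
print(len(target_log), len(appliance_log))   # 1 1 -> both received the write
```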

Server virtualization technologies like Linux partitions and VMware are also driving a major FC director virtualization technology called N_Port ID Virtualization (NPIV), which all of the large director vendors offer on their latest directors. NPIV solves the problem created when multiple server partitions log into the SAN using the same physical host bus adapter (HBA) card. NPIV lets FC directors dole out a unique port ID to each server instance. For example, in a configuration where eight logical Windows or Linux instances reside on the same physical hardware, each logical server instance can log into the FC director and get its own fabric ID. This allows the fabric to control and route the traffic from each server instance to its allocated resources; because each instance has its own ID, it sees only the resources assigned to it rather than everything reserved for the shared physical HBA.
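
The sketch below models that flow: the physical HBA port logs into the fabric once, then each virtual server instance performs an additional login and receives its own N_Port ID. The WWPNs and the toy address allocator are invented for illustration.

```python
# Minimal sketch of the NPIV idea above: one physical HBA port performs a
# normal fabric login, then each virtual server instance performs an
# additional login and receives its own N_Port ID. WWPNs are invented.
import itertools

_next_id = itertools.count(0x010001)  # toy fabric address allocator

def fabric_login(wwpn: str) -> int:
    """Return a unique 24-bit N_Port ID for this login."""
    return next(_next_id)

physical_hba = "50:06:01:60:aa:bb:cc:01"
vm_wwpns = [f"50:06:01:60:aa:bb:cc:{n:02x}" for n in range(2, 10)]  # 8 VMs

base_id = fabric_login(physical_hba)                        # physical port login
vm_ids = {wwpn: fabric_login(wwpn) for wwpn in vm_wwpns}    # one login per VM

# The fabric can now zone and route each VM's traffic by its own ID,
# instead of every VM sharing the physical HBA's single identity.
for wwpn, nport_id in vm_ids.items():
    print(f"{wwpn} -> {nport_id:#08x}")
```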

FC directors are solidifying their position in the enterprise. Higher port-count line cards with configurable buffer credits and different options for aggregating ISLs allow directors to connect the lowest tiers of storage to the enterprise, and to connect remote and local SANs over high-throughput FC links. And with technologies like InterVSAN routing and NPIV maturing, FC directors are well positioned to meet today's, and tomorrow's, challenges.

This was first published in February 2006