The Brocade DCX Backbone is based on a shared memory architecture where data moves from switching ASIC to switching ASIC along multiple internal ISLs that make up the path from an ingress port to an egress port. To load balance between these inter-ASIC links within the switch, the DCX Backbone relies on either exchange- or port-based routing. "Besides fewer components on blades, which reduces the likelihood of failure, in a shared memory architecture ASICs on the core switching blades talk to ASICs on port blades using the same protocol, minimizing protocol overhead," Dunmire explained.
In comparison, the Cisco MDS 9500 leverages a crossbar architecture where frame forwarding is performed directly in ASICs on the line cards. The crossbar manages frame forwarding, and a central arbiter ensures fairness and prioritization. While the MDS 9506 and MDS 9509 provide the fabric switching module and central arbiter on the supervisor blade, the MDS 9513 uses a separate pair of switching modules located in the back of the MDS 9513 chassis. "Unlike a shared memory architecture where traffic moves across internal switching ASICs along varying paths, resulting in varying latencies, in a crossbar architecture the latency between ports is consistent across all ports within the switch," said Omar Sultan, solution manager, data center switching, data center solutions at Cisco.
Even though each vendor claims its architecture is superior, there are noticeable differences between the two platforms. The DCX Backbone supports local switching, which allows traffic between ports on the same blade to be switched directly instead of passing through the core switching module; this means lower latency for devices connected to the same blade and improved scalability, since less traffic has to cross the core switching blades. Although Cisco disputes the benefit of local switching, pointing to the larger latency variances it introduces, support for local switching in Cisco's latest Nexus platform suggests that its absence in the MDS 9500 is a disadvantage.
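The effect of local switching can be sketched as a simple path decision: a frame between two ports on the same blade never touches the core. The microsecond figures below are illustrative placeholders, not vendor specifications.

```python
def frame_latency_us(ingress_blade: int, egress_blade: int,
                     local_us: float = 0.8, core_us: float = 2.4) -> float:
    """Illustrative port-to-port latency in a director with local switching.

    If both ports sit on the same blade, the frame is switched locally on
    the blade's ASIC; otherwise it traverses the core switching blades.
    The latency values are placeholders chosen only to show the contrast.
    """
    if ingress_blade == egress_blade:
        return local_us   # local switching: frame stays on the blade
    return core_us        # frame crosses the core switching blades

# Devices on the same blade see lower latency than blade-to-blade traffic.
same_blade = frame_latency_us(3, 3)
cross_blade = frame_latency_us(3, 7)
```

This also shows the flip side Cisco points to: latency now varies depending on where two devices happen to be cabled, whereas a crossbar gives every port pair the same latency.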
In addition to reliability, performance and throughput are the most relevant attributes of a director platform. The Brocade DCX Backbone currently wins the raw throughput comparison with 256 Gbps of throughput per slot vs. 96 Gbps for the Cisco MDS 9500. When combined with local switching, it can concurrently operate more ports at full 8 Gbps utilization than the MDS 9500, as verified by a February 2009 Miercom lab test (Report 090115B). As a result, the MDS 9500 depends more heavily on oversubscription than the DCX Backbone. In practice, however, not all ports operate at the full 8 Gbps rate, and the use of oversubscription combined with traffic prioritization and QoS makes the throughput difference less significant. In the past, increases in port and chassis throughput benefited mostly ISLs and, to a lesser degree, servers; but the proliferation of virtual server environments now makes bandwidth capacity more relevant. "Server virtualization is a game changer, making oversubscription more problematic because physical servers running many virtual machines are more likely to fully utilize a SAN link," Gartner's Passmore said. Cisco confirmed that it's working on a next-generation switch fabric module that will match the DCX's 256 Gbps slot throughput; existing customers will be able to upgrade by simply replacing the existing switch fabric module. "Replacing the switch fabric module costs an order of magnitude less than a forklift upgrade," noted Bill Marozas, business development manager, Cisco Data Center Solutions.
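The throughput gap can be made concrete with a quick oversubscription calculation. The 96 Gbps and 256 Gbps per-slot figures come from the comparison above; the 48-port 8 Gbps blade is a hypothetical configuration used only for illustration.

```python
def oversubscription_ratio(ports: int, port_speed_gbps: float,
                           slot_bandwidth_gbps: float) -> float:
    """Ratio of aggregate front-panel port demand to available slot bandwidth.

    A ratio of 1.0 means every port can run at line rate simultaneously;
    higher values mean the slot is oversubscribed by that factor.
    """
    aggregate_demand = ports * port_speed_gbps
    return aggregate_demand / slot_bandwidth_gbps

# Hypothetical 48-port blade fully populated with 8 Gbps ports:
mds_ratio = oversubscription_ratio(48, 8, 96)    # 384 / 96  = 4.0  (4:1)
dcx_ratio = oversubscription_ratio(48, 8, 256)   # 384 / 256 = 1.5  (1.5:1)
```

Under this assumed configuration, the 96 Gbps slot is 4:1 oversubscribed while the 256 Gbps slot is only 1.5:1, which is why heavily utilized links (ISLs, virtualized servers) feel the difference first.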
Despite each vendor's claim that its platform requires less SAN architecting, each director platform has idiosyncrasies a SAN designer needs to take into account to ensure optimal performance. With the MDS 9500, the design effort will likely center on managing oversubscription and traffic prioritization. With the DCX Backbone, SAN architects must account for latency variances between different ports within the same chassis, as well as its use of port- and exchange-based routing to load balance inter-ASIC links. While both Brocade and Cisco support port- and exchange-based routing over external ISL links, Brocade's use of these protocols inside the switch has been somewhat controversial. Customers must choose one of the two routing modes. Despite Brocade's objections, benchmarks like the December 2008 Miercom report (Report 081215B) have shown slower performance when the switch runs port-based routing instead of the default exchange-based routing, and some array vendors advise their customers to avoid the DCX's default exchange-based routing for some of their arrays.
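A minimal sketch of the trade-off between the two routing modes, assuming a simple hash-based link selection (this is not Brocade's actual algorithm): exchange-based routing folds the Fibre Channel exchange ID (OX_ID) into the path decision, so different I/O exchanges between the same device pair can spread across links, while port-based routing pins a source/destination pair to one link, preserving frame order at the cost of balance.

```python
def pick_link(s_id: int, d_id: int, ox_id: int, n_links: int,
              exchange_based: bool = True) -> int:
    """Choose one of n_links for a frame (illustrative sketch only).

    Exchange-based: hash over (source, destination, exchange), so separate
    exchanges between the same pair may use different links.
    Port-based: hash over (source, destination) only, so all traffic for a
    pair stays on one link and in-order delivery is trivially preserved.
    """
    key = (s_id, d_id, ox_id) if exchange_based else (s_id, d_id)
    return hash(key) % n_links

# Port-based routing ignores the exchange ID entirely:
link_a = pick_link(0x010100, 0x020200, 17, 4, exchange_based=False)
link_b = pick_link(0x010100, 0x020200, 99, 4, exchange_based=False)
# link_a == link_b: same pair, same link, regardless of exchange
```

This is exactly the tension HP's guidance below reflects: exchange-based routing balances better, but only port-based routing guarantees ordering across exchanges.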
"HP does not typically make specific recommendations regarding switch routing, but we recommend using port-based routing with the StorageWorks Continuous Access EVA solution since exchange-based routing doesn't guarantee in-order frame delivery all the time across exchanges," said Kyle Fitze, marketing director for the StorageWorks Storage Platforms Division at Hewlett-Packard (HP) Co. Conversely, EMC and NetApp confirmed that all of their arrays work flawlessly using the DCX default exchange-based routing mode.
This was first published in June 2009