Latency, throughput and oversubscription comparison
The Brocade DCX Backbone and Cisco MDS 9500 Series are both chassis-based, scalable platforms that present no single point of failure, and both support all relevant storage networking protocols.
Cisco offers three MDS 9500 models. The MDS 9513, supporting up to 528 Fibre Channel ports in a single chassis, is targeted at enterprise networks; for smaller networks, Cisco offers the nine-slot MDS 9509 and the six-slot MDS 9506.
While both the Cisco and Brocade platforms can be used to power mission-critical storage-area networks (SANs) with comparable results and user experience, there are noticeable differences between the two.
The DCX Backbone supports local switching, which results in lower latency for devices connected to the same blade and improved scalability by reducing the amount of traffic that has to pass through the core switching blades.
Although Cisco disputes the value of local switching, arguing that it introduces greater latency variance, the company's support for local switching in its newer Nexus platform suggests that its absence in the MDS 9500 is a disadvantage.
In addition to reliability, performance and throughput are the most relevant attributes of a director platform. The Brocade DCX Backbone currently wins the raw throughput comparison with 256 Gbps throughput per slot vs. 96 Gbps for the Cisco MDS 9500.
Despite vendor claims that their platforms reduce the SAN design effort required, each director has idiosyncrasies a SAN designer needs to consider. With the MDS 9500, the design effort will likely center on managing oversubscription and traffic prioritization. Conversely, the DCX Backbone requires SAN architects to account for latency variances between different ports within the same chassis.
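To make the oversubscription trade-off concrete, the ratio is simply the aggregate bandwidth demanded by a blade's ports divided by the bandwidth the chassis provides to that slot. The sketch below uses hypothetical but era-typical numbers (a 48-port 8 Gbps blade) against the per-slot figures cited above; the blade configuration is an illustrative assumption, not a vendor specification.

```python
# Illustrative sketch: oversubscription ratio for a director line card.
# The 48-port, 8 Gbps blade is a hypothetical example configuration.

def oversubscription_ratio(ports: int, port_speed_gbps: float,
                           slot_bandwidth_gbps: float) -> float:
    """Ratio of aggregate port demand to available slot bandwidth."""
    return (ports * port_speed_gbps) / slot_bandwidth_gbps

# A 48-port 8 Gbps blade demands 384 Gbps. On a slot with 96 Gbps of
# backplane bandwidth (the MDS 9500 figure cited above):
print(oversubscription_ratio(48, 8, 96))   # 4.0, i.e. 4:1 oversubscribed

# The same blade on a 256 Gbps slot (the DCX Backbone figure):
print(oversubscription_ratio(48, 8, 256))  # 1.5, i.e. 1.5:1
```

A ratio above 1.0 means the ports can collectively demand more bandwidth than the slot can deliver, which is when traffic prioritization starts to matter.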
Fibre Channel over Ethernet and CEE/DCB support
Compelled by the prevalence of Ethernet and its enhancements, and by the success and simplicity of iSCSI, Brocade and Cisco have embarked on bringing Ethernet into the well-guarded FC domain via Fibre Channel over Ethernet (FCoE). FCoE uses Converged Enhanced Ethernet (CEE), now known as Data Center Bridging (DCB) -- Cisco formerly called it Data Center Ethernet (DCE) -- as the physical network transport to deliver Fibre Channel payloads. Unlike conventional Ethernet, however, the transport is lossless, and FCoE appears as native Fibre Channel to the operating system and applications. Unlike iSCSI, FCoE is not routable; it's designed as a low-latency, high-performance Layer 2 data center protocol.
Both Brocade and Cisco are committed to FCoE, but each one has its own FCoE strategy. Brocade now supports pre-standard FCoE and CEE/DCB in its top-of-rack Brocade 8000 switch, which began shipping in June, and in the FCoE 10-24 blade switch, which fits into its DCX Backbone and began shipping in August. Older Brocade Fibre Channel products, such as the 48000 Director, will connect into the Fibre Channel ports of the DCX Backbone or the top-of-rack Brocade 8000 switch.
With the Nexus 5000 Series top-of-rack switch, Cisco was the first vendor to offer a pre-standard FCoE product. For the MDS 9500 director family and Nexus 7000 Series switches, CEE/DCB and FCoE support won't be available until standard ratification, similar to Brocade's plans.
Regardless of whose product you choose, both platforms will reliably power your SAN, as evidenced by the myriad storage-area networks currently running on Brocade and Cisco gear. Both vendors are embracing the converged Ethernet paradigm in their product roadmaps, but unless you're willing to debug early CEE/DCB flaws as an early adopter, you're well advised to wait at least another year until the standard and products have matured.
This article originally appeared in Storage magazine.
This was first published in January 2010