Plans for Fibre Channel over Ethernet (FCoE) storage started to gel more than three years ago. Many of the pieces that let IT shops run LAN and SAN traffic over a 10 Gigabit Ethernet (10 GbE) converged network are now in place, but not all of them: multi-hop, switch-to-switch FCoE involving a core switch isn't yet possible. FCoE prospects should improve toward the end of this year, though, as key standards inch closer to finalization.
In the meantime, the most common approach is to go with FCoE between the servers and top-of-rack switches that separate Ethernet LAN traffic from Fibre Channel (FC) SAN traffic. The SAN traffic continues via FC to the core switches and storage arrays. Only the access layer between the servers and switches is 10 GbE.
Far fewer IT shops have blazed the full FCoE trail from the front-end servers to the back-end disk arrays. NetApp Inc. claims that a small number of its customers run FCoE end-to-end from top-of-rack switches directly to its storage. But enterprise IT shops tend to prefer a more sophisticated, scalable architecture that includes one or more core switches, such as Brocade Communications Systems Inc.'s DCX Backbone or Cisco Systems Inc.'s MDS Multilayer Directors or Ethernet-based Nexus 7000, to aggregate their data storage traffic.
Last fall, Brocade began shipping an FCoE 10-24 blade switch for its DCX Backbone that enables customers to do end-to-end FCoE if used with NetApp storage. But Stuart Miniman, principal research contributor at The Wikibon Project, advises against using the switch in production until Brocade has a high-availability solution later this year. Meanwhile, Brocade rival Cisco has pledged FCoE support for its Nexus 7000 and MDS switches by year's end. Cisco currently supports FCoE in its Nexus 5000 top-of-rack switch and Nexus 4000 blade switch for x86-based blade servers.
The Nexus 4000 and Nexus 5000 are among the few switch pairs that can go FCoE-to-FCoE. The Nexus 4000 supports the FCoE Initialization Protocol (FIP) and, as a result, can connect to the Nexus 5000, an FC Forwarder (FCF). But standards that enable FCoE between multiple FCFs remain under development.
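The split between the data protocol and its control plane is visible on the wire: FIP and FCoE frames carry distinct registered Ethertypes (0x8914 and 0x8906, respectively). The sketch below is a minimal Python illustration of that distinction, not anything a switch actually runs; it assumes untagged Ethernet II framing, whereas real FCoE traffic normally carries an 802.1Q VLAN tag that shifts the Ethertype field.

```python
import struct

# Registered Ethertypes; FCoE and FIP values come from the FC-BB-5 work.
ETHERTYPE_FCOE = 0x8906  # encapsulated Fibre Channel frames
ETHERTYPE_FIP = 0x8914   # FCoE Initialization Protocol (control plane)
ETHERTYPE_IPV4 = 0x0800  # ordinary LAN traffic

def classify_frame(frame: bytes) -> str:
    """Classify an Ethernet II frame by its Ethertype field.

    The Ethertype sits at bytes 12-13, after the 6-byte destination
    and 6-byte source MAC addresses.
    """
    if len(frame) < 14:
        raise ValueError("truncated Ethernet header")
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    return {
        ETHERTYPE_FCOE: "FCoE",
        ETHERTYPE_FIP: "FIP",
        ETHERTYPE_IPV4: "IPv4",
    }.get(ethertype, f"other (0x{ethertype:04x})")

# Example: a minimal frame carrying the FIP Ethertype
frame = b"\x00" * 12 + b"\x89\x14" + b"\x00" * 48
print(classify_frame(frame))  # FIP
```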
The state of the FCoE standards is a concern for more than a few IT shops as they weigh the merits of the technology. The main standard allowing Fibre Channel to run over Ethernet, FC-BB-5, was approved a year ago and finally reached ANSI standardization this year. However, the Data Center Bridging (DCB) enhancements to Ethernet, which ensure that FCoE traffic can be transported without packet loss and that bandwidth can be shared effectively between LAN and SAN traffic, have yet to be completed.
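The bandwidth-sharing half of DCB, Enhanced Transmission Selection (ETS, IEEE 802.1Qaz), gives each traffic class a guaranteed share of the link while letting an idle class lend its unused bandwidth to busy ones. The Python sketch below models only that guiding idea, not the 802.1Qaz algorithm itself; the 50/50 split and the demand figures are illustrative assumptions.

```python
def ets_allocate(link_gbps, guarantees, demand):
    """Simplified sketch of ETS-style bandwidth sharing.

    Each traffic class is guaranteed a percentage of the link;
    bandwidth a class doesn't use is made available to the others.
    """
    alloc = {}
    leftover = 0.0
    for cls, pct in guarantees.items():
        guaranteed = link_gbps * pct / 100
        alloc[cls] = min(demand[cls], guaranteed)
        leftover += guaranteed - alloc[cls]
    # Hand unused bandwidth to classes with unmet demand
    for cls in alloc:
        extra = min(demand[cls] - alloc[cls], leftover)
        alloc[cls] += extra
        leftover -= extra
    return alloc

# 10 GbE link, 50/50 split; the SAN class is quiet, so LAN can burst
print(ets_allocate(10, {"LAN": 50, "SAN": 50}, {"LAN": 8, "SAN": 2}))
# {'LAN': 8.0, 'SAN': 2.0}
```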
"The position of the networking component vendors is that even though the standards are not finally signed off, they have been stable and debated to the point of stability for quite some time," said Robert Passmore, a research vice president at Gartner Inc. "Their belief is that, if there are last-minute changes, they'll be able to deal with them in firmware. Whether that's true or not, we'll know when it's all over."
Once all of the pieces for the converged network fall into place, the potential upside includes fewer cables and adapters, consolidated switch ports and lower power consumption.
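Those savings are easy to quantify in rough terms. The sketch below uses assumed per-server counts (two Ethernet NIC ports and two FC HBA ports replaced by two CNA ports, across a hypothetical 40-server deployment); actual configurations vary.

```python
# Back-of-the-envelope view of the consolidation upside. The counts
# are illustrative assumptions, not figures from any vendor.
servers = 40  # roughly one rack's worth

# Before: separate LAN and SAN adapters, cables and switch ports
legacy_ports_per_server = 2 + 2          # NIC ports + HBA ports
legacy_cables = servers * legacy_ports_per_server
legacy_switch_ports = legacy_cables      # one switch port per cable

# After: converged network adapters carry both traffic types
cna_ports_per_server = 2
converged_cables = servers * cna_ports_per_server
converged_switch_ports = converged_cables

print(f"cables: {legacy_cables} -> {converged_cables}")
print(f"switch ports: {legacy_switch_ports} -> {converged_switch_ports}")
# cables: 160 -> 80
# switch ports: 160 -> 80
```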
Those who want to get started with FCoE storage right now will need appropriate cabling, converged network adapters, FCoE-capable switches and FCoE-ready storage.
The main FCoE cable options are twinaxial copper, known as twinax, and OM2 and OM3 fiber optic. The black twinax needs less power and costs less than OM2 and OM3, but its distance limitations will likely restrict its use to server racks. For especially large or geographically dispersed data centers, some users may need to forgo multi-mode fiber optic cable in favor of single-mode fiber optic, which can traverse greater distances.
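Those distance limits are what confine twinax to the rack and push fiber everywhere else. The reach figures in the sketch below are approximate, drawn from typical SFP+ direct-attach and 10GBASE-SR/LR characteristics rather than from this article; vendor data sheets give the exact supported distances.

```python
# Typical maximum reach at 10 GbE (approximate, illustrative figures)
max_reach_m = {
    "twinax": 10,           # SFP+ direct-attach copper, in-rack runs
    "OM2": 82,              # multi-mode fiber with 10GBASE-SR optics
    "OM3": 300,             # laser-optimized multi-mode, 10GBASE-SR
    "single-mode": 10_000,  # 10GBASE-LR and longer-reach optics
}

def viable_cables(run_length_m):
    """Cable types that can cover a given run at 10 GbE."""
    return [c for c, reach in max_reach_m.items() if reach >= run_length_m]

print(viable_cables(3))    # inside the rack, everything works
print(viable_cables(150))  # across the data center, only fiber with reach
```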
Users also need SFP+ transceivers for FCoE, but their resellers or server or storage vendors will likely take care of the details once the choice of copper or optical cabling is made.
A move to FCoE will likely dovetail with the purchase of new servers, and the equation will factor in converged network adapters (CNAs). The CNAs combine the functionality of an Ethernet network interface card (Ethernet NIC) and Fibre Channel host bus adapters (Fibre Channel HBA), and reduce the number of adapters an IT shop needs to purchase.
Brocade, Emulex Corp. and QLogic Corp. initially delivered CNAs to their OEM partners and resellers in the traditional form factor that slides into a PCI Express (PCIe) slot. More recently, they've been working on CNAs for the server motherboard and CNAs embedded as mezzanine cards in blade servers. Hewlett-Packard Co. (HP), for instance, announced a partnership with Emulex to integrate Emulex's Universal CNA (UCNA) on server motherboards.
The price differential between CNAs for copper and for short-reach optical is significant. The manufacturer's suggested retail price (MSRP) for an Emulex dual-port OneConnect UCNA for FCoE is $1,775 with direct-attach copper and $2,695 with short-reach optical. QLogic's MSRP for its dual-port CNA is $2,795 for copper and $4,130 for short-reach optical.
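Using the MSRPs quoted above, the copper-versus-optical premium works out as follows. The percentages are simple arithmetic on the article's list prices, not vendor figures, and street prices will differ.

```python
# Dual-port CNA list prices as quoted in the article (MSRP, USD)
msrp = {
    ("Emulex", "copper"): 1775,
    ("Emulex", "optical"): 2695,
    ("QLogic", "copper"): 2795,
    ("QLogic", "optical"): 4130,
}

def optical_premium(vendor):
    """Percentage premium of short-reach optical over direct-attach copper."""
    copper = msrp[(vendor, "copper")]
    optical = msrp[(vendor, "optical")]
    return 100 * (optical - copper) / copper

for vendor in ("Emulex", "QLogic"):
    print(f"{vendor}: optical costs {optical_premium(vendor):.0f}% more")
# Emulex: optical costs 52% more
# QLogic: optical costs 48% more
```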
Users typically get their CNAs from resellers, or their enterprise server or storage vendors. EMC Corp., for instance, resells CNAs from Brocade, Emulex and QLogic, while NetApp offers CNAs from Brocade and QLogic.
Brocade and Cisco have supported FCoE in their fabric switches since last year, but uniform support in chassis-based, director-class switches and multi-hop capabilities are still missing.
Customers still can't run Fibre Channel over Ethernet from a top-of-rack switch to a core switch, or from one top-of-rack switch, such as Cisco's Nexus 5000 or Brocade's 8000, to another. Switch-to-switch FCoE communication is restricted mainly to links between switches embedded in blade servers and top-of-rack switches.
Until its Nexus 7000 and MDS add FCoE support, the Cisco FCoE switch products consist of the Nexus 5020 with 40 fixed 10 GbE ports and two expansion modules; the Nexus 5010 with 20 fixed 10 GbE ports and one expansion module; the Nexus 4000 switch for blade servers; and the Nexus 2232PP fabric extender.
The Brocade 8000 top-of-rack switch supplies 24 10 GbE ports and eight FC ports, while its FCoE 10-24 blade switch for the DCX Backbone has 24 enhanced 10 GbE ports. Servers can connect directly to the 10-24 blade switch, with the potential to go FCoE to NetApp storage.
In June, Hewlett-Packard announced a deal with QLogic for a new FCoE switch in its Virtual Connect FlexFabric 10 GbE/24-port module for its c-Class BladeSystem. The new offering, which is currently shipping, marks QLogic's entry into the FCoE switch market.
IT shops often purchase switches through their resellers or storage vendors. NetApp, for instance, resells Cisco and Brocade switches. EMC rebrands switches from Brocade and Cisco and sells them through its Connectrix product line.
NetApp was the first storage vendor to promote FCoE support, offering QLogic dual-port unified target adapters that plug into the PCIe slots of its high-end FAS6000 series; midrange FAS3100, FAS3040 and FAS3070 series; and low-end FAS2050. The vendor's V6000 and V3100 also support Fibre Channel over Ethernet.
NetApp's long-term goal is to offer built-in 10 GbE ports on its controllers, but the company declined to specify a time frame.
In the meantime, NetApp claims it can carry not only FCoE traffic but also iSCSI and NAS traffic through a single port via its unified target adapters. So far, the company has made iSCSI available via a product variance for proof-of-concept purposes only. NetApp has said it will add NAS support this year.
Beyond NetApp, data storage vendors haven't been in any mad rush to support FCoE. EMC, for instance, isn't due to add native FCoE support to its Clariion and Symmetrix arrays until later this year.
One potential drawback with Fibre Channel over Ethernet is the lack of management capabilities. Gartner's Passmore said the storage resource management (SRM) tools IT shops use to manage their Fibre Channel networks aren't able to monitor 10 GbE networks.
"The tools that normally show you the entire path, end to end, can't see the insides of the Ethernet network. It looks like a cloud," Passmore said. "And the tools that manage the Ethernet network aren't aware of storage. So, at least in the short term, the use of FCoE makes you blind to part of the path."