Managing and protecting all enterprise data


FCoE: Coming to a data center near you: Hot Spots

Fibre Channel over Ethernet (FCoE) has the potential to reduce data center complexity and make the world a little greener by decreasing the number of cards, cabling and network devices in the data center.

Fibre Channel over Ethernet is speeding along the certification path, and now is the time to determine what it can do for you.

By now, you've probably heard the hype surrounding Fibre Channel over Ethernet (FCoE). Why should you care? Because FCoE can simplify the data center and make it a little greener by cutting the number of cards, cables and network devices it contains. In some large organizations, the ability to shrink cable bundles could improve air flow and reduce cooling costs.

The Ethernet part of the protocol isn't just any Ethernet, but a special, still-to-be-ratified Data Center Ethernet (DCE). To make Ethernet suitable for Fibre Channel transport, the IEEE 802.1Q standard is being extended to accommodate data center traffic, improving priority traffic flow and allowing Ethernet to operate in a lossless manner (no dropped packets).
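Conceptually, FCoE simply wraps each native Fibre Channel frame inside an Ethernet frame tagged with the FCoE Ethertype (0x8906). The sketch below illustrates that layering only; it deliberately omits the FCoE version field and the start-of-frame/end-of-frame delimiters the real frame format carries, and the function name is purely illustrative.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned Ethertype for FCoE traffic

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame in an Ethernet frame (simplified sketch).

    Real FCoE frames also carry version bits, reserved fields and SOF/EOF
    delimiters; this shows only the basic layering idea.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

# A full-sized FC frame (24-byte header + up to 2,112-byte payload) exceeds
# the classic 1,500-byte Ethernet payload, which is why DCE links must
# support "baby jumbo" frames of roughly 2.5 KB.
frame = fcoe_encapsulate(b"\x01" * 6, b"\x02" * 6, b"\x00" * 2136)
print(len(frame))  # 14-byte Ethernet header + 2,136-byte FC frame = 2150
```

The oversized payload is also why lossless operation matters: Fibre Channel assumes the transport never drops frames, so DCE must pause, rather than discard, traffic under congestion.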

The goal is to deliver Fibre Channel, a different network protocol, over an Ethernet fabric while maintaining the same or better performance than Fibre Channel-only networks have enjoyed. Based on the IEEE modifications currently being considered, our best guess puts the timeframe for the protocol's delivery around Q4 2008. The protocol must also be ratified by the FC standards body, the INCITS T11 Technical Committee.

Is FCoE necessary?
Now that we understand FCoE, the real question is whether we actually need it. Why not jump straight to iSCSI? Basically, FCoE provides an elegant way to migrate Fibre Channel onto Ethernet while protecting existing FC investments and skill sets. iSCSI utilizes TCP layered on top of IP and is therefore routable; FCoE runs directly on Ethernet and is not. InfiniBand also looked promising in this area, but has so far only found a home in high-performance computing environments and hasn't seen widespread adoption otherwise.

So why do we want to converge Fibre Channel and Ethernet into a single fabric? By combining FC and Ethernet, a single cable and a single card can replace current network interface and host bus adapter (HBA) cards. And because every new technology needs a three-letter acronym, the resulting interface card will be known as a Converged Network Adapter (CNA). These will feed switches or directors, and accommodate FC or FCoE and Ethernet over 10Gig links. In addition, the same switches and directors will handle both protocols while simultaneously allowing storage and networking domains to control their traffic independently. This preserves the separation of the storage and networking management domains while consolidating hardware.
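The consolidation math behind CNAs is easy to illustrate. Assuming, purely for illustration, one cable per card and that a single CNA can replace a NIC/HBA pair, a quick sketch shows how card and cable counts fall (the function name and server counts are hypothetical):

```python
def adapters_needed(servers: int, nics_per_server: int, hbas_per_server: int):
    """Compare card/cable counts before and after converging onto CNAs.

    Illustrative assumption: one cable per card, and one CNA replaces
    one NIC + one HBA, so the converged count is driven by whichever
    adapter type a server has more of (to preserve redundancy).
    """
    separate = servers * (nics_per_server + hbas_per_server)
    converged = servers * max(nics_per_server, hbas_per_server)
    return separate, converged

# 100 servers, each with two redundant NICs and two redundant HBAs:
before, after = adapters_needed(servers=100, nics_per_server=2, hbas_per_server=2)
print(before, after)  # 400 cards and cables shrink to 200
```

Halving the adapter and cable count is where the air-flow, power and cooling benefits mentioned earlier come from; the protocol itself saves nothing, the removed hardware does.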

Who's involved in this effort? From a storage perspective, we should look at the usual suspects: Brocade, Cisco Systems (Nuova Systems), EMC, Emulex, Finisar, Hewlett-Packard, IBM, NetApp, QLogic and Sun. Other players interested in participating in the FCoE market include Blade Network Technologies, Broadcom, Intel and Mellanox Technologies. The most obvious and most vocal proponents of this technology have been Cisco and its recently acquired incubation company, Nuova (the former Andiamo team).

Beneath the marketing gloss, FCoE looks very promising. It's certainly moving down the certification route faster than any other protocol in recent history, on track to go from inception to production in less than two years. This has been accomplished through universal backing by all participants, as well as by Nuova relinquishing its FCoE patents with the understanding that proprietary protocols don't create a big market. Does all this unprecedented cooperation guarantee success? Probably not, but it should help to relieve any interoperability concerns.

Most new technologies aren't implemented in production environments until they're fully tested and proven. For users, the key to FCoE will be understanding its impact on data center operations. Once test and development cycles have been completed, early deployments will most likely be in the form of server fan-in environments--especially high-density blade server environments--and then move into the core from there.


Issues to consider
But there are some FCoE issues that go beyond technology and certification. Most of them are related to culture and domain segmentation. While we do see some progressive IT shops working to integrate technology silos, convergence is still a challenge in many places. Here are some critical questions you'll want answered:

  • Will the storage team or the networking team own the infrastructure? If co-managed, who has the deciding vote?

  • Which department will pay for it? How will chargeback be calculated and future growth determined?

  • Will the teams be integrated? Typically, the networking team is responsible for IP switches, while the storage team is responsible for Fibre Channel.

  • Who will own day-to-day operational issues? If a decision needs to be made regarding whether more bandwidth is given to LAN or SAN traffic, who makes the call? Will companies have to create a single, integrated connectivity group?

Aside from potential cultural issues that must be faced, another question is whether FCoE is compelling enough to merit a forklift upgrade. At this time, I don't believe we'll see FCoE conversion projects; rather, FCoE will probably be implemented as part of a bigger IT project, like server virtualization or a storage array technology refresh. Because some new hardware (read: capital expense) will be required, justifying the investment may be difficult, especially considering the macro environment. However, if the math becomes compelling enough, say below $500 per port, adoption may accelerate.

So why should you be thinking about FCoE now? Products are becoming available--specifically, the Cisco Nexus 5000 and Intel adapters--and some vendors have claimed that FCoE will be in production environments this summer. Users currently testing FCoE environments on alpha/beta equipment seem quite satisfied with it. With some products available now and more due this fall, I say companies should consider creating a test and development platform for FCoE to become comfortable with the technology. Companies should definitely be planning to include FCoE in their 2009 budget if it isn't already in this year's.

The future of FCoE
FCoE appears to have all the technology bases covered. It enables companies to retain an existing FC infrastructure, keeps existing FC management tools in place, provides the same level of performance (with DCE) and lowers costs. The biggest variable in all this is the economics of FCoE solutions. Given the present economy and the pressure IT is under to reduce costs, compelling FCoE pricing may accelerate adoption faster than any marketing pitch or certification.

What will be the role of Fibre Channel going forward? Saying that FCoE will be the end of Fibre Channel makes for good headlines, but the reality is that FC is here to stay, at least for a while. The Fibre Channel Industry Association continues to drive toward 16Gig FC. Fibre Channel will co-exist with FCoE for a number of reasons. FCoE still needs to be tested and proven, and FC will continue to deliver services until then. Given that data centers are very slow to change--with plenty of people taking the "If it ain't broke, don't fix it" position--adoption could take a while. Lastly, there will always be protocol zealots resisting any new technology. It should be noted, however, that FCoE was designed to co-exist with FC and these technologies will work with existing management tools.

The green impact of FCoE is a compelling factor. The combination of lower power and cooling requirements (not from the protocol, but from the reduction in equipment) with reduced cabling will certainly be attractive as greater emphasis is placed on green initiatives.

You could also consider FCoE the beginning of the end for the cultural barriers that exist between technology domains in large-scale data centers. At the very least, it provides the storage team with an opportunity to get to know the networking team so they can work together to provide higher levels of service to the business and reduce costs. Ultimately, it could provide an opportunity to embrace the sort of change that delivers significant benefit to the business.
