This article can also be found in the Premium Editorial Download "Storage magazine: Flash storage technology decisions."
Some observers have predicted the demise of Fibre Channel (FC) technology for years, but no networking tech has risen above it for mission-critical applications.
At least two technologies have tried to overtake Fibre Channel in the past decade: Ethernet and InfiniBand. Both have failed, and FC use continues unabated. Why is that happening, and what's the future of FC?
Fibre Channel technology is here to stay for many years to come, but the reasons aren't always clear-cut. To understand them, one has to look back, even past the origin of FC.
Defined in 1988, Fibre Channel was designed specifically for high-reliability, high-performance SANs to allow storage to be shared by many hosts running a variety of applications while delivering predictable performance. Because FC was designed strictly for storage, data integrity was paramount: data had to be moved with 100% certainty from source to destination. FC also provided multi-tenancy while maintaining low latency. These were new concepts, different from those applied to networking technologies such as Ethernet and TCP/IP. As a result, FC flourished and grabbed the lion's share of high-end and midrange mission-critical application traffic. Network-attached storage also gained in popularity, but it was used for file sharing rather than high-performance applications.
It's important to understand that both direct-attached and FC SAN storage use SCSI as the underlying protocol, just with different transports. Because both technologies move SCSI traffic, no application changes are necessary.
In the early 2000s, iSCSI -- a SAN technology that uses standard Ethernet at the transport layer -- came on the scene to challenge FC. Proponents argued that Ethernet is all-pervasive, so using it meant one network fabric could service both storage and networking traffic. It would be cheaper and easier to manage than FC. Pundits predicted iSCSI would replace FC in just a few years.
But that hasn't happened. While iSCSI has done well enough to quickly become a multibillion dollar business, it did little to displace FC for mission-critical enterprise applications.
More recently, InfiniBand has emerged as a challenger to FC technology. So far, it's been used as an internal storage interconnect in enterprise storage environments, rather than as a replacement for FC.
To understand why FC continues to enjoy top status for mission-critical applications, you need to look at seven factors.
1. Reliable, lossless operation. To avoid unpredictable or inconsistent performance resulting from dropped packets, retransmissions and lost connections, storage networks need to be fully lossless.
2. Predictable and consistent performance. Storage networking technology must deliver predictable and consistent performance (bandwidth and IOPS) with low latency, even when serving a multitude of applications.
3. Scalability. Storage networks must scale easily without requiring disruptive restarts. This is especially important in virtual environments where dozens of applications may be vying for resources.
4. Quality of Service (QoS). Based on automated policies, important applications must get the resources they need before lower priority applications are served.
5. Multivendor compatibility and interoperability. Users must be able to mix and interoperate different vendors' equipment based on industry-defined standards.
6. Investment protection. Investments must be protected, at least over several generations of storage networking equipment to help realize maximum value.
7. Credible technology roadmap. The storage networking technology must have a consistent and credible roadmap so users can be confident that the technology will continue to grow and develop.
On the surface, Fibre Channel and Ethernet, with all the associated protocols that ride atop them (iSCSI, FC over Ethernet, NFS/CIFS and others), look like they meet these seven criteria equally well, but a peek under the covers reveals big differences.
FC's biggest advantage is its lossless capability. Designed from the outset to be lossless, it doesn't allow frames to be dropped under any circumstances. It does that with a buffer-to-buffer credit mechanism whereby the sending node can't transmit a frame until it has received "credit" from the receiving node, guaranteeing buffer space is waiting for it. Frames are received the first time around, without any need for retransmission.
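The credit mechanism can be illustrated with a minimal sketch. This is not FC's wire protocol, just the core idea: the receiver advertises how many buffer slots it has, the sender spends one credit per frame and stalls at zero, and each processed frame returns a credit (the role FC's R_RDY primitive plays). Class and variable names are illustrative.

```python
class Receiver:
    def __init__(self, buffer_slots):
        # Credits advertised to the sender at login time equal
        # the number of free buffer slots.
        self.credits_granted = buffer_slots
        self.buffer = []

    def accept(self, frame):
        # A frame only ever arrives when a credit was available,
        # so this buffer can never overflow: nothing is dropped.
        self.buffer.append(frame)

    def drain_one(self):
        # Processing a frame frees a slot; the receiver returns
        # one credit to the sender (R_RDY in Fibre Channel).
        self.buffer.pop(0)
        return 1

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = receiver.credits_granted

    def send(self, frame):
        if self.credits == 0:
            return False  # no credit: sender must wait, never drop
        self.credits -= 1
        self.receiver.accept(frame)
        return True

rx = Receiver(buffer_slots=2)
tx = Sender(rx)
assert tx.send("f1") and tx.send("f2")
assert not tx.send("f3")        # out of credit: sender stalls
tx.credits += rx.drain_one()    # slot freed, credit returned
assert tx.send("f3")            # now the frame goes through
```

The key property is that backpressure happens at the sender, before a frame is ever put on the wire, rather than at an overflowing receive buffer.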
Ethernet, designed as a best-effort network, simply puts packets on the wire and hopes they get there. If they don't, the receiving node requests retransmission of the lost packets. This plays havoc with application performance, especially for applications that are transaction-based or particularly latency-sensitive. Consistent application performance isn't possible when data can't be moved predictably.
Most FC products include a QoS mechanism to assign higher performance priorities to certain applications. Multipathing is another FC mechanism that provides multiple paths for important applications so that a failed path won't throttle performance.
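The policy-based QoS idea described above can be sketched as a priority scheduler: frames from applications the policy marks as important are dequeued ahead of lower-priority traffic. This is a generic illustration, not any vendor's implementation; the application names and priority values are hypothetical.

```python
import heapq

class QosScheduler:
    def __init__(self, policy):
        # policy maps an application name to a priority
        # (lower number = more urgent); this is the "automated
        # policy" that decides who is served first.
        self.policy = policy
        self.queue = []
        self.seq = 0  # FIFO tie-breaker within one priority level

    def enqueue(self, app, frame):
        prio = self.policy.get(app, 99)  # unknown apps: lowest priority
        heapq.heappush(self.queue, (prio, self.seq, app, frame))
        self.seq += 1

    def dequeue(self):
        prio, _, app, frame = heapq.heappop(self.queue)
        return app, frame

# Hypothetical policy: an OLTP database outranks backup traffic.
sched = QosScheduler({"oltp-db": 0, "backup": 5})
sched.enqueue("backup", "b1")
sched.enqueue("oltp-db", "d1")
sched.enqueue("backup", "b2")
assert sched.dequeue() == ("oltp-db", "d1")  # OLTP served first
```

Even though the backup frames arrived earlier, the database frame is serviced first; among equal-priority frames, arrival order is preserved.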
Originally, Ethernet lacked these mechanisms, but the Ethernet specification was enhanced to essentially gain parity with FC. Data Center Bridging (DCB) was developed primarily to allow FC frames to be encapsulated in Ethernet (FC over Ethernet, or FCoE), and thus allow Fibre Channel and networking traffic to flow on a common DCB Ethernet fabric. Given the enhancements, the assumption was that iSCSI was ready for primetime and would lure users away from Fibre Channel. A common 10 Gigabit Ethernet (GbE) fabric would be better than two separate fabrics and, at 10 Gbps, iSCSI would crush the 8 Gbps FC that was then prevalent.
But users simply didn't abandon FC for their most mission-critical applications. Part of the reason was that vendors hadn't implemented the FCoE specifications uniformly in their Ethernet devices, and FCoE switches from different vendors didn't interoperate very well.
IT response was unambiguous: "Hands off my mission-critical applications. When DCB grows up, we'll take another look; for now, we're sticking with FC." And now, many FC users are upgrading their infrastructure from 8 Gbps to 16 Gbps FC.
In a 2013 survey, we asked users to tell us their top one or two protocols for their most important applications: 62% said FC, followed by 29% for iSCSI, 29% for DAS and 22% for NFS. FCoE was at 10%. Even with DCB Ethernet-based products available for several years, FC still rules the roost.
There's no question that Ethernet functionality keeps getting closer to Fibre Channel. But users need more than "equivalency" to make them leave FC. There's simply too much riding on these mission-critical applications, especially now that we're headed toward 80%-plus virtualized workloads, and with big data and cloud putting more pressure on the underlying infrastructure. I expect FC to dominate the field in support of mission-critical applications for years to come, especially in large IT environments.
About the author:
Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.
This was first published in April 2014