The appeal of 10Gb/sec Ethernet over copper is evident, but can it displace Fibre Channel in the data center?
Brace yourself for another round of the perpetual storage networking competition over whose pipe is fastest. Just as Fibre Channel (FC) pulls significantly ahead in the performance race with 4Gb/sec speed, IP/Ethernet is about to leapfrog FC with 10Gb/sec Ethernet over copper wire. And no sooner will data centers have absorbed 4Gb/sec FC than 8Gb/sec will arrive.
Do data center storage managers care about faster pipes? Judging from the current demand for high-speed storage networking links, perhaps not. In a survey conducted last June by the Milford, MA-based Enterprise Strategy Group (ESG), 50% of respondents reported using 1Gb/sec FC. "At that time, 2Gb/sec FC was just beginning to be widely deployed and 4Gb/sec FC was about to start up," says Tony Asaro, senior analyst at ESG. Based on that survey, "the market isn't demanding 8Gb/sec or 10Gb/sec," he concludes. "It's not yet even demanding 4Gb/sec FC."
But analysts see two uses for the greater storage networking speeds: disk-to-disk backup and disk archiving, and storage port consolidation. "This is about backup to disk and using 10Gb/sec as an aggregation point," says ESG's Asaro.
At this point, it's unclear whether FC or IP/Ethernet will become the storage network of choice. FC is the well-entrenched, high-performance storage networking technology in the data center. In a presentation last spring on storage networking trends, James Opfer, a research vice president in Gartner's San Jose, CA, offices, predicted that "iSCSI at 10Gb/sec everywhere will challenge FC in the SAN." Other analysts aren't so sure.
For data center managers intending to pursue faster storage networking links, the choice of FC or IP/Ethernet involves more than just the raw wire speed. They have to consider their existing investment in storage networking technology, their skill sets and the economics of each, just as they do now. Other factors, such as the need for TCP/IP offload engines or coordination with the corporate networking group, may also weigh in.
Backup to disk and consolidation
These fast storage pipes will appeal to companies adopting disk-based backup and archiving strategies, and those looking to consolidate storage infrastructures. "The faster speeds are going to become important when companies back up to disk. That's where the big pipes will come into play first," says Asaro. Backup to disk and data archiving can saturate a 2Gb/sec pipe today and have the potential to saturate a 4Gb/sec pipe in the future. When that happens, 8Gb/sec FC and 10Gb/sec Ethernet will be waiting in the wings.
The greatest interest in high-speed storage connections currently comes from big companies concerned about backup/recovery and archiving, according to TheInfoPro Inc., a New York research firm that tracks technology adoption. In its latest survey, it found respondents were looking at fast storage networking links, especially 4Gb/sec FC, for virtual tape libraries, disk staging, disk-to-disk services, data classification and archiving technologies. More than 60% of respondents were using, piloting or planning for 4Gb/sec FC technology. By comparison, approximately 45% said they were using or planning to use 10Gb/sec Ethernet.
Port consolidation may also drive the transition to faster pipes. A 4Gb/sec FC port can support twice as many servers as a 2Gb/sec port, while 8Gb/sec can support twice as many servers as 4Gb/sec. The improvement is even greater with 10Gb/sec Ethernet. In a recent report, Marc Staimer, president of Beaverton, OR-based Dragon Slayer Consulting, says that with 10Gb/sec Ethernet over copper, the shared storage ratio increases by an order of magnitude without the current premium price tag and other drawbacks of optical 10Gb/sec.
Implementing 10Gb/sec Ethernet
Implementing 10Gb/sec Ethernet over copper may not be as simple as upgrading from 1Gb/sec or 2Gb/sec Fibre Channel to 4Gb/sec. "10Gb/sec Ethernet impacts more than just storage. It impacts the corporate network and LAN connectivity," says ESG's Asaro. Whatever the storage manager decides, it will most likely have to be done after consulting with the corporate networking group. Implementation options include the following:
- Put everything--storage and corporate networking--on a new or upgraded converged 10Gb/sec Ethernet backbone.
- Use an extra NIC in each host to create a separate 10Gb/sec Ethernet path just for storage.
- Run two distinct networks, one for storage and one for corporate networking.
Which option you choose will depend on traffic volume, network architecture and cost.
By 2007, the price of a 10Gb/sec Ethernet over copper port will be down to $1,000 or less, estimates Charlie Kraus, director of LSI Logic's host bus adapter (HBA) business unit. By pumping dozens, or even a hundred or more, servers through each port, the cost of each server connection becomes negligible.
With that level of port density, the economics of 10Gb/sec Ethernet over copper look more attractive, especially to large enterprises that have relied on FC for high-density port sharing. "We see 10Gb/sec Ethernet as an aggregation point. Companies will take a bunch of 1Gb/sec servers coming into the switch and connect them through a 10Gb/sec pipe to the storage," says Asaro.
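The aggregation economics Kraus and Asaro describe reduce to simple division; a minimal sketch (the $1,000 port price is Kraus' 2007 estimate, and the server counts are hypothetical):

```python
# Back-of-the-envelope cost per server connection when many 1Gb/sec
# servers are funneled through a single shared 10Gb/sec Ethernet port.
# The port price is the article's 2007 estimate; the server counts
# are illustrative assumptions.

def cost_per_connection(port_price_usd: float, servers_per_port: int) -> float:
    """Spread the shared port price across the servers sharing it."""
    return port_price_usd / servers_per_port

for servers in (10, 50, 100):
    print(f"{servers} servers on one $1,000 port: "
          f"${cost_per_connection(1000, servers):.2f} per connection")
```

At 100 servers sharing one port, the per-connection cost falls to $10, which is why the article treats it as negligible.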
Storage networking performance today
Before the advent of 8Gb/sec FC and 10Gb/sec Ethernet over copper, the only option for fast storage networking was 2Gb/sec FC; that changed in mid-2005, when 4Gb/sec FC became commercially available. On the IP/Ethernet side, 10Gb/sec over optical was available, but too costly (more than $4,000 per connection) to be practical except in unusual situations. Organizations that wanted IP-based storage networking opted for iSCSI over 1Gb/sec Ethernet links.
The market sorted itself out on the basis of price and performance. Enterprise data centers opted for FC to get higher performance for their SANs and were willing to pay top dollar in terms of the cost of the components and skilled FC technicians. Small and midsized organizations went for the lower performance of 1Gb/sec Ethernet for iSCSI SANs in large part due to its lower cost and the ability to leverage existing IP networking skills.
FC holds the advantage in terms of performance, but iSCSI-IP commands a clear advantage from a cost standpoint. At 1Gb/sec, Ethernet requires only a simple NIC ($50) in the server vs. an FC HBA ($800 to $1,200). The cost of Ethernet switch ports is a fraction of the cost of FC switch ports, and at 1Gb/sec, most servers don't even require a TCP/IP offload engine (TOE) to reduce the overhead of processing the IP stack.
Even without backup to disk or port consolidation projects in the works, interest in 4Gb/sec FC is building, says TheInfoPro. "It's mainly helpful for ISLs [inter-switch links], not for app performance. Still, companies say they'll take it even though they don't need the performance as long as they don't have to pay a premium," says Ken Male, founder and chief executive officer at TheInfoPro.
No worry there. Vendors don't expect to charge a premium for 4Gb/sec FC. "By the end of this year, everything will be 4Gb/sec FC at the same price as 2Gb/sec," says Greg Scherer, chief technology officer at Emulex Corp., an FC components provider in Costa Mesa, CA.
As with the shift from 1Gb/sec to 2Gb/sec FC, 4Gb/sec FC is backward-compatible, so it can ratchet down when it senses the slower link. When the move to 8Gb/sec FC occurs, the same should hold in terms of price and compatibility. Similarly, Ethernet has moved from 10Mb/sec to 100Mb/sec to 1Gb/sec without compatibility issues. The industry expects no compatibility problems moving to 10Gb/sec Ethernet over copper.
FC and IP roadmaps
FC dominates storage networking, especially in the data center, and will likely continue to do so through 2008 when 8Gb/sec FC is expected to be ready for deployment, according to the Fibre Channel Industry Association (FCIA) roadmap. By then, 10Gb/sec Ethernet over copper will be generally available, but the performance difference alone may not be enough to convince many FC shops to switch to Ethernet.
"The difference in speed between 8Gb/sec FC and 10Gb/sec Ethernet isn't that important," says Arun Taneja, founder, president and consulting analyst at the Taneja Group, Hopkinton, MA. "FC has significantly lower latency compared to IP, so it can match the performance even if the speed is a bit slower." Few companies are expected to abandon FC SANs in favor of 10Gb/sec Ethernet based on such a relatively small speed advantage.
"FC is the incumbent technology," says Taneja. "Even if a new technology is logically superior on paper and even if the pricing favors the new technology, companies that have already stabilized on FC aren't going to switch."
In the case of Sun Health Corp., an expanding group of community hospitals headquartered in Sun City, AZ, faster performance for its 17TB picture archiving and communications system (PACS) over 10Gb/sec Ethernet is enough to tempt Micha Ronen to switch from FC. "If we could get 10Gb/sec Ethernet over copper wire and iSCSI, we could do quite a bit," says Ronen, Sun Health's PACS administrator and systems architect. This could boost performance fivefold over Sun Health's current 2Gb/sec FC links. With the need to archive a rapidly growing set of medical images, the extra bandwidth certainly wouldn't go to waste.
Cost and difficulty
Most managers will choose FC or IP based on factors other than speed, such as cost and difficulty. Until now, IP was the undisputed low-cost champion, but the FC industry insists it's addressing the cost issue by cutting prices on some FC components, especially at the low end. "I've seen 2Gb/sec FC adapters priced at $345 retail," says Emulex's Scherer. That may be an improvement, but it remains higher than IP's $50 NIC.
Before jumping on the 10Gb/sec Ethernet bandwagon because of price, add in the cost of a TOE to relieve the server of the overhead of processing the TCP/IP stack. At 1Gb/sec, a TOE isn't a factor because today's servers can absorb the overhead. "The rule of thumb is that it takes 1Hz of CPU to drive 1 bit per second," says Herman Chao, senior manager of product marketing in QLogic Corp.'s advanced technology and planning group. That translates into 1GHz of CPU to process a 1Gb/sec storage stream.
At 10Gb/sec, however, you'd need 10GHz of processing power. "For 10Gb/sec Ethernet to be viable for storage, you need a TOE," concludes Chao. This isn't an IP deal breaker, and Chao expects the cost of TOEs to come down as demand ramps up; still, the cost of the TOE shouldn't be overlooked.
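Chao's rule of thumb translates directly into a sizing estimate; a quick sketch (the 1Hz-per-bit figure is his rule of thumb, and the link speeds are the article's):

```python
# QLogic's rule of thumb: roughly 1 Hz of CPU per 1 bit/sec of TCP/IP
# traffic processed in software, i.e. without a TOE.

def cpu_ghz_for_link(link_gbps: float) -> float:
    """GHz of CPU consumed by stack processing, per the 1 Hz-per-bit rule."""
    return link_gbps  # 1 Gb/sec of traffic ~ 1 GHz of CPU

for gbps in (1, 4, 10):
    print(f"{gbps}Gb/sec link: ~{cpu_ghz_for_link(gbps):.0f}GHz of CPU without a TOE")
```

At 1Gb/sec, a server can absorb the roughly 1GHz of overhead; at 10Gb/sec, the roughly 10GHz requirement exceeds any single processor, which is why Chao says a TOE is needed.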
In terms of ease of use, IP and iSCSI hold the advantage over FC. Even a novice network admin knows IP, and iSCSI vendors have made great strides adding GUI interfaces to their products. The FC industry is working on the difficulty issue, says Tom Hammond-Doel, treasurer and membership chairman of the FCIA and director of technical marketing at Emulex Corp., but he couldn't cite specifics.
End-to-end storage performance involves a number of components, including the communications link. Fast servers--and now fast links--will still be slowed by the disk array.
"Disk drives are a fundamental bottleneck," says Taneja. "Their ability to transfer data is already limited, and it will get worse as the pipes get bigger." Users can move to faster disk drives, but beyond 15,000 rpm, head vibration becomes a problem. Solving the disk bottleneck requires putting bigger buffers in front of the drives and using more drives in parallel.
Whether or not you're ready to take advantage of them, faster network links have arrived with even faster ones on the way. One technology isn't about to replace the other, at least in the foreseeable future. Companies will just have more options. Sun Health's Ronen, for instance, already knows what he'll pick.