Feature

Is there a need for more speed?


Navy wants 40Gb/s--soon
Don't try to tell Hank Dardy that 4Gb/s and 10Gb/s Fibre Channel (FC) constitute overkill. The U.S. Navy's chief scientist for advanced computing says current-generation 2Gb/s FC isn't fast enough to support the advanced visualization applications he's building today. He needs 10Gb/s bandwidth now, and he can see a need for 40Gb/s in the not-too-distant future. He believes the best bet for delivering the low-cost interconnect bandwidth he needs may turn out to be InfiniBand, not FC.

Dardy is heading up a Navy effort, known internally as .Geo, that involves prototyping and building a large visualization infrastructure and applications for the Department of Defense and scientific researchers. Rather than replicating the large image files that feed the visualization application, Dardy's team is building a system that can access them remotely via a distributed file system, then stream them directly over a high-bandwidth WAN.

"We don't have a copy of everything here, but we have active agents out on the network telling us that something of interest is available, and it can be immediately put to use," says Dardy.

Dardy's problem is that the high-definition TV-quality motion imagery files his group deals with run to several terabytes each, and they easily overwhelm 2Gb/s FC SAN devices. On top of that, users often want multiple visual streams flowing at the same time. So Dardy is pushing vendors for higher-bandwidth alternatives. Without higher-bandwidth SANs, he says, .Geo simply won't fly.
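A rough back-of-the-envelope sketch shows why. The calculation below assumes a 2TB file as an illustrative stand-in for "several terabytes," and it ignores protocol overhead, disk throughput and contention; it simply works out how long one such file would take to move over a single FC link at the rates discussed here.

    # Back-of-the-envelope transfer times for one large visualization file.
    # Assumes a 2 TB file (illustrative; the article says "several terabytes")
    # and ignores FC protocol overhead, disk throughput and contention.
    FILE_SIZE_BITS = 2e12 * 8          # 2 TB expressed in bits

    for gbps in (2, 10, 40):           # line rates discussed in the article
        seconds = FILE_SIZE_BITS / (gbps * 1e9)
        print(f"{gbps:>2} Gb/s link: {seconds / 3600:.1f} hours per 2 TB file")

    # 2 Gb/s -> ~2.2 hours; 10 Gb/s -> ~0.4 hours; 40 Gb/s -> ~0.1 hours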

"A doctor, scientist or military officer isn't going to wait on the data," says Dardy. "They'll say, 'The application isn't real-time enough.' We're trying to change the paradigm, which puts a lot of emphasis on data and storage."

Dardy's team has gotten around FC bandwidth limitations by striping data across multiple Silicon Graphics 2Gb/s SANs and blasting through parallel switches and ports to produce cumulative bandwidth of up to 10Gb/s. The problem, says Dardy, is that it's expensive and complex.
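The arithmetic behind that workaround is also what makes it expensive: every additional 2Gb/s of aggregate bandwidth costs another parallel link, another pair of ports and another stripe member. A minimal sketch follows; the five-link figure is an inference from the stated 2Gb/s per link and roughly 10Gb/s aggregate, not a count given by the Navy team.

    # How many parallel 2 Gb/s FC links are needed to reach a target
    # aggregate rate. Ignores striping and switching overhead.
    import math

    def links_needed(target_gbps, per_link_gbps=2.0):
        return math.ceil(target_gbps / per_link_gbps)

    print(links_needed(10))   # ~5 parallel 2 Gb/s links (and port pairs) for 10 Gb/s
    print(links_needed(40))   # ~20 links for the 40 Gb/s Dardy foresees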

"With 10Gb, we could replace all those boxes with just one and extend the FC out," says Dardy, whose team last year demonstrated a 10Gb/s fiber link running between data centers in Washington, D.C. and Baltimore.

Dardy acknowledges that 10Gb/s FC ports are expected to be several times the cost of 2Gb/s ports. That's why he's pushing for InfiniBand for interconnecting SANs.

"Already, InfiniBand is cheaper than most people think; 4x InfiniBand [about 10Gb/s] ports today are about $1,000, which is competitive with 2Gb FC," Dardy says.

Dardy predicts that InfiniBand will be the best bet for offering affordable SAN fabrics operating well above 10Gb/s. "What we're trying to say to vendors is that 10Gb is just the tip of the iceberg," says Dardy. "We need to go further. I see us needing 40Gb in the 2005 to 2007 time frame."

Henry Ford reportedly told buyers they could purchase his Model T automobiles in any color, as long as it was black. For Fibre Channel (FC) storage area network (SAN) buyers, the situation has been almost as simple: They can buy SAN components such as switches, host bus adapters (HBAs) and storage arrays that operate at any speed, as long as it's 1Gb/s or 2Gb/s.

SAN buyers, however, will soon be faced with more choices when it comes to bandwidth and performance options. FC switch and HBA vendors are expected to begin shipping 10Gb/s SAN gear as early as the first quarter of next year. At the same time, the Fibre Channel Industry Association (FCIA) has backed the idea of a 4Gb/s standard for traffic over the SAN network.

The debut of 4Gb/s HBAs, switches and controllers is expected late next year or early in 2005. Recently, momentum has been building at the FCIA and among some vendors for an 8Gb/s standard. FCIA board member Art Edmonds says an association vote on an 8Gb/s SAN standard will "more than likely" take place.

That raises the question: Will enterprises welcome all the new higher-performance SAN choices? And are storage managers impatient for higher-bandwidth SANs?

In a word, no. Many enterprises have only recently migrated or are in the process of migrating from 1Gb/s to 2Gb/s SANs. Most say their existing applications, operating systems and server bus architectures are far from consuming all of their SAN bandwidth. While rising storage volumes and the arrival of bandwidth-intensive data types will generate a need for higher-bandwidth SANs, that day is far off.

As a result, most analysts and vendors say that while 4Gb/s SAN gear may find broad application, 10Gb/s ports will be used mainly to speed up inter-switch links (ISLs) between SANs, at least initially.

The need isn't there yet
"We don't see any bottlenecks today that would require us to upgrade to 4Gb," says Alex Lopez, SAN architect at the University of California Davis Medical Center. Lopez, who runs two SANs containing a total of 24TB of data, migrated to 2Gb/s ports nine months ago as part of a larger SAN and server upgrade. The servers, which use PCI, don't come close to using up all of the SAN's bandwidth. "So, unless we add a supercomputer or multiple video applications, we won't have a need for 4-gig or 10-gig SANs for a long time," says Lopez.

Mark Deck, director of infrastructure technology at National Medical Health Card Systems Inc., feels the same way. The prescription drug program management company is currently upgrading its Hewlett-Packard Co. (HP) N-class servers to faster K-class machines and its HP XP256 SAN from 1Gb/s to 2Gb/s to keep up with online transactions. But he says that even the new servers won't fully take advantage of the increased SAN bandwidth.

Even vendors backing higher-bandwidth SANs say most enterprises aren't clamoring for them yet. While HBA vendor Emulex, for example, expects sales of its highest-performance dual channel 2Gb/s HBAs to double this year, "it would be overstating it to say that there's a strong pull for 4-gig SANs," says Mike Smith, the company's executive VP of worldwide marketing.

Many SAN switch vendors agree. "At this point, 4Gb/s appears to be a technology in search of a solution," says Jim Miller, senior product marketing manager at McData Corp., which plans to support the 10Gb/s standard for ISLs in the first half of 2004, followed in about nine months by 4Gb/s if there's demand. "But," Miller adds, "we see nothing on the horizon, even tape SANs, that will be able to drive that [4Gb/s]."

Gearing up for 4Gb/s
Why the vendor push for higher-bandwidth SAN products--4Gb/s, specifically--even in the face of apparent enterprise-user indifference? First, intense competitive pressures are prompting vendors to attempt to get a leg up on rivals by pushing for 4Gb/s SANs. QLogic Corp., for example, aggressively pushed for the FCIA vote in favor of 4Gb/s FC and expects to have HBAs for evaluation early next year, well ahead of rivals. Although QLogic marketing VP Frank Berry acknowledges there's little demand for 4Gb/s FC SANs, he says that there eventually will be.

QLogic's strong pro-4Gb/s stand has forced competitors to consider supporting the standard or risk the appearance of falling behind. "A year ago, I'd have said Fibre would move directly from 2Gb to 10Gb, but today that's not so clear," says Emulex's Smith. "I've got to be ready to do both, then wait and see which camp is going to blink."

"With 1GB iSCSI becoming widely available in 2004, the low end of the market will move in that direction, so FC vendors need a growth path they can offer higher-end customers," says Phil Brotherton, VP of marketing at HBA maker JNI Corp., which expects to ship a 4Gb/s device in the second half of next year. "Competition from other technologies is definitely a factor driving the proliferation of higher-bandwidth FC options."

The third reason for the emergence of the 4Gb/s Fibre standard: It's becoming clear to some vendors that 10Gb/s FC will be expensive and difficult to deploy. Rather than fight an uphill battle by attempting to sell a 2Gb/s-to-10Gb/s SAN migration, many vendors are proposing an easier interim alternative with 4Gb/s.

The high cost of 10Gb/s
"Ten-gigabit is going to be a disruptive beast and one that is going to be incredibly costly," says storage analyst Arun Taneja of the Taneja Group, Hopkinton, MA. "In fact, based on the cost differential we see between Gigabit Ethernet and 10GbE, 10Gb Fibre could end up being at least five times more costly than 2Gb Fibre."

And that estimate could be optimistic. Ed Chapman, senior director of product management at Cisco, believes that initially, the cumulative cost difference between 2Gb/s and 10Gb/s SAN infrastructure--HBAs, switches, directors, drives and cabling--will be closer to 10-to-one. The higher cost of optical devices needed to drive 10Gb/s FC traffic is a major factor. Based on today's prices, optics for a 10Gb/s FC SAN would cost about $2,000, compared with the current price of $80 for the optics used in 1Gb/s and 2Gb/s FC SANs, according to Peter Wong, manager of strategic marketing at networked storage component manufacturer PMC-Sierra, in Santa Clara, CA.
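Those per-port optics numbers compound quickly at fabric scale. The sketch below applies the quoted $2,000 vs. $80 optics prices to a hypothetical 64-port fabric; the port count is an illustrative assumption, and optics are only one line item in Chapman's broader 10-to-one estimate.

    # Optics cost alone for a hypothetical 64-port fabric, using the
    # per-port prices quoted in the article ($2,000 for 10 Gb/s optics
    # vs. $80 for 1/2 Gb/s optics). The 64-port fabric size is an assumption.
    PORTS = 64

    optics_2g = PORTS * 80
    optics_10g = PORTS * 2000

    print(f"1/2 Gb/s optics: ${optics_2g:,}")    # $5,120
    print(f"10 Gb/s optics:  ${optics_10g:,}")   # $128,000 (25x more)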

"Originally, the thinking was that 10Gb Ethernet would drive up volumes and quickly drive down the cost of 10Gb fiber optics and other components, but that hasn't happened," says Wong. "The volumes for 10GbE haven't been there."

As a result, says Wong, PMC-Sierra expects 10Gb/s prices to fall slowly and in the near term, he sees a small market for 10Gb/s FC products. The company has scaled back the level of its investments in 10Gb/s FC components, shifting its emphasis to 4Gb/s products.

But it's not just the cost of 10Gb/s Fibre ports that will pose a barrier to the technology's wide deployment within SANs. Incompatibilities between 10Gb/s and existing 1Gb/s and 2Gb/s SAN infrastructure will also hinder migration, experts say. For one thing, most switch and HBA vendors say it won't be practical to work 10Gb/s Fibre elements into the existing auto-sensing scheme that today lets switches detect whether they are connected to 1Gb/s or 2Gb/s HBAs and disk arrays, and then adjust accordingly. As a result, it may not be easy to mix 10Gb/s-capable gear into existing SANs.
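A simplified way to picture the auto-sensing problem: each end of a link advertises the rates it supports and the link settles on the fastest rate both share. The toy model below is an illustration of that idea, not the actual FC speed-negotiation protocol; it shows 1Gb/s and 2Gb/s devices finding common ground while a 10Gb/s-only port has nothing to fall back to.

    # Toy model of link-speed negotiation (not the real FC protocol):
    # the link comes up at the fastest rate both ends support, or fails.
    def negotiate(port_a, port_b):
        common = port_a & port_b
        return max(common) if common else None

    legacy_hba = {1, 2}          # Gb/s rates a 2 Gb/s-era HBA can run
    newer_switch = {1, 2, 4}     # a 4 Gb/s port can fall back to 1 or 2
    ten_gig_port = {10}          # 10 Gb/s signaling offers no such fallback

    print(negotiate(legacy_hba, newer_switch))  # 2    -> link comes up at 2 Gb/s
    print(negotiate(legacy_hba, ten_gig_port))  # None -> no common rate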

Also, it's expected that 10Gb/s SAN equipment won't be able to use the multimode fiber cabling currently tying together most SANs.

"All of that would have to be replaced with more expensive nine-micron single-mode fiber cabling," says Scott Drummond, program director for storage networking at IBM. "All of those things mean a migration to 10Gb would take more planning and be more expensive than what enterprises saw moving from one to 2Gb."

As a result of the high cost and the compatibility issues surrounding 10Gb/s Fibre, many vendors and analysts now believe it will not be deployed soon as a mainstream replacement for 2Gb/s Fibre linking servers, switches and storage arrays. Instead, say vendors and analysts, 10Gb/s Fibre will make the most sense as a backbone technology, speeding up ISLs for applications such as SAN consolidation and remote backup and data movement. Using 10Gb/s for ISLs could also cut complexity and cost, compared with the current practice of trunking together multiple 2Gb/s links, experts say.
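The ISL case is largely a port-count case. Here is a quick sketch of what it takes to provision roughly 10Gb/s between two switches by trunking 2Gb/s links versus running a single 10Gb/s link; the five-link count simply follows from 2Gb/s per link, and cabling and management overheads are not figures from the article.

    # Ports and cables needed for a ~10 Gb/s inter-switch link (ISL):
    # trunked 2 Gb/s links today vs. a single 10 Gb/s link.
    import math

    TARGET_GBPS = 10

    trunk_links = math.ceil(TARGET_GBPS / 2)   # five 2 Gb/s links
    print(f"Trunked 2 Gb/s ISL: {trunk_links} links, "
          f"{trunk_links * 2} switch ports, {trunk_links} cables")
    print("Single 10 Gb/s ISL: 1 link, 2 switch ports, 1 cable")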

"Speeding up [inter-switch links] is where you'll see the real use of 10Gb Fibre in the beginning," says IBM's Drummond. "We're seeing larger and larger SANs, some with 1,000 ports. That will drive the need for higher-speed ISLs in the near future."

Today, enterprises can use SAN extension routers from vendors such as Computer Network Technology (CNT), LightSand and Nortel Networks to merge multiple SAN fabrics and connect them over wide-area Sonet networks. Using technologies such as dense wavelength division multiplexing has allowed vendors to squeeze more bandwidth out of WAN connections such as Sonet. Feeding those fast WAN connections with 10Gb/s Fibre rather than 2Gb/s would mean more efficient use of expensive Sonet bandwidth. "For ISLs, 10Gb means simplicity," says Drummond. "It means fewer boxes and less money."
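To see why a 10Gb/s feed makes better use of that WAN bandwidth, consider filling a single OC-192 Sonet circuit, which carries roughly 9.95Gb/s; the OC-192 example is ours for illustration, as the article says only "Sonet."

    # Filling one OC-192 Sonet circuit (~9.95 Gb/s) with FC traffic.
    # OC-192 is an illustrative choice; the article just says "Sonet".
    import math

    OC192_GBPS = 9.95

    for fc_gbps in (2, 10):
        feeds = math.ceil(OC192_GBPS / fc_gbps)
        print(f"{fc_gbps} Gb/s FC feeds needed: {feeds}")

    # Five 2 Gb/s feeds (plus the gear to aggregate them) vs. one 10 Gb/s feed.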

Still, vendors acknowledge that only their largest customers are experiencing bottlenecks while using 2Gb/s Fibre connections to feed ISLs. (See "Navy wants 40Gb/s--soon," above.)

This was first published in December 2003
