Feature

Storage networking infrastructure trends must figure in upgrade plans

Enterprise IT shops looking to consolidate data centers and virtualize their servers will need to weigh the latest storage networking technology developments as they plot any infrastructure upgrade.

Here’s a snapshot of the current state of data storage networking infrastructure for Fibre Channel (FC), Ethernet, Fibre Channel over Ethernet (FCoE) and InfiniBand in data centers.

Fibre Channel

Long-term roadmaps mention 64 Gbps FC, standards for 32 Gbps FC are in the works, and 16 Gbps products started shipping last year. But analysts say IT organizations are likely to stick mostly with 8 Gbps FC for at least two years when they purchase new switches and host bus adapters (HBAs).

Redwood City, Calif.-based Dell’Oro Group Inc. noted that 8 Gbps accounted for 89% of Fibre Channel switch port shipments in 2011. The research firm projects 8 Gbps will represent 77% of FC switch shipments this year. In 2013, 8 Gbps and 16 Gbps will split the market at 50% apiece as the price premium for 16 Gbps technology starts to dissolve, according to Dell’Oro.

But that doesn’t mean IT shops are rushing to retire their old switches and adapters. All manner of FC technology -- from 1 Gbps to 16 Gbps -- is currently in use, and 4 Gbps remains the most prevalent line speed, according to recent TechTarget Storage Purchasing Intention surveys.

Customers tend to buy the highest speed available when they purchase new infrastructure, as their aging equipment reaches end of life and the cost of the latest Fibre Channel iteration approaches the price of the prior generation. Many skip an upgrade cycle entirely; Western University (formerly the University of Western Ontario) in London, Ontario, for example, is doing so with its move from 2 Gbps to 8 Gbps FC.

Although 16 Gbps FC is valuable with heavily virtualized environments, high-transaction databases, blade servers and solid-state storage, many IT organizations are hard-pressed to use the bandwidth they have today.

“It’s not like you find data centers that have used up all their 8-gig bandwidth and are completely distressed because of that and need to go to 16 gig. That’s usually not the case,” said Gene Ruth, a research director of storage technology at Stamford, Conn.-based Gartner Inc. “It’s bandwidth looking for an application.”

Dean Flanders, head of informatics at the Friedrich Miescher Institute (FMI) for Biomedical Research in Basel, Switzerland, said the institute uses approximately 25% of its 8 Gbps FC capacity, and its needs tend to be “bursty.” FMI runs about 60 virtual machines (VMs) per server over 8 Gbps, and it has two 8 Gbps links in an inter-switch link (ISL) trunk, giving it 16 Gbps for each fabric between two data centers.

“The setup is overkill at the moment,” Flanders wrote via email.
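
Flanders’ numbers are easy to sanity-check with back-of-the-envelope arithmetic. The short sketch below uses the figures from the setup just described (two 8 Gbps links per trunk, roughly 25% utilization) to compute the trunk’s capacity and typical load; it’s an illustration, not a measurement tool.

    /* Back-of-the-envelope check of ISL trunk headroom, using the figures
       described above; an illustration, not a measurement tool. */
    #include <stdio.h>

    int main(void) {
        const double link_gbps = 8.0;    /* per-link FC line rate */
        const int links = 2;             /* links in the ISL trunk */
        const double utilization = 0.25; /* Flanders' rough estimate */

        double capacity = link_gbps * links;    /* 16 Gbps per fabric */
        double typical = capacity * utilization;

        printf("Trunk capacity: %.0f Gbps\n", capacity);
        printf("Typical load:   %.0f Gbps (%.0f%% headroom)\n",
               typical, (1.0 - utilization) * 100.0);
        return 0;
    }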

Kemper Porter, systems manager in the Data Services Division of Mississippi’s Department of Information Technology Services, said the group uses two 8 Gbps Fibre Channel connections per server and a multipath driver, so it gets 16 Gbps of aggregate bandwidth. But he’s seen peaks of only 3 Gbps on a few servers. The department uses FC primarily with its virtual servers.
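
Path aggregation of this kind typically lives in the host’s multipath driver rather than in the fabric. On Linux, for instance, device-mapper-multipath can treat both 8 Gbps paths as one active-active group; the minimal sketch below is illustrative only, not the department’s actual configuration.

    # /etc/multipath.conf -- minimal active-active sketch (illustrative only)
    defaults {
        user_friendly_names  yes
        path_grouping_policy multibus        # one path group: use all paths at once
        path_selector        "round-robin 0" # alternate I/O across both FC links
    }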

Scott Shimomura, group manager of product marketing for data center SANs at Brocade Communications Systems Inc., claimed the financial, health care and entertainment industries see a need for more bandwidth. Brocade began rolling out 16 Gbps products last May and reported that 27% of its FC director sales in the fourth quarter of 2011 were 16-gig.

Cisco Systems Inc. declined to specify when it plans to add support for 16 Gbps. Emulex Corp. added support last year for 16 Gbps HBAs, and QLogic Corp. said OEM design qualifications are in progress.

Ethernet

Cutting-edge 40 Gigabit Ethernet (GbE) and 100 GbE products may be trickling into the storage marketplace, but most IT organizations are working on the major upgrade to 10 GbE, with new cables, adapters and switches, and potentially even a redesigned storage networking infrastructure.

“The transition from 1 Gigabit to 10 Gigabit is happening,” said Arun Taneja, founder and consulting analyst at Taneja Group in Hopkinton, Mass. “But like any of these migrations, it's happening at a slower pace than all the early voices you heard eight, nine years ago.”

Research from Milford, Mass.-based Enterprise Strategy Group (ESG) shows the top reason for moving from Gigabit Ethernet to 10 GbE is server virtualization. The need for extra bandwidth could escalate as IT shops move to servers based on Intel’s new Xeon E5-2600 processors, which can run more VMs per physical server, said Bob Laliberte, a senior analyst at ESG.

“A lot of organizations in the next few years are planning on having more than 25 VMs per physical server,” he said. “As that happens, higher throughput is going to be required to accommodate all the traffic from those virtual machines.”
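
Rough arithmetic shows why that VM density outgrows Gigabit Ethernet. In the sketch below, the per-VM traffic figure is an assumption for illustration, not an ESG number.

    /* Rough sizing of host network demand as VM density grows.
       The per-VM traffic figure is an illustrative assumption. */
    #include <stdio.h>

    int main(void) {
        const int vms = 25;               /* VMs per physical server */
        const double mbps_per_vm = 200.0; /* assumed average per-VM load */

        double demand_gbps = vms * mbps_per_vm / 1000.0;
        printf("Aggregate demand: %.1f Gbps\n", demand_gbps);
        printf("That saturates %.0f bonded 1 GbE links, "
               "but fits on a single 10 GbE port\n", demand_gbps);
        return 0;
    }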

Stuart Miniman, principal research contributor at Wikibon, a community-focused research and analyst firm based in Marlborough, Mass., expects adoption to increase with the embedding of 10 GbE technology in next-generation rack and tower servers. For the past couple of years, 10 GbE was mostly found in blade servers, he said.

IT shops have several options for 10 GbE adapters, including network interface cards (NICs), converged network adapters (CNAs, which can function as TCP offload engines), local-area network (LAN) on motherboard (LOM), CNA on motherboard and CNAs embedded as mezzanine cards in blade servers. Cisco, the leading 10 GbE switch vendor, announced support for 40 GbE and 100 GbE in January.

Fibre Channel over Ethernet

With standards work complete, the limitations of Fibre Channel over Ethernet have largely dissolved. IT shops can now deploy lossless 10 GbE and FCoE technology end to end, from servers to storage, with multi-hop, switch-to-switch FCoE even in the network core.

But, in practice, most early adopters still use FCoE only between their servers and top-of-rack switches, according to industry analysts. The top-of-rack switches split the IP/LAN traffic and the FC/storage-area network (SAN) traffic, with the storage traffic continuing via Fibre Channel to core switches and storage arrays.
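
On a Cisco Nexus top-of-rack switch, for example, that split is typically expressed by mapping an FCoE VLAN to a VSAN and binding a virtual Fibre Channel (vfc) interface to the server-facing Ethernet port. The sketch below is illustrative only; the VLAN, VSAN and interface numbers are placeholders, not a recommended design.

    ! Illustrative Nexus top-of-rack FCoE configuration (all numbers are placeholders)
    feature fcoe
    vlan 200
      fcoe vsan 2                      ! carry VSAN 2 traffic on FCoE VLAN 200
    interface vfc10
      bind interface ethernet 1/10     ! server-facing converged Ethernet port
      no shutdown
    vsan database
      vsan 2 interface vfc10           ! place the virtual FC interface in VSAN 2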

Dell’Oro predicts that FCoE switch port shipments will shoot up 139% this year, as revenue climbs 110%, from $437.3 million to $919.6 million. The market research firm also foresees 74% growth in revenue and 78% growth in switch port shipments for 2013. In both years, fixed-configuration switches (including top-of-rack and embedded blade server switches) will account for the majority, as opposed to modular, chassis-based director-level switches.

“As 10-gig adoption ramps up, FCoE is going to draft on that wave,” Wikibon's Miniman said. “FCoE isn’t going to crush everything out there, but it’s gaining steady adoption.”

The main switch vendors have supported FCoE in fabric switches since 2009, but last year Cisco made available multi-hop, switch-to-switch capabilities with the addition of FCoE support for its MDS 9500 Series Multilayer Directors (8-port module) and Nexus 7000 (32-port module). Last year, Brocade made available multi-hop FCoE with its fixed-port VDX switches, but it has yet to support multi-hop FCoE in modular director-class switches.

CNAs ship in a variety of form factors, including adapter cards, mezzanine cards for blade servers and converged LAN on motherboard (cLOM).

Gartner’s Ruth said the market may continue to lag until more full-function FCoE storage targets emerge beyond systems from vendors such as EMC Corp. and NetApp Inc.

FMI's Flanders said FCoE looked interesting, but after extensive evaluation he saw no advantage to convergence given the institute’s existing FC SAN. FMI has also used iSCSI for more than five years, but Flanders plans to phase it out because its performance and reliability can’t match Fibre Channel’s.

“It’s not getting as much traction as people were hoping for,” Taneja Group's Taneja said of FCoE. “Maybe it’s just the conservative nature of the storage IT people. FCoE will probably take hold for brand-new projects, which are more tied to the unified fabric effort.”

ESG's Laliberte will be looking to see if VMware’s certification of FCoE spurs greater adoption, a milestone he thought proved influential in the uptake of network-attached storage (NAS) and iSCSI. VMware supported only FC storage until 2010, he said.

InfiniBand

Low-latency InfiniBand still finds its greatest use as a server-to-server interconnect, especially in high-performance computing (HPC) and financial trading applications. But the technology is starting to pick up momentum in back-end storage.

Brian Sparks, senior director of marketing at Mellanox Technologies Ltd., claimed that remote direct memory access (RDMA) -- which allows server-to-server data movement without CPU involvement -- is gaining acceptance in storage. According to Sparks, data storage vendors that use InfiniBand include DataDirect Networks Inc., EMC (with Isilon), IBM (with XIV), NetApp (with LSI/Engenio) and Oracle Corp. (with ZFS, Exadata, Exalogic and Exalytics).
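
For a sense of how thin the software layer is, the sketch below uses the standard libibverbs API (the verbs interface that RDMA-capable storage stacks build on) simply to enumerate the RDMA devices on a host; a full transfer additionally involves registering memory and posting work requests, which is beyond a short snippet.

    /* Minimal sketch: enumerate RDMA-capable devices via libibverbs.
       Build with: gcc ibv_list.c -libverbs */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }
        for (int i = 0; i < num; i++)
            printf("RDMA device: %s\n", ibv_get_device_name(devs[i]));
        ibv_free_device_list(devs);
        return 0;
    }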

Mellanox, the leading InfiniBand vendor, introduced Fourteen Data Rate (FDR) 56 Gbps InfiniBand adapters, switches and cables last year, Sparks said.

Former No. 2 InfiniBand switch vendor QLogic sold its TrueScale InfiniBand assets to Intel Corp. earlier this year. QLogic attributed the decision to its desire to focus on converged networking, enterprise Ethernet and SAN products. Intel cited its commitment to the HPC community and its vision of fabric architectures that can achieve a quintillion floating-point operations per second (an exaFLOPS) by 2018.


This was first published in March 2012
