Fibre Channel is the most popular networking choice with flash-based storage

Fibre Channel is the top storage networking technology choice for customers of all-flash arrays and hybrid systems, according to a survey of storage vendors.

Fibre Channel is the most popular networking technology in use with flash-based storage arrays, according to the vendors who sell them.

Eight of the 11 storage vendors who responded to a TechTarget survey said the majority of their enterprise customers use Fibre Channel (FC) switches and adapters with their all-flash arrays and hybrid systems, which combine solid-state drives (SSDs) and hard disk drives. EMC and Hewlett-Packard said many customers upgrade to 16 gigabits per second (Gbps) FC with the transition to flash storage.
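
As a rough illustration of why flash prompts those upgrades, the Python sketch below compares how many ports of each link type would be needed to carry a given sustained workload. The per-link throughput figures and the 3,000 MB/s workload are illustrative assumptions, not numbers from the survey or from any vendor.

    # Back-of-envelope comparison of approximate usable bandwidth per link.
    # All figures are assumptions for illustration; real throughput also
    # depends on encoding, protocol overhead, queue depth and host CPU.
    LINK_MB_PER_S = {
        "8 Gbps FC": 800,
        "16 Gbps FC": 1600,
        "10 GbE": 1250,
        "40 GbE": 5000,
    }

    def ports_needed(target_mb_per_s, link):
        """Ports of a given type needed to carry a sustained workload."""
        per_port = LINK_MB_PER_S[link]
        return -(-target_mb_per_s // per_port)  # ceiling division

    workload = 3000  # MB/s an all-flash array might sustain (assumed figure)
    for link in LINK_MB_PER_S:
        print(f"{link}: {ports_needed(workload, link)} port(s) for {workload} MB/s")

On those assumptions, each 16 Gbps FC port carries what two 8 Gbps ports would, which is the basic arithmetic behind the upgrade path EMC and Hewlett-Packard describe.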

Only two flash system vendors, SolidFire and Tegile Systems, said most of their customers use 10 Gigabit Ethernet (GbE). Nimbus Data Systems claimed an even split between FC, Ethernet and InfiniBand.

Nimbus CEO Thomas Isakovich said most customers attach the company's Gemini all-flash arrays to their existing infrastructure, but the fastest network technology is crucial to getting the maximum possible performance. He claimed users with mission-critical databases frequently opt for an ultra-low-latency architecture with Remote Direct Memory Access (RDMA) capabilities, such as InfiniBand or 40 GbE with iSCSI Extensions for RDMA (iSER).

But most storage vendors do not currently support InfiniBand and 40 GbE with their flash systems. Even the few vendors that do support them find their customers sticking with the familiar FC SAN architectures they have been using for years.

Violin Memory supports FC, Ethernet and InfiniBand but said approximately 80% of its customers want FC with their all-flash arrays. IBM sees deployments of 40 Gigabit InfiniBand with its FlashSystem 840 in high-performance computing environments, but the majority of the company's flash customers also go with FC storage networking.

"A few clients cross over from Fibre Channel to Ethernet and vice versa, but they are in the small minority," Kevin Powell, business line program director of IBM FlashSystem, wrote in an email. "In most cases, clients choose the interface that fits best with their existing environment -- either the current speed iteration or the next step up."

"Because flash can drive more IOPS with lower latency, you may find more incentive to move to faster Fibre Channel speeds," Erik Ottem, director of product marketing at Violin, wrote in an email.

Mark Welke, senior director of product marketing at NetApp, estimated that 80% to 90% of the company's flash customers use FC with databases. He said most move up to 16 Gbps FC, the highest available speed, with NetApp's EF-Series and FAS arrays.

"It's not only the performance," Welke said. "It's also reliability. Dynamic multi-pathing capability is built into the FC. It's very important. So that combination and the fact that it's been a standard out there for years make it the likely choice that customers are using."

Networking refreshes vary based on performance, storage requirements

Jon Siegal, vice president of product marketing in EMC's core technologies division, said application and use case tend to dictate the storage networking type. EMC's VMAX arrays support 16 Gbps FC and 10 GbE, and the company's XtremIO all-flash arrays currently support 8 Gbps FC and iSCSI over 10 GbE. Approximately 80% of EMC's flash-based storage customers use FC, and the remainder use iSCSI over 10 GbE, according to the company.

Siegal advised customers to evaluate their existing SAN networking technologies when adding flash-enabled storage systems to determine if they can fully support the higher bandwidth and performance. He said it might be prudent to refresh the SAN switches at the same time.

Early beta customers of Oracle's FS1 Series storage system saw flash shift performance bottlenecks to the surrounding network infrastructure. The switches and adapters had trouble pushing the load fast enough into the FS1 flash arrays to exercise the SSDs in the storage pool, according to Bob Maness, vice president of product management at Oracle.

Maness said 16 Gbps FC, InfiniBand and 10 GbE can all work well with flash storage. But he said 8 Gbps FC starts to limit what users can get out of flash as they scale out the system. He said most customers follow the natural progression from 8 Gbps FC to 16 Gbps FC.
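
A simple way to see Maness' point: aggregate SSD throughput grows with every drive added to the pool, but delivered throughput is capped by the host-facing link. The Python sketch below assumes roughly 450 MB/s of sustained throughput per SSD and the same approximate usable link figures as the earlier sketch; they are illustrative assumptions, not Oracle FS1 measurements.

    # Aggregate drive throughput, capped by the host-facing link (assumed figures).
    def delivered_mb_per_s(num_ssds, ssd_mb=450, link_mb=800):
        return min(num_ssds * ssd_mb, link_mb)

    for link_name, link_mb in (("8 Gbps FC", 800), ("16 Gbps FC", 1600)):
        for ssds in (2, 4, 8):
            mb = delivered_mb_per_s(ssds, link_mb=link_mb)
            print(f"{link_name}, {ssds} SSDs: ~{mb} MB/s delivered")

On those numbers, an 8 Gbps port is saturated by the second SSD, while a 16 Gbps port absorbs roughly twice as many drives before hitting the same wall.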

Dell, however, claimed that 8 Gbps FC tends to perform well for most of its customers. Upgrades more commonly happen with iSCSI networks, where 1 Gbps Ethernet is more limiting to overall performance, said Bob Fine, director of product management for Dell Storage.

Whether they use FC or Ethernet, Pure Storage's customers tend to refresh their core networking infrastructures on a schedule that does not align with updates of their storage infrastructure, according to Vaughn Stewart, chief technology evangelist at the company. Stewart said there is a correlation with converged infrastructure, such as Pure's FlashStack offering built with Cisco.

Ethernet attracts the midmarket

The breakdown of storage networking technologies among Pure's customers is 80% FC and 20% iSCSI over Ethernet, with iSCSI the top choice of smaller customers that tend to buy entry-level systems, according to Stewart.

The dominance of Ethernet-based storage networking with the flash customers of Tegile Systems stems in part from the company's tendency to sell to the midmarket, according to Russ Fellows, a senior partner and analyst at Evaluator Group. A major differentiator is Tegile's support for FC, iSCSI, NFS and CIFS/SMB3 on every array.

Rob Commins, vice president of marketing at Tegile, estimated that 70% of the company's customers were using Ethernet storage networking before moving to flash-based systems. He said half of the remaining customers stay on FC, and the other half move to 10 GbE iSCSI.

SolidFire's initial focus was large cloud providers that tend to have newer data centers with IP-only infrastructures, and the company didn't add support for FC until last year, noted George Crump, president and founder of Storage Switzerland.

Evaluator Group's Fellows added, via an email, "Since each of their systems is relatively small, using iSCSI doesn't slow things down. They are meant to be accessed as pools of four to 10 nodes. SolidFire is scale-out, so each node really doesn't need two 16 Gbps FC or even two 10 GbE links."
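
Fellows' point is easy to put in numbers: in a scale-out pool, each node carries only its slice of the cluster workload. The sketch below assumes a 4,000 MB/s aggregate workload and dual 10 GbE links at raw line rate; both figures are illustrative assumptions, not SolidFire specifications.

    # Per-node bandwidth in a small scale-out pool (assumed figures).
    DUAL_10GBE_MB_PER_S = 2 * 1250  # raw line rate of two 10 GbE links

    cluster_workload_mb = 4000      # MB/s across the whole pool
    for nodes in (4, 7, 10):
        per_node = cluster_workload_mb / nodes
        headroom = DUAL_10GBE_MB_PER_S / per_node
        print(f"{nodes} nodes: ~{per_node:.0f} MB/s per node, "
              f"{headroom:.1f}x headroom on dual 10 GbE")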

SolidFire's all-flash arrays speak iSCSI natively for both the client connectivity and the communication between nodes, and the vast majority of customers use 10 GbE top-of-rack switches to deploy iSCSI directly out to virtualized hosts or physical servers connected to the storage array, according to Jeramiah Dooley, a cloud architect at SolidFire.

Dooley said customers are trying to simplify their networking and use a single 10 GbE fabric to handle data traffic and storage traffic. Even the company's FC customers are not looking to expand or upgrade their FC investments, he claimed.

"First and foremost, iSCSI over 10 GbE is the primary design that we recommend for all customers," he said.

Other flash storage vendors are more hesitant to make strong recommendations. They support FC and Ethernet, and many are happy to offer advice based on the individual customer's environment, but they don't advocate a storage networking preference.

"We're certainly not going to tell them to rip out their infrastructure if they're able to achieve the performance results with what they have," NetApp's Welke said.
