
Re-evaluate network with move to enterprise flash storage

A move to enterprise flash storage is a good time to review the networking decisions made years ago with slower disk systems in mind.

Moving to enterprise flash storage? That might be an opportune time to take a look at the storage networking decisions made years ago with slower disk-based systems in mind.

Flash storage can drive greater IOPS at lower latency and enable users to consolidate more virtual servers, databases and enterprise applications onto a single array. But IT organizations need to make sure they don't shift the bottleneck from the backend storage to the surrounding networking infrastructure.

Many storage consultants, analysts and vendors advise a minimum of 8 Gigabits per second (Gbps) Fibre Channel (FC) and, if possible, 16 Gbps, also known as Gen 5, to take full advantage of flash storage. With Ethernet, they recommend 10 Gigabit Ethernet (GbE) and suggest casting an eye toward 40 GbE down the road, when more storage systems support it. They expect ultrafast but less familiar InfiniBand, with raw speeds of 40 Gbps and 56 Gbps, to see limited adoption, primarily with the most latency-sensitive workloads.
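To put those figures in context, a rough back-of-the-envelope calculation converts a workload's IOPS and I/O size into bandwidth and compares the result against a single link of each type. The Python sketch below uses commonly cited approximate payload rates and an invented example workload; all of the numbers are assumptions for illustration, not measurements or vendor specifications.

```python
# Rough sizing sketch: convert IOPS x block size into MB/s and compare
# against approximate per-link payload rates for the interconnects
# discussed above. All figures are illustrative approximations that
# ignore protocol overhead and assume a single direction of traffic.

LINK_PAYLOAD_MBPS = {
    "8 Gbps FC":  800,     # ~800 MB/s usable per direction
    "16 Gbps FC": 1600,
    "10 GbE":     1250,    # raw line rate; iSCSI/TCP overhead reduces this
    "40 GbE":     5000,
    "56 Gbps IB": 6800,    # FDR InfiniBand, approximate
}

def required_mbps(iops: int, block_size_kb: int) -> float:
    """Bandwidth a workload needs, in MB/s, from IOPS and I/O size."""
    return iops * block_size_kb / 1024

# Hypothetical consolidated workload: 200,000 IOPS at 8 KB per I/O.
demand = required_mbps(200_000, 8)
print(f"Workload needs roughly {demand:,.0f} MB/s")
for link, capacity in LINK_PAYLOAD_MBPS.items():
    ports = -(-demand // capacity)  # ceiling division
    print(f"  {link:>11}: {ports:.0f} port(s) at ~{capacity} MB/s each")
```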

"My general rule with respect to storage performance is that you don't want the components in the middle to be the bottleneck," said Dennis Martin, president and founder of Demartek LLC, an analyst organization that operates an on-site test lab in Golden, Colo.

Martin said any of the existing types of storage switches and adapters can work well with all-flash arrays (AFAs), as long as they can support the needed bandwidth and latency. He said the latencies of AFAs, at hundreds of microseconds, are approaching those of switches and adapters, with latencies ranging from low double-digit microseconds to hundreds of nanoseconds.
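A quick latency budget makes Martin's point concrete. The figures below are placeholders picked from the ranges he cites, an all-flash array at a few hundred microseconds and fabric components at a few microseconds or less, chosen only to show the proportions rather than to report measured values.

```python
# Illustrative latency budget: how much of an I/O's round trip the fabric
# contributes when the array itself responds in a few hundred microseconds.
# All numbers are assumed placeholders within the ranges quoted above.

array_latency_us = 300.0                  # all-flash array service time
fabric_components_us = [
    ("host HBA/NIC",     2.0),
    ("switch hop",       1.0),
    ("target port",      2.0),
]

fabric_total = sum(us for _, us in fabric_components_us)
total = array_latency_us + fabric_total

print(f"Array service time:   {array_latency_us:6.1f} us")
for name, us in fabric_components_us:
    print(f"{name + ':':<22}{us:6.1f} us")
print(f"Fabric share of total latency: {fabric_total / total:.1%}")
# With a disk array at ~5,000 us the same fabric would be a rounding error;
# at flash latencies it starts to matter.
```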

Demartek runs AFAs with 8 Gbps and 16 Gbps FC switches and 10 GbE switches. Martin said the lab can easily hit the line rate of one or two host ports from a single server running bandwidth-sensitive workloads against an all-flash enterprise storage array.

He recommended host bus adapters (HBAs) and network interface cards (NICs) of at least 8 Gbps when connected to all-flash enterprise storage. Martin said that in addition to higher performance, the newer HBAs and NICs generally have better chipsets and firmware than prior generations.

As flash matures, bandwidth demand grows

In the early days of enterprise flash storage, IT organizations tended to use AFAs for a single IOPS-hungry application. As they start to run multiple databases and applications on all-flash storage systems, they may be more likely to use the extra network bandwidth.

Russ Fellows, senior partner and analyst at Evaluator Group, said the minimum speed that a flash storage user needs can vary. He said a shift to 4 Gbps FC might be sufficient for a company that was using 1 GbE.

"It makes no sense to deploy all-flash over a one-gig [Ethernet] interface. But it also may not make sense to upgrade from eight-gig Fibre Channel to 16 just to accommodate a single flash system, particularly if the storage network is not creating bottlenecks," Fellows wrote via an email.

Scott Sinclair, a storage analyst at Enterprise Strategy Group, said the number and sophistication of the applications can have an impact on the storage network. He recommended that enterprise flash storage users invest in performance monitoring tools to help them understand network performance and identify bottlenecks before they upgrade to a higher-speed networking technology.
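As a minimal sketch of that kind of bottleneck check, the snippet below compares observed peak port throughput against nominal link capacity before concluding that a faster network is needed. The port names, capacities and observed rates are invented for illustration; in practice the data would come from switch counters or a monitoring tool.

```python
# Flag storage network ports that peak near line rate -- the sort of
# evidence-gathering step recommended before buying faster links.
# All data here is invented for illustration.

NOMINAL_MBPS = {"8gfc": 800, "16gfc": 1600, "10gbe": 1250}

def saturated_ports(samples, threshold=0.8):
    """Return ports whose peak throughput exceeds `threshold` of nominal."""
    flagged = []
    for port, link_type, peak_mbps in samples:
        utilization = peak_mbps / NOMINAL_MBPS[link_type]
        if utilization >= threshold:
            flagged.append((port, link_type, utilization))
    return flagged

observed = [
    ("switch1/port3",  "8gfc",  720),   # ~90% of nominal: upgrade candidate
    ("switch1/port7",  "8gfc",  310),
    ("switch2/port12", "10gbe", 400),
]

for port, link, util in saturated_ports(observed):
    print(f"{port} ({link}) peaks at {util:.0%} of line rate")
```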

Alan Weckel, vice president at Dell'Oro Group, said for the most part, flash storage is currently in use with 8 Gbps FC and 10 GbE. One especially demanding workload that could drive the adoption of higher speed networking technology is big data analytics, where users will do whatever they can to get the most out of their storage and network, Weckel said.

George Crump, president and founder of Storage Switzerland, recommended that IT organizations with existing FC investments stay with FC for flash. He said Ethernet users should evaluate their performance requirements; if they're not pushing the limits of 10 GbE, which he estimated covers roughly 90% of them, they can stick with iSCSI. The remaining 10% that are pushing the envelope of 10 GbE should consider an investment in FC or look to the new wave of 40 GbE technologies, according to Crump.
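Crump's advice amounts to a simple decision rule; the toy function below just makes that flow explicit and is not a sizing tool.

```python
# A toy encoding of Crump's heuristic as described above.

def crump_recommendation(has_fc_investment: bool, pushing_10gbe_limits: bool) -> str:
    if has_fc_investment:
        return "Stay with Fibre Channel for flash"
    if not pushing_10gbe_limits:
        return "Stick with iSCSI over existing 10 GbE"
    return "Consider FC or the new wave of 40 GbE"

print(crump_recommendation(has_fc_investment=False, pushing_10gbe_limits=True))
```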

Crump said, although InfiniBand is "almost tailor-made for flash," the technology will have a difficult time catching on without a significant installed base. He said IT organizations tend to upgrade their storage networking infrastructure at a glacier-like pace.

"The most important thing in your networking protocol decision is what you have today and what you are comfortable with," Crump said.

Predictions vary on the future of FC

Crump grew up in the FC era and acknowledged that he likes FC with solid-state storage. He claimed that FC is better tuned for flash and unburdened by the overhead associated with Ethernet. Crump predicted that once 32 Gbps FC is available, FC will perform at least as well as, if not significantly faster than, 40 GbE. He added that the Ethernet switches and converged network adapters that enable lossless Ethernet through data center bridging enhancements carry a price tag that starts to approach the cost of FC network gear.
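The comparison Crump draws hinges on protocol overhead, which nominal line rates alone don't settle. The sketch below applies assumed efficiency factors to show how much the outcome depends on them; both the factors and the resulting numbers are assumptions, not benchmarks.

```python
# How assumed protocol efficiency changes the nominal 32 Gbps FC vs 40 GbE
# comparison. The payload rates are the usual approximations; the
# efficiency factors are placeholders, not measured values.

options = {
    # name: (approx. payload MB/s per direction, assumed protocol efficiency)
    "32 Gbps FC":     (3200, 0.95),
    "40 GbE (iSCSI)": (5000, 0.80),
}

for name, (mbps, efficiency) in options.items():
    print(f"{name:>15}: ~{mbps * efficiency:,.0f} MB/s effective (assumed)")
# Different efficiency assumptions can flip the ranking, which is why
# measured results matter more than line rates here.
```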

By contrast, Marc Staimer, president of Dragon Slayer Consulting, said he never recommends new implementations of FC because of the dearth of knowledge on how to operate an FC fabric. He noted that FC revenue and port counts are on a slow decline and predicted that FC will be no more than a footnote in 10 years. He said flash storage might keep FC from a rapid descent, but it will not stop the gradual downward trajectory.

Staimer said if latency is the most important consideration, users should go with server-based flash. Those using networked storage should first consider Remote Direct Memory Access (RDMA) over InfiniBand and RDMA over Converged Ethernet (RoCE) before FC, according to Staimer.

"In the short term, you're going to see more Fibre Channel upgrades because it's easier and it's well known," Staimer said. "Long term, you'll find almost all storage systems that have flash support converged Ethernet, which is going to give you even lower latency than Fibre Channel."

Chris DePuy, vice president at Dell'Oro, said that although FC switches have moved to 16 Gbps on the uplink side, the number of storage array ports at 16 Gbps is currently small. He said users with lots of small files can continue to use lower-speed ports and equip their arrays with more ports. Higher-speed connections such as 16 Gbps help transfer large files in a short amount of time, he said.

"The 16-gig ports are downward compatible to 8-gig, so you can take a 16-gig switch and plug it into an 8-gig array," DePuy added.

Brocade Communications Systems has a Solid State Ready Program for flash and hybrid array vendors to test systems with its FC and Ethernet switches. Brocade said that when customers move to all-flash or hybrid arrays, they have in many cases been upgrading their networks to 16 Gbps FC.

"Most people who are deploying flash are looking for very high levels of performance -- high numbers of IOPS, very low latency -- and 4 Gbps FC absolutely does not do it," said Jack Rondoni, vice president of storage networking at Brocade. He said customers might be able to get away with 8 Gbps FC, "but the vast majority of people that we're seeing in those deployments are using 16 Gbps FC."

Rajeev Bhardwaj, vice president of product management for data center switching and storage products at rival Cisco Systems, said customers tend to deploy enterprise flash storage in separate pods and buy 16 Gbps FC because they're deploying a new infrastructure and the newer technology is similar in cost. But Bhardwaj also sees flash in use with 8 Gbps FC, the minimum speed he recommends.

Use of iSCSI over Ethernet with flash starts in the midmarket, according to Bhardwaj.

"When you look at flash in the enterprise segment, we primarily see Fibre Channel as the connectivity of choice," Bhardwaj said. "Fibre Channel is proven. It's robust. It's mature. The ecosystem is there."

