Figuring out when 16 Gbps Fibre Channel storage networking makes sense

Moving to 16 Gbps FC makes sense for IT shops with new, heavily virtualized servers and solid-state storage, a test firm president advises.

Does a move to 16 Gbps Fibre Channel storage networking make sense when some SAN users are hard-pressed to find a need for the bandwidth that 8 Gbps offers?

Even though 16 Gbps Fibre Channel (FC) products have been shipping for about two years, the question is still a relevant one for many IT organizations, according to Dennis Martin, founder and president of Arvada, Colo.-based Demartek LLC, a computer industry analyst firm that operates an on-site test lab. Martin spoke with SearchStorage to discuss the pros and cons of moving to 16 Gbps FC based on extensive testing of FC storage networking technology.

Fibre Channel users often upgrade as part of refresh cycles, as the price differential for the next generation of the storage networking technology drops. Will they get much benefit by going from 8 Gbps FC to 16 Gbps FC?

Dennis Martin: The short answer would be yes. We've tested a lot of 16 Gbps FC equipment from many companies. Obviously you get twice the amount of bandwidth in the same number of cables, same number of adapters, that sort of thing. Also, 16 Gbps is a little bit more power efficient. Latency is a little bit better with 16 Gbps -- and not just half of the latency of 8 Gbps FC, but actually better than that. Of course, 16 Gbps is certainly compatible with the new servers, so if you're in a refresh cycle, you want to get it.
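
For context on where that "twice the bandwidth" figure comes from: the jump to 16 Gbps FC also changed the line encoding from 8b/10b to the more efficient 64b/66b, so usable throughput doubles even though the raw line rate rises by less than 2x. The back-of-the-envelope calculation below uses the published line rates and encodings; it is an editorial illustration, not a figure from Demartek's testing.

```python
# Back-of-the-envelope throughput comparison for 8 Gbps vs. 16 Gbps FC.
# Line rates and encoding efficiencies come from the FC standards; real
# throughput also loses a little to framing, so treat these as upper bounds.

links = {
    # name: (line rate in Gbaud, encoding efficiency)
    "8GFC":  (8.5,    8 / 10),   # 8b/10b encoding
    "16GFC": (14.025, 64 / 66),  # 64b/66b encoding
}

for name, (gbaud, efficiency) in links.items():
    # Payload bits per second -> megabytes per second, per direction
    mb_per_s = gbaud * 1e9 * efficiency / 8 / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s per direction")

# Prints roughly 850 MB/s for 8GFC and 1700 MB/s for 16GFC -- commonly
# quoted as 800 MB/s and 1600 MB/s once framing overhead is counted.
```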

There are some diagnostic improvements with 16 Gbps FC technology. There are retimers and transmitter training functions in the optics used for 16 Gbps FC that improve signal characteristics and reduce the effects of electronic dispersion. It's kind of technical. They just make it run better and faster. In addition, Brocade's 16 Gbps FC switches support a new "D_Port" type that can perform diagnostic functions, which is handy for testing optics, cables, etc. before placing them into the production fabric. Overall, there's more diagnostic capability now than there was with previous generations of Fibre Channel.
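
As a mental model of what a D_Port-style check covers -- electrical and optical loopback tests plus cable length and latency measurement, run before a port joins the fabric -- here is a hypothetical sketch. Every name in it is invented for illustration and does not correspond to any real switch management API.

```python
# Hypothetical pre-production link qualification, in the spirit of Brocade's
# D_Port diagnostics. All names here are invented for illustration; consult
# your switch vendor's actual management interface for the real commands.

from dataclasses import dataclass

@dataclass
class LinkTestResult:
    electrical_loopback_ok: bool   # did the electrical loopback test pass?
    optical_loopback_ok: bool      # did the optical loopback test pass?
    measured_length_m: float       # cable length reported by the diagnostics
    link_latency_ns: float         # measured link latency

def qualify_link(result: LinkTestResult, max_length_m: float = 100.0) -> bool:
    """Clear the port for production only if both loopback tests succeeded
    and the measured cable length is within spec."""
    return (result.electrical_loopback_ok
            and result.optical_loopback_ok
            and result.measured_length_m <= max_length_m)

# Example: a 30 m cable that passed both loopback tests
result = LinkTestResult(True, True, measured_length_m=30.0, link_latency_ns=150.0)
print("Safe to move into production fabric:", qualify_link(result))
```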

For which applications, workloads or situations do you think 16 Gbps FC is most important?

Martin: If you put flash on the back end, you're going to certainly use the bandwidth that maybe you hadn't thought about before. If you just have hard drives on the back end, you might not use the bandwidth as much because the hard drives are limited in their performance compared to solid-state technology.
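
To make the contrast concrete, consider how many back-end devices it takes to saturate one 16 Gbps FC link. The per-device throughput figures below are illustrative assumptions typical of the period, not Demartek measurements.

```python
# How many back-end devices fill one 16GFC link? Per-device sequential
# throughput figures are assumptions for illustration, not test results.

LINK_MB_S = 1600  # approximate usable 16GFC bandwidth, per direction

devices = {
    "7.2K RPM hard drive": 150,  # MB/s, assumed sequential rate
    "SATA SSD":            500,  # MB/s, assumed
    "PCIe flash card":    1500,  # MB/s, assumed
}

for name, mb_s in devices.items():
    print(f"{name}: ~{LINK_MB_S / mb_s:.1f} devices saturate the link")

# Roughly 11 hard drives, 3 SATA SSDs or a single PCIe flash card can
# fill the pipe -- and random I/O makes hard drives look far slower still.
```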

Some of the smaller form-factor servers only have a limited number of slots in them. If you want to do FC and you only have one slot to play with, then you had better get as much bandwidth as you can out of that one slot. So, 16 Gbps is a great answer there.
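
The single-slot arithmetic is worth spelling out. Assuming a dual-port HBA in a PCIe 2.0 x8 slot (typical for that generation, though not specified in the interview):

```python
# Bandwidth available from one expansion slot with a dual-port HBA.
# Port counts and the PCIe 2.0 x8 figure are illustrative assumptions.

PCIE2_X8_MB_S = 4000  # ~4 GB/s per direction from a PCIe 2.0 x8 slot

for name, per_port_mb_s in [("dual-port 8GFC HBA", 800),
                            ("dual-port 16GFC HBA", 1600)]:
    total = 2 * per_port_mb_s
    share = total / PCIE2_X8_MB_S
    print(f"{name}: {total} MB/s from one slot ({share:.0%} of slot bandwidth)")

# A dual-port 16GFC HBA doubles what the slot delivers: 3200 MB/s vs. 1600.
```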

From an application standpoint, certainly there are some databases and database applications that will take as much machine as you'll give them, especially if you've got flash or a very large number of hard drives on the back end, so 16 Gbps FC is a good choice.

As we virtualize, we put more and more virtual machines (VMs) in the same server. By itself, each VM running its own application wouldn't need much bandwidth, but the aggregate of all those little VMs can use it up. Also, if you've got 100 virtual desktops in one system and they all need to access the storage, you need more bandwidth.
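
The aggregation effect is easy to quantify. With assumed per-desktop I/O rates (illustrative numbers, not Demartek data), 100 virtual desktops add up quickly:

```python
# Aggregate bandwidth demand for 100 virtual desktops. Per-desktop rates
# are assumptions chosen to show the arithmetic, not measured values.

DESKTOPS = 100

for label, per_vm_mb_s in [("steady state", 2), ("boot storm", 20)]:
    total = DESKTOPS * per_vm_mb_s
    print(f"{label}: {total} MB/s total -> "
          f"{total / 800:.2f}x 8GFC links or {total / 1600:.2f}x 16GFC links")

# Steady state barely dents one link, but a boot storm needs 2.5 8GFC
# links -- or 1.25 16GFC links -- to keep the fabric from queuing.
```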

Are there any scenarios where you would advise an FC user against the upgrade to 16 Gbps?

Martin: If you're happy with what you've got, and you're not saturating what you've got, then maybe you don't need to get it. If you have a really old server running an old app that's not critical, and it's doing fine with the old stuff, and there's no compelling reason to change that, then maybe you wouldn't upgrade. Of course, if it's an old server, it would be a waste to put a 16 Gbps card in there anyway because it wouldn't be able to use it.

But otherwise, if it's new stuff, I would go with 16 Gbps FC. In general, things do tend to fill up all available bandwidth over time. We used to think about memory in servers in kilobytes, then we finally went to megabytes, and now we're at gigabytes or talking about terabyte RAM systems. As we continue to virtualize, add solid-state storage, decrease our servers' physical size and grow the amount of data we process, 16 Gbps makes sense.
