Cisco VP: We're still into FC storage (but FCoE is doing fine)

Cisco VP dishes on the data center switching vendor's plans for 32 Gbps FC, how Fibre Channel over Ethernet is a success and why it doesn't call 16-gig FC "Gen 5."

With the release of its 16 Gbps MDS 9706 Fibre Channel director and MDS 9148S switch last week, Cisco showed it remains invested in storage networking, 12 years after entering the market.

We spoke with Rajeev Bhardwaj, vice president of product management for Cisco's data center switching, about what role the networking giant sees storage playing in today's evolving data center. Topics include Cisco's commitment to Fibre Channel (FC) storage beyond 16 Gbps, Fibre Channel over Ethernet (FCoE) adoption and why Cisco doesn't call 16-gig FC "Gen 5."

How does Cisco look at storage networking today?

Bhardwaj: We are entering an era of massive data growth. From now until 2020, the amount of data we generate will increase by 10 times. So, the obvious question is: where is the data coming from? If you look at the Internet of Things, big data, devices such as tablets and smartphones, clothing, fridges, cars -- any device that has a sensor can connect to the Internet and generate data. Some estimates say up to 32 billion devices will be connected and transmitting data.

Mobility, cloud, big data, social -- all of that is creating this unprecedented demand for data. It's a good time to be in the storage industry. From Cisco's perspective, the underlying SAN architecture has to evolve to support this massive data growth.

Where are we in that SAN evolution?

Bhardwaj: If you look at SAN architectures, there is the existing architecture -- the enterprise apps, online transaction processing, database applications and so on. This is like your centralized network storage, block and file connected by FC or Ethernet to your back-end storage. This is traditionally what most of our enterprise customers deploy for their data center applications. That architecture is here to stay and we have to support that.

But, as we go forward, we need to support unstructured data. We see big data and we see scale-out NAS. Think of them as compute nodes with embedded storage in a high-performance architecture.

The third architecture we see is object storage, powered by the cloud. This is storage for mobile devices, backup [and] archiving. [It also has] the ability to store pictures.

So, we see three distinct types of SAN architectures that have to evolve. The first one is the enterprise architecture, and that is primarily FC. The second one is big data/scale-out NAS, and the third is object storage.

What do these architectures mean for the SAN administrator?

Bhardwaj: At Cisco, we believe the fabric, or the SAN, is the common element in all three architectures. We need three key attributes in this evolution. First, most customers are looking for multiprotocol flexibility. There is FC, there is a need to do file, and there is a need to do object. Multiprotocol flexibility becomes extremely important because the data center where the storage lands will have a diversity of data, and we will need an architecture that supports all three protocols.

From a networking protocol standpoint, aren't file storage and object the same? They're both primarily Ethernet.

Bhardwaj: There really is no difference between file and object from a network protocol perspective. The only difference is customers typically deploy file storage locally -- it's in the data center. Object, on the other hand, is more remote. I could have my mobile device [and] my tablet talking object into the cloud. But they both run over the IP protocol.
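To make that distinction concrete, here is a minimal sketch (the mount point, endpoint, bucket and object key are hypothetical examples) showing that a file read and an object read both ultimately travel over IP: the file arrives through an NFS mount the operating system presents as a local path, while the object is fetched with an HTTP request to a remote, S3-style endpoint.

```python
# Minimal sketch: file vs. object access, both riding over IP.
# The NFS mount point and the object-store URL below are hypothetical examples.

import urllib.request

# File storage: an NFS export mounted locally, so the read looks like a local read,
# even though the bytes cross the data center LAN.
with open("/mnt/nfs_share/reports/q3.csv", "rb") as f:
    file_bytes = f.read()

# Object storage: addressed remotely over HTTP(S), typically via an S3-style REST API,
# so the client can just as easily be a tablet or phone outside the data center.
url = "https://objects.example.com/my-bucket/photos/vacation.jpg"
with urllib.request.urlopen(url) as resp:
    object_bytes = resp.read()

print(len(file_bytes), "bytes from the file share,", len(object_bytes), "bytes from the object store")
```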

If everything in most data centers is going to hit the cloud, data centers become mission-critical by definition. So performance and availability become extremely important as we look to combine the LAN and the SAN. No single point of failure is a given, so availability becomes a unique attribute. For performance, we go from 10-gig to 40-gig to 100-gig on the Ethernet side, and from 8-gig to 16-gig to, in the future, 32-gig on the FC side.

The third aspect is scale. If you go back to 2002, a physical server would connect to a SAN switch and your scale was dictated by the number of physical servers connected to the switch. Now, increasingly, we see embedded blade switches and virtual machines, so what ends up happening is [that] we see a lot more devices coming into the infrastructure. The scale becomes important not only from a physical standpoint, but also as a logical scale.

And, because of massive amounts of data and because IT budgets are compressed, we need operational simplification in terms of managing the infrastructure and [ensuring] we have automation so customers can rapidly provision storage.

How does this change the way people buy and deploy storage connectivity?

Bhardwaj: Customers are looking for agility. They want solutions that are pre-validated [and] pre-tested. How do we take compute, network and storage, and combine them into pre-validated, pre-tested solutions? Examples would be Vblock partnerships with EMC and VMware, and FlexPod partnerships with NetApp and VMware. This is where we've taken our SAN architecture and integrated it into the overall data center architecture.

Cisco's only FC storage-switching rival, Brocade, has talked about having 32-gig FC switches by the end of 2015. Does Cisco have a timetable on when it expects to deliver 32-gig FC?

Bhardwaj: We are monitoring the market. We are in discussions with customers and based on market needs, we'll look at 32-gig as the market evolves.

Brocade and the HBA vendors call their 16-gig FC storage switching Gen 5, and refer to 32-gig as Gen 6. Why hasn't Cisco adopted that terminology?

Bhardwaj: We don't want to confuse the market. Our FC customers have been through the transition from 1-gig to 2-gig, to 4-gig, to 8-gig and now 16-gig. It's easier if we don't change that. We call it 16-gig FC, and the next rev will be 32-gig FC.

A few years back, Cisco was pushing FCoE as a converged protocol for storage networks. Has the FCoE market played out the way you expected?

Bhardwaj: Within our UCS blade systems, FCoE is the default protocol for block storage on the top-of-rack switches. Thirty percent of our Nexus switches have FCoE licenses. We have now refreshed our FCoE capability on the Nexus 7000 modular unified platform, and we've added FCoE on our MDS director platform. From Cisco, you now see end-to-end infrastructure that has FCoE from server access, all the way through the network, to the storage.

Some of our largest customers have deployed end-to-end FCoE. We want to give customers flexibility and choice. We will continue to have a premier FC platform for customers who are happy with it and want to continue to deploy FC storage. There is still a huge, installed base of FC. Customers are looking for both -- they want FCoE, but at the same time, they want bridging capabilities. So, even if my servers are FCoE, my back-end storage is FC.

How much is solid-state storage driving FC storage adoption?

Bhardwaj: We're seeing an increased deployment of FC with flash. They go hand in hand. One of the key benefits of flash is high performance. Typically, flash gets deployed with Tier-1 applications where performance is extremely important.

Cisco is a key player in software-defined networking, but where do you fit with software-defined storage?

Bhardwaj: It's part of how we have to provide operational simplification. For instance, we have hooks built into OpenStack to help customers provision their storage switching in cloud-based solutions. Another thing we've done is integrate with EMC's ViPR. If customers are deploying ViPR for software-defined storage, they need the right set of hooks to configure and manage MDS switches. That piece of work is already there today, and we're doing the same work with IBM as well.
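As a rough illustration of the kind of "hook" being described -- an orchestrator automatically zoning a host to an array when storage is provisioned -- here is a sketch in Python. The session class, method names and WWPNs below are hypothetical stand-ins, not Cisco's, OpenStack's or ViPR's actual APIs; real integrations go through vendor drivers, such as the Fibre Channel zone manager drivers in OpenStack Cinder.

```python
# Hypothetical sketch of an automated FC zoning "hook" an orchestrator might call
# when a new volume is attached. Class and method names are illustrative only;
# real deployments use vendor drivers (e.g., a Cinder FC zone manager driver).

from dataclasses import dataclass


@dataclass
class ZoneRequest:
    fabric: str            # which VSAN/fabric to configure
    initiator_wwpn: str    # host HBA port WWN
    target_wwpn: str       # array port WWN


class SwitchSession:
    """Stand-in for a driver that talks to the switch (SSH, NX-API, etc.)."""

    def __init__(self, host: str):
        self.host = host

    def create_zone(self, name: str, members: list[str]) -> None:
        # A real driver would push configuration here; this sketch just logs the intent.
        print(f"[{self.host}] zone {name}: {', '.join(members)}")

    def activate_zoneset(self, fabric: str) -> None:
        print(f"[{self.host}] activating zoneset on fabric {fabric}")


def provision_zone(session: SwitchSession, req: ZoneRequest) -> None:
    """Create and activate a single initiator-target zone for a volume attach."""
    zone_name = (
        f"z_{req.initiator_wwpn.replace(':', '')[-6:]}"
        f"_{req.target_wwpn.replace(':', '')[-6:]}"
    )
    session.create_zone(zone_name, [req.initiator_wwpn, req.target_wwpn])
    session.activate_zoneset(req.fabric)


if __name__ == "__main__":
    sess = SwitchSession("mds-switch-01.example.com")   # hypothetical switch address
    provision_zone(sess, ZoneRequest(
        fabric="VSAN100",
        initiator_wwpn="20:00:00:25:b5:aa:00:01",
        target_wwpn="50:06:01:60:3e:a0:12:34",
    ))
```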

Next Steps

Should you consider FCoE for your networking?

Tips for choosing a storage protocol

FC storage remains protocol of choice


Join the conversation


Is FCoE a better networking choice than Fibre Channel?
Yes
Guests, why do you think it's the better networking choice?
yes
Have any of your companies actually implemented FCoE? If so--how?

Hi Folks,


I read an article 10 years back saying that Fibre Channel was dying, but it did not! FC is well known for being connection-oriented and is trusted as a reliable protocol.


FCoE gets its edge from 64b/66b encoding on 10-gig Ethernet (legacy FC speeds used only 8b/10b), but there is still overhead from encapsulating FC frames in Ethernet frames.


We can discuss this further once there are more postings on the subject.
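(For readers weighing the encoding overheads and link speeds discussed in this thread, here is a rough back-of-the-envelope comparison. It uses the published line rates and accounts only for line encoding, not framing or protocol overhead, so the numbers are approximate.)

```python
# Rough effective-throughput comparison of common SAN link speeds.
# Line rates are the published baud rates; framing/protocol overhead is ignored,
# so these are approximations, not exact payload figures.

links = [
    # (name, line rate in Gbaud, data bits, total bits on the wire)
    ("8GFC  (8b/10b)",   8.500,   8, 10),
    ("10GbE (64b/66b)", 10.3125, 64, 66),
    ("16GFC (64b/66b)", 14.025,  64, 66),
]

for name, gbaud, data_bits, line_bits in links:
    throughput_gbps = gbaud * data_bits / line_bits
    overhead_pct = (1 - data_bits / line_bits) * 100
    print(f"{name}: ~{throughput_gbps:.1f} Gbps usable "
          f"({overhead_pct:.0f}% of the line rate spent on encoding)")
```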

Seriously? FCoE is an inferior choice for mission-critical storage. While it serves a purpose for top-of-rack server I/O consolidation deployments (majority Cisco UCS), it is dead for end-to-end storage networking deployments. Customers continue to buy Fibre Channel because of reliability, performance, and scalability. FCoE is built on Ethernet, which is inherently less reliable. 10GbE is not faster than 16GFC (40GbE for FCoE isn't being deployed). FC has proven scalability with deployments that scale into the tens of thousands of ports. FCoE high-port-density deployments are few and far between. Customers are voting with their wallets and picking Fibre Channel or iSCSI for their block-based storage, not FCoE.
And BTW, the industry standard name for 32/128 Gb Fibre Channel is Gen 6 (it's not just 32Gb) http://fibrechannel.org/library/2014/02/fibre-channel-industry-association-announces-development-of-industrys-fastest-storage-networking.html

It is a protocol that wraps Fibre Channel frames in Ethernet frames and provides the server-side connection for an FCoE network.

Many Cisco UCS systems connect to a SAN. FC vs. FCoE, that is the question.

Cisco UCS by design is CNA via FCoE to the FI, and can continue out as FCoE or FC. On the UCS platform, the FCoE from CNA to FI is transparent and seems to work well. It does throw off the standard SNMP OIDs used for performance, but that is no big deal as NetFlow is part of UCSM 2.2x. We will see if that breaks down the CNA traffic through the FI.

So FC vs. FCoE? At around 2.1x, Policy Based Zoning came along on UCS. Policy Based Zoning is something Brocade should have figured out long ago. My preference is connecting the FIs to the FC SAN via FC (I'll spare the fine details on this and the different options) and using Policy Based Zoning. Brocade fabrics work fine but cannot benefit from Policy Based Zoning at the FI (blah).

FCoE, in my opinion (putting on my flame suit), is a solution created for a problem that does not exist. Deploying the QoS, etc., on a Nexus 5k or 7k adds extra complexity that does not serve anyone. Most network engineers would avoid the unneeded complexity and get some MDS switches. It is like "redistribution" between OSPF and EIGRP: it looks great on paper, but in the real world a skillfully placed static route usually works just fine. The extra complexity would make troubleshooting, etc., a nightmare. Since a SAN device will have a set number of ports for FCoE or FC, I don't see the reduced cabling. This is all from the Cisco UCS perspective.

I linked to this on my blog here: http://realworlducs.com/cisco-vp-were-still-into-fc-storage-but-fcoe-is-doing-fine/

cowboycraig,
If you don't mind an alternate view:

I looked at some plans for new data center space today. Up to maybe 75 racks. IP and storage.
I guarantee you, if I could cut in half the number of cable/fiber runs and the number of networking/SAN devices, and it could be done with technology that was not "new" ... I would, in a heartbeat.
It's what we do with UCS, and have for many a year with no problems.
FCoE is dead beyond the first hop. Very few storage companies support FCoE as a target, and most are discontinuing it due to lack of market adoption. Also, FCoE's latency characteristics don't even touch FC's when it comes to performance in SSD environments.
That is the promise. The problem is that it doesn't do it well. Issues like single-host network pause in FCoE allow a slow-drain device to pause the entire SAN. That's simply unacceptable for most enterprise environments.

When a protocol is <3% of overall storage attach, commitment to fixing that glaring issue is not a priority.
