With the release of its 16 Gbps MDS 9706 Fibre Channel director and MDS 9148S switch last week, Cisco showed it remains invested in storage networking, 12 years after entering the market.
We spoke with Rajeev Bhardwaj, vice president of product management for Cisco's data center switching, about what role the networking giant sees storage playing in today's evolving data center. Topics include Cisco's commitment to Fibre Channel (FC) storage beyond 16 Gbps, Fibre Channel over Ethernet (FCoE) adoption and why Cisco doesn't call 16-gig FC "Gen 5."
How does Cisco look at storage networking today?
Bhardwaj: We are entering an era of massive data growth. From now until 2020, the amount of data we generate will increase by 10 times. So, the obvious question is: where is this data coming from? If you look at the Internet of Things, big data, devices such as tablets and smartphones, clothing, fridges, cars -- any device that has a sensor can connect to the Internet and generate data. Some estimates say up to 32 billion devices will be connected and transmitting data.
Mobility, cloud, big data, social -- all of that is creating this unprecedented demand for data. It's a good time to be in the storage industry. From Cisco's perspective, the underlying SAN architecture has to evolve to support this massive data growth.
Where are we in that SAN evolution?
Bhardwaj: If you look at SAN architectures, there is the existing architecture -- the enterprise apps, online transaction processing, database applications and so on. This is like your centralized network storage, block and file connected by FC or Ethernet to your back-end storage. This is traditionally what most of our enterprise customers deploy for their data center applications. That architecture is here to stay and we have to support that.
But, as we go forward, we need to support unstructured data. We see big data and we see scale-out NAS. Think of them as compute nodes with embedded storage in a high-performance architecture.
The third architecture we see is object storage, powered by the cloud. This is storage for mobile devices, backup [and] archiving. [It also has] the ability to store pictures.
So, we see three distinct types of SAN architectures that have to evolve. The first one is the enterprise architecture, and that is primarily FC. The second is big data/scale-out NAS, and the third is object storage.
What do these architectures mean for the SAN administrator?
Bhardwaj: At Cisco, we believe the fabric, or the SAN, is the common element in all three architectures. We need three key attributes in this evolution. First, most customers are looking for multiprotocol flexibility. There is FC, there is a need to do file, and there is a need to do object. Multiprotocol flexibility becomes extremely important because the data center where the storage sits will handle a diversity of data, and we will need an architecture that supports all three protocols.
From a networking protocol standpoint, aren't file storage and object the same? They're both primarily Ethernet.
Bhardwaj: There really is no difference between file and object from a network protocol perspective. The only difference is customers typically deploy file storage locally -- it's in the data center. Object, on the other hand, is more remote. I could have my mobile device [and] my tablet talking object into the cloud. But they both run over the IP protocol.
If everything in most data centers is going to hit the cloud, data centers become mission-critical by definition. So performance and availability become extremely important as we look to combine the LAN and the SAN. No single point of failure is a given, so availability becomes a unique attribute. For performance, we go from 10-gig to 40-gig to 100-gig on the Ethernet side, and from 8-gig to 16-gig to, in the future, 32-gig on the FC side.
The third aspect is scale. If you go back to 2002, a physical server would connect to a SAN switch and your scale was dictated by the number of physical servers connected to the switch. Now, increasingly, we see embedded blade switches and virtual machines, so what ends up happening is [that] we see a lot more devices coming into the infrastructure. Scale becomes important not only from a physical standpoint, but also from a logical standpoint.
And, because of massive amounts of data and because IT budgets are compressed, we need operational simplification in terms of managing the infrastructure and [ensuring] we have automation so customers can rapidly provision storage.
How does this change the way people buy and deploy storage connectivity?
Bhardwaj: Customers are looking for agility. They want solutions that are pre-validated [and] pre-tested. How do we take compute, network and storage, and combine them into pre-validated, pre-tested solutions? Examples would be our Vblock partnership with EMC and VMware, and our FlexPod partnership with NetApp and VMware. This is where we've taken our SAN architecture and integrated it into the overall data center architecture.
Cisco's only FC storage-switching rival, Brocade, has talked about having 32-gig FC switches by the end of 2015. Does Cisco have a timetable on when it expects to deliver 32-gig FC?
Bhardwaj: We are monitoring the market. We are in discussions with customers and based on market needs, we'll look at 32-gig as the market evolves.
Brocade and the HBA vendors call their 16-gig FC storage switching Gen 5, and refer to 32-gig as Gen 6. Why hasn't Cisco adopted that terminology?
Bhardwaj: We don't want to confuse the market. Our FC customers have been through the transition from 1-gig to 2-gig, to 4-gig, to 8-gig and now 16-gig. It's easier if we don't change that. We call it 16-gig FC, and the next rev will be 32-gig FC.
A few years back, Cisco was pushing FCoE as a converged protocol for storage networks. Has the FCoE market played out the way you expected?
Bhardwaj: Within our UCS blade systems, FCoE is the default protocol for block storage. On the top-of-rack side, thirty percent of our Nexus switches have FCoE licenses. We have now refreshed our FCoE capability on the Nexus 7000 modular unified platform, and we've added FCoE on our MDS director platform. From Cisco, you now see end-to-end infrastructure that has FCoE from server access, all the way through the network and back to storage.
Some of our largest customers have deployed end-to-end FCoE. We want to give customers flexibility and choice. We will continue to have a premier FC platform for customers who are happy with it and want to continue to deploy FC storage. There is still a huge installed base of FC. Customers are looking for both -- they want FCoE, but at the same time, they want bridging capabilities, so even if my servers are FCoE, my back-end storage can be FC.
How much is solid-state storage driving FC storage adoption?
Bhardwaj: We're seeing an increased deployment of FC with flash. They go hand in hand. One of the key benefits of flash is high performance. Typically, flash gets deployed with Tier-1 applications where performance is extremely important.
Cisco is a key player in software-defined networking, but where do you fit with software-defined storage?
Bhardwaj: It's part of how we have to provide operational simplification. For instance, we have hooks built into OpenStack to help customers provision their storage switching in cloud-based solutions. Another thing we've done is integrated with EMC's ViPR. If customers are deploying ViPR for software-defined storage, they need the right set of hooks to configure and manage MDS switches. That piece of work is already there today, and we're doing the same work with IBM as well.
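The OpenStack hooks Bhardwaj refers to surface in Cinder's Fibre Channel Zone Manager, which includes a Cisco driver that automates zoning on MDS switches when volumes are attached. As a rough sketch only -- the section and option names follow Cinder's zone-manager documentation from that era, and the fabric name, address, credentials and VSAN below are made-up placeholders -- a cinder.conf fragment might look like:

```ini
# Enable fabric zoning so Cinder creates and removes zones automatically
[DEFAULT]
zoning_mode = fabric

[fc-zone-manager]
# Cisco MDS zone driver and lookup service shipped with Cinder
zone_driver = cinder.zonemanager.drivers.cisco.cisco_fc_zone_driver.CiscoFCZoneDriver
fc_san_lookup_service = cinder.zonemanager.drivers.cisco.cisco_fc_san_lookup_service.CiscoFCSanLookupService
fc_fabric_names = mds_fabric_a

# One section per fabric; all values here are placeholders
[mds_fabric_a]
cisco_fc_fabric_address = 10.0.0.10
cisco_fc_fabric_user = admin
cisco_fc_fabric_password = secret
cisco_zoning_vsan = 100
```

With a configuration along these lines, attaching an FC volume triggers the driver to create the initiator-target zone in the named VSAN, rather than leaving an administrator to zone the MDS fabric by hand.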