The state of network storage technologies

While often overlooked, there's a lot happening with network storage technologies to keep up with the ever-increasing I/O demands coming from virtualized servers and storage.

Storage networks, much like their data networking kin, tend to evolve slowly, with enterprises approaching tech refreshes cautiously and incrementally. But the IT computing landscape is undergoing profound change in response to new demands and the new technologies designed to address those demands.

The sheer number of applications a typical data center hosts and the amount of data these applications churn through directly stress storage networks. The unprecedented volume of data being generated today due to the proliferation of devices such as smartphones, surveillance cameras, radio-frequency identification (RFID) tags and countless other devices with sensors places new demands on storage systems and the storage networking technologies that link them to servers and other client devices.

New techs stress storage networks

Among the technologies being employed to help address application and data growth are server virtualization, solid-state storage technologies and a new generation of servers. Although very different, these technologies share a common characteristic: they demand I/O performance and configuration flexibility that many storage fabrics simply can't provide.

  • Server virtualization. Server virtualization is solidly entrenched in today's IT environment, and the number of virtual machines (VMs) deployed per physical host is growing. Not long ago, five to 10 VMs per host was typical, often based on the number of processor cores and amount of memory available in the physical server. Recently, it has become more common for a single physical server to host 15, 20 or 25 VMs. As the density of virtual machine deployments increases, so does the I/O on the storage network.
  • Solid-state storage. Solid-state storage provides a tremendous boost in data storage performance and, for the first time, our lab tests are beginning to show that storage devices are no longer the data center bottleneck in many cases. That's the good news; the bad news is that the bottleneck is shifting to the storage network.
  • PCI Express (PCIe) 3.0. The latest generation of data center servers includes the PCIe 3.0 peripheral interface. PCIe 3.0 supports twice the bus speed of the previous PCIe generation, and these newer servers support up to double the total number of PCIe lanes per processor, resulting in a quadrupling of the total I/O bandwidth available in a single server (see the rough bandwidth arithmetic after this list). These new servers generate network-taxing I/O, but they also have enough horsepower to offer 10 Gigabit Ethernet (10 GbE) on the motherboard, which is a step toward wider adoption and more affordable prices for 10 GbE.
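
To put those numbers in perspective, here's a rough, back-of-the-envelope calculation. The SSD throughput and PCIe lane counts below are illustrative assumptions rather than measurements of any particular product, but they show why a handful of solid-state drives can overwhelm a 10 GbE link, and how doubling both the per-lane speed and the lane count roughly quadruples a server's aggregate PCIe bandwidth.

    # Rough bandwidth arithmetic (illustrative figures only; real-world
    # throughput depends on encoding overhead, protocols and workloads).

    GBE10_BYTES_PER_SEC = 10e9 / 8        # ~1.25 GB/s for one 10 GbE link
    SSD_BYTES_PER_SEC = 500e6             # assume ~500 MB/s per solid-state drive

    ssds_per_link = GBE10_BYTES_PER_SEC / SSD_BYTES_PER_SEC
    print(f"SSDs needed to fill a 10 GbE link: ~{ssds_per_link:.1f}")
    # => roughly two to three SSDs can saturate a single 10 GbE connection

    # PCIe per-lane throughput: PCIe 2.0 runs at 5 GT/s with 8b/10b encoding,
    # PCIe 3.0 at 8 GT/s with the leaner 128b/130b encoding.
    pcie2_lane = 5e9 * (8 / 10) / 8       # ~0.5 GB/s per lane, per direction
    pcie3_lane = 8e9 * (128 / 130) / 8    # ~1.0 GB/s per lane, per direction

    # Assumed lane counts for illustration: 20 lanes per processor on an older
    # platform versus 40 lanes of PCIe 3.0 on a current one.
    old_total = 20 * pcie2_lane
    new_total = 40 * pcie3_lane
    print(f"Older server: ~{old_total / 1e9:.0f} GB/s; "
          f"PCIe 3.0 server: ~{new_total / 1e9:.0f} GB/s "
          f"({new_total / old_total:.1f}x)")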

Next-generation storage interfaces

The networking industry is responding to these new demands, offering enhancements to existing networking products and protocols, as well as more innovative responses to growing I/O issues. Not only do we have higher speeds available for all familiar storage interfaces, including Ethernet, Fibre Channel (FC) and others, but we also have ways to virtualize the I/O path that are particularly complementary to server virtualization.

Ethernet. Ethernet is widely used for both data and storage networking. Ethernet provides a good transport for file storage protocols such as Network File System (NFS) and Server Message Block (SMB, formerly known as CIFS), and can also be used for block storage protocols such as iSCSI and Fibre Channel over Ethernet (FCoE).

The 10 GbE specification was ratified in 2002, yet a decade later use of 10 GbE is only beginning to pick up, though it will soon become the dominant Ethernet connection interface. According to the most recent Storage magazine/SearchStorage.com Storage Purchasing survey, 28% of respondents have implemented 10 GbE for their storage networks (versus 30% with 1 GbE); two years ago, only 13% had deployed 10 GbE. Early uses of 10 GbE were limited to trunking between switches, and the components were expensive. The 10GBASE-T specification, ratified in 2006, described 10 GbE over the familiar RJ45-style connector and, with it, the promise of a lower price per port for 10 GbE.

Increasing adoption of 10 GbE might also be attributed to blade servers, which typically have 10 GbE interfaces in the blade chassis, and the declining prices of 10 GbE components.

There are two different connector types used with 10 GbE: SFP+ and RJ45. The SFP+ connector technology has been around for several years and is the same technology that's used with 8 Gbps and 16 Gbps Fibre Channel connections, although with different line rates. 10 GbE SFP+ is available with either copper or fiber-optic cables. The copper cables, known as Direct Attach Copper (DAC), have the transceivers mounted directly on the cable, and are good for short distances such as within a rack or to a nearby rack. The fiber-optic cables generally require the transceiver to be mounted into the cage in the switch or adapter port. The fiber-optic cables are used for short and moderate distances.

The RJ45 connectors are the familiar connectors used on Cat5, Cat5e and Cat6 Ethernet cables. 10GBASE-T cables should be Cat6a or Cat7 to use the full supported distance of 100 meters. Cat6 cables can be used with 10GBASE-T environments up to 55 meters. Cat5e cables aren't recommended for 10 GbE.

10 GbE server adapters support either SFP+ or RJ45 connectors, but not both on the same adapter. 10 GbE switches likewise support one connector type or the other, although some offer both in the same switch.

Even as 10 GbE products are beginning to proliferate, 40 GbE and 100 GbE specifications were ratified in June 2010. These technologies are available in products today, but they're expensive and primarily used for switch-to-switch trunking or aggregation. These technologies use multiple lanes of 10 GbE to achieve the aggregate speeds: 40 GbE uses four lanes running at 10 Gbps (4x10) and 100 GbE uses a 10x10 aggregation.

Fibre Channel. The Fibre Channel Industry Association called 2012 the year of "10-10-10," with 10 million FC switch and adapter ports already shipped, $10 billion invested in FC technology and 10 exabytes (EB) of FC storage shipped. Fibre Channel is still the dominant high-end storage networking architecture, satisfying enterprise workloads, server virtualization and cloud architectures. The technology is known for its reliability and high performance.

Speeds for Fibre Channel have been doubling every three or four years since 1997, when the first 1 Gbps FC components became available. The current top speed is 16 Gbps FC, first introduced in switches and adapters in 2011. As a SAN interface, FC is very much alive and well, and work is already underway on development of 32 Gbps FC. These days, FC is rarely used on the back end -- as a disk drive interface for enterprise disk drives -- because drive manufacturers have moved to SAS for that interface.

FC maintains backward compatibility with at least two previous generations, so 16 Gbps FC works with 4 Gbps and 8 Gbps FC gear. Current 16 Gbps FC SAN switches are also backward compatible with 2 Gbps FC. That means a company can upgrade adapters, switches and storage systems independently, without having to upgrade the entire FC SAN infrastructure at one time.

Hardware isn't the only issue. VMware vSphere 5.1 and Windows Server 2012 (with Hyper-V) have specific support for 16 Gbps FC and, in some cases, include in-box drivers for some 16 Gbps FC components. With these hypervisors both supporting 64 virtual CPUs and 1 TB of RAM per virtual machine, it's not hard to imagine a virtual server environment that can easily take advantage of the increased I/O bandwidth. Windows Server 2012 with Hyper-V also supports Virtual Fibre Channel. For FC host bus adapters (HBAs) that support N_Port ID Virtualization (NPIV), this allows a guest VM to access a virtual FC HBA directly, giving the virtual machine the same Fibre Channel support and access as a physical machine.

New speeds for Ethernet and Fibre Channel

Just when you've obtained the latest and greatest, you can expect the computer industry to take another step ahead. To help with planning, here's what you can expect in the not-too-distant future.

Ethernet. We've seen the first 40 Gigabit Ethernet (GbE) host adapter cards become available, and more are expected in 2013. The next server refresh cycle, probably happening in the second half of 2013 or early 2014, will trigger yet another wave of new I/O capabilities, including more 40 GbE host adapters.

Fibre Channel. The 32 Gbps Fibre Channel (FC) specification is expected to be stabilized in the first half of 2013, with products following within a year or two. Figure on seeing 32 Gbps FC products in late 2014 or early 2015.

Find more information on roadmaps for these and other storage interfaces on the Demartek Storage Networking Interface Comparison page.

I/O virtualization

Virtualizing the I/O path allows physical devices such as network or storage adapters to be allocated as multiple logical devices; conversely, it also makes it possible to combine physical devices into larger logical devices. In the Fibre Channel example above, we can see that virtual Fibre Channel allows us to take advantage of NPIV, providing multiple virtual FC HBAs to guests in a virtual machine environment.
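
As an illustration of that idea, here's a toy model of NPIV in Python. It doesn't talk to a real HBA or fabric, and the worldwide port names (WWPNs) and per-port limit are made up, but it shows the essential shape: one physical FC port presenting several virtual ports, each with its own WWPN that the fabric can zone and mask as if it belonged to a separate physical HBA.

    # Toy model of N_Port ID Virtualization (NPIV) -- purely illustrative,
    # not any vendor's API. One physical FC HBA port presents several virtual
    # ports, each with its own worldwide port name (WWPN) assigned to a VM.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VirtualFCPort:
        wwpn: str       # virtual WWPN the fabric sees
        vm_name: str    # guest VM this virtual HBA belongs to

    @dataclass
    class PhysicalHBAPort:
        wwpn: str
        max_npiv_ports: int = 16   # assumed limit; varies by adapter
        virtual_ports: List[VirtualFCPort] = field(default_factory=list)

        def create_virtual_port(self, wwpn: str, vm_name: str) -> VirtualFCPort:
            if len(self.virtual_ports) >= self.max_npiv_ports:
                raise RuntimeError("NPIV limit reached on this physical port")
            vport = VirtualFCPort(wwpn, vm_name)
            self.virtual_ports.append(vport)
            return vport

    hba = PhysicalHBAPort(wwpn="50:01:43:80:12:34:56:78")   # made-up WWPN
    hba.create_virtual_port("c0:03:ff:00:00:00:00:01", "sql-vm-01")
    hba.create_virtual_port("c0:03:ff:00:00:00:00:02", "mail-vm-01")
    for vp in hba.virtual_ports:
        print(f"{vp.vm_name} -> virtual WWPN {vp.wwpn} on physical {hba.wwpn}")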

Single-root I/O virtualization (SR-IOV) is another way to share I/O adapters such as Ethernet network interface cards (NICs). Not only can these NICs be shared among several guests in a virtual machine environment, but the management of this sharing can also be offloaded to the card, freeing up host CPU cycles. SR-IOV is now supported for Ethernet NICs in most hypervisors.
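
On a Linux host, carving an SR-IOV-capable NIC into virtual functions (VFs) typically comes down to a sysfs write. The sketch below is a minimal example that assumes a reasonably recent kernel exposing the sriov_totalvfs and sriov_numvfs attributes, an adapter and driver with SR-IOV support, root privileges and a hypothetical interface name of eth0.

    # Minimal SR-IOV sketch for Linux: query how many virtual functions the
    # NIC supports, then enable some of them. Each VF can later be assigned
    # directly to a guest VM by the hypervisor. Run as root.
    from pathlib import Path

    IFACE = "eth0"   # hypothetical interface name
    dev = Path(f"/sys/class/net/{IFACE}/device")

    total_vfs = int((dev / "sriov_totalvfs").read_text())
    print(f"{IFACE} supports up to {total_vfs} virtual functions")

    # The current VF count must be 0 before a new nonzero value can be written.
    requested = min(8, total_vfs)
    (dev / "sriov_numvfs").write_text(str(requested))
    print(f"Enabled {requested} VFs on {IFACE}")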

Software-defined networks

If you're running a hypervisor, you already have one form of software-defined network (SDN). Hypervisors can use their Ethernet NICs to create a virtual network switch within the adapter, providing network switching functions entirely within a server. For network activity that stays among the guests running on one physical server, this is, in effect, a software-defined network.
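
As a purely illustrative sketch (real hypervisor virtual switches are far more sophisticated), the toy learning switch below shows the basic idea: frames moving between virtual ports are forwarded entirely in software, with no physical switch involved.

    # Toy MAC-learning switch, illustrating switching done entirely in software.
    class SoftwareSwitch:
        def __init__(self):
            self.mac_table = {}   # learned MAC address -> virtual port

        def handle_frame(self, in_port, src_mac, dst_mac):
            self.mac_table[src_mac] = in_port     # learn where the sender lives
            out_port = self.mac_table.get(dst_mac)
            if out_port is None:
                return "flood to all other virtual ports"
            return f"forward to {out_port}"

    vswitch = SoftwareSwitch()
    print(vswitch.handle_frame("vm1-vnic", "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))
    print(vswitch.handle_frame("vm2-vnic", "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))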

Other forms of SDN that have emerged recently separate the control functions from the networking hardware itself. These are especially appealing to cloud providers that have to pool resources across multi-tenant environments.

The Open Networking Foundation is leading an industry effort to move SDN forward. However, SDN is still very new, with much yet to be developed. It looks like an interesting effort and promises several benefits, including improved automation and management of networks and the opportunity to increase the pace of innovation.

About the author:
Dennis Martin has been working in the IT industry since 1980, and is the founder and president of Demartek, a computer industry analyst organization and testing lab.

 

This was first published in February 2013
