
Storage networking technology steps up to performance challenge

A rapidly growing number of storage protocols and interfaces are helping storage networking technology avoid data center bottlenecks.


Storage networking technology is changing, and speed is the name of the game.

Fast flash storage, the growing use of virtualization and applications that handle ever-larger amounts of data aren't the only forces putting unprecedented pressure on networks that carry storage traffic. Databases such as IBM DB2, MySQL, Oracle and SQL Server can always use faster connections with lower latency, while increasingly popular big data applications hold huge amounts of information that must be moved. And while 4K video origination is already established, companies such as Amazon and Netflix began requiring video post-production in native 4K resolution this year, further increasing the demand for storage network bandwidth.

These are just a few examples of the challenges facing managers of today's storage fabrics. To help keep storage traffic from becoming the bottleneck in your data center, be it large or small, we present a rundown of the key improvements to storage networking and interface technologies available in 2016.

Widening the Ethernet lane

Nearly everybody uses Ethernet for connectivity between desktops, workstations, application servers and file servers. While many of us with wired connections to our desktops use 1 GbE, 10 GbE is the backbone for our data center connections, with 40 GbE technology leveraged in certain pockets of the enterprise. A fair amount of the traffic traversing these networks can be considered storage traffic, especially with respect to file servers.

As flash storage begins to proliferate, we are finding that even 10 GbE can become a bottleneck. To alleviate this, the Ethernet industry is making a significant performance jump. Until now, the fastest speed per lane for Ethernet has been 10 Gbps. Faster Ethernet such as 40 GbE and 100 GbE bundle multiple lanes of 10 Gbps connections into one connection: 4 x 10 for 40 GbE and 10 x 10 for 100 GbE.

Announced two years ago, Ethernet running at 25 Gbps per lane is now available. This means a single lane of Ethernet connectivity runs 2.5 times faster than legacy 10 GbE. Bundling two or four of these lanes yields 50 GbE and 100 GbE, respectively. The good news is that, while considerably faster, 25 GbE technology can generally use the same types of fiber-optic or copper cables as 10 GbE (with the exception of some cable lengths and transceiver differences).
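A quick back-of-the-envelope sketch shows how lane bundling produces these aggregate speeds (a Python illustration; the function name is ours, and the rates are the nominal figures cited above):

```python
# Aggregate Ethernet link speed = number of lanes x per-lane signaling rate.
# Lane counts and rates are the nominal figures cited in the text above.

def link_speed_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Nominal aggregate speed of a multi-lane Ethernet link."""
    return lanes * lane_rate_gbps

# 10 Gbps lane generation
assert link_speed_gbps(4, 10) == 40    # 40 GbE = 4 x 10
assert link_speed_gbps(10, 10) == 100  # 100 GbE = 10 x 10

# 25 Gbps lane generation
assert link_speed_gbps(1, 25) == 25    # 25 GbE, single lane
assert link_speed_gbps(2, 25) == 50    # 50 GbE = 2 x 25
assert link_speed_gbps(4, 25) == 100   # 100 GbE = 4 x 25, fewer lanes than 10 x 10
```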

The 25 GbE technology also uses the same underlying SFP28 transceiver technology as 32 Gbps Fibre Channel (see the next section), though at a slightly different signaling rate, which is one reason both technologies are coming to market this year.

Fiber optic cables: Time to switch from orange to aqua

If you haven't already moved to the aqua-colored OM3 or OM4 multi-mode fiber optic cables for Ethernet or Fibre Channel, now is the time. You should no longer purchase the orange-colored OM1 or OM2 cables. The OM3 and OM4 cables provide sufficient distance for these latest storage networking speeds within a data center, and for planned future speed upgrades that OM1 and OM2 cables won't support. For more details about cable distances for various speeds of Ethernet and Fibre Channel, see Demartek's Storage Networking Interface Comparison reference page.

Those planning new data center buildouts should be familiar with the Ethernet Alliance roadmap. This roadmap provides a good idea of the new speeds coming along, the approximate time frames for these, details on the physical connectors for copper and fiber optic cables, and a good discussion of the entire Ethernet ecosystem from residential to high-end data center (also see: Ethernet Speed Roadmap).

Ethernet speed roadmap

Fibre Channel neither down nor out

Popular in data centers for its reliability and stability, Fibre Channel (FC) dominates high-end storage networking technology. According to some industry estimates, 90% of high-end data centers have deployed FC technology. And although there has been some discussion about the decline of this high-speed storage networking technology, recent analyst reports suggest that the FC market actually grew in late 2015 and early 2016.

Fibre Channel performance has doubled in speed approximately every three to five years since 1997. Gen 6 Fibre Channel became available this year, and includes a single-lane speed of 32 Gbps and a quad-lane speed of 128 Gbps (4 x 32). This generation of FC also includes new management and diagnostic features.
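To see that cadence, here's a rough sketch using approximate introduction years for each single-lane FC speed (the dates are our approximations, for illustration only; consult the FCIA roadmap for authoritative dates):

```python
# Approximate single-lane Fibre Channel speeds and introduction years.
# Dates are approximate and used only to illustrate the doubling cadence.
fc_generations = [
    (1997, 1),   # 1 Gbps FC
    (2001, 2),   # 2 Gbps FC
    (2005, 4),   # 4 Gbps FC
    (2008, 8),   # 8 Gbps FC
    (2011, 16),  # 16 Gbps FC (Gen 5)
    (2016, 32),  # 32 Gbps FC (Gen 6)
]

for (y0, s0), (y1, s1) in zip(fc_generations, fc_generations[1:]):
    print(f"{s0} -> {s1} Gbps: speed x{s1 // s0} in {y1 - y0} years")
# Every interval lands in the three-to-five-year range.
```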

As with previous generations, Gen 6 FC is backward-compatible with the two previous generations (16 Gbps FC and 8 Gbps FC), making the transition to the new technology a relatively smooth process for enterprises.

The Fibre Channel Industry Association (FCIA) offers a public roadmap that provides information on new speeds, guidance in cable and connector selection and more (also see Fibre Channel Roadmap).

Fibre Channel roadmap

Catch the NVM Express

NVM Express (NVMe) is an optimized, high-performance, scalable host controller interface designed for enterprise and client solid-state storage that uses the local PCI Express (PCIe) bus. More recently, NVMe has been extended over distance with the new NVMe over Fabrics specification. NVMe over Fabrics can use a remote direct memory access (RDMA) fabric or a Fibre Channel fabric, and is designed to work with future fabric technologies.

NVMe is designed to streamline I/O access to storage devices and storage systems built with non-volatile memory -- from today's NAND flash technology to future higher-performing, persistent memory technologies. NVMe's streamlined command set typically requires less than half the CPU instructions that other storage protocols need to process an I/O request.

Internally, NVMe is designed differently than other storage protocols. It supports 64K commands per queue and up to 64K queues. These queues are designed such that I/O commands and responses to those commands operate on the same processor core and can take advantage of the parallel processing capabilities of multicore processors. Each application or thread can have its own independent queue, so no I/O locking is required.
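Here's a toy model of that queueing design, assuming one queue pair per core (the class and variable names are ours, not the NVMe specification's):

```python
# Toy model of NVMe's per-core queue pairs: each core (or thread) gets its
# own submission/completion queue pair, so no cross-core locking is needed.
from collections import deque

MAX_QUEUES = 64 * 1024          # up to 64K queues
MAX_QUEUE_DEPTH = 64 * 1024     # up to 64K commands per queue

class QueuePair:
    """One submission queue and its paired completion queue."""
    def __init__(self) -> None:
        self.submission = deque(maxlen=MAX_QUEUE_DEPTH)
        self.completion = deque(maxlen=MAX_QUEUE_DEPTH)

# One queue pair per core: commands submitted on core N also complete on
# core N, so each core issues I/O independently and in parallel.
num_cores = 8
queue_pairs = {core: QueuePair() for core in range(num_cores)}

queue_pairs[0].submission.append({"opcode": "read", "lba": 0, "blocks": 8})
```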

The NVMe protocol can be used in devices ranging from mobile phones to enterprise storage systems. NVMe devices in enterprise environments, typically running at full power, provide performance up to the full bandwidth of the number of PCIe lanes that each device uses. In consumer devices operating at low-power levels, NVMe devices provide lower performance.
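As a rough worked example of what "full bandwidth of the number of PCIe lanes" means, the sketch below uses the standard PCIe 3.0 per-lane figures (8 GT/s with 128b/130b encoding) -- numbers that come from the PCIe specification, not from this article:

```python
# Rough usable bandwidth of a PCIe 3.0 NVMe device.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding (standard PCIe figures).
GT_PER_S = 8.0                # raw transfers per second per lane, in billions
ENCODING = 128 / 130          # payload bits per raw bit
BYTES_PER_GT = 1 / 8          # one transfer carries one bit per lane

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    return lanes * GT_PER_S * ENCODING * BYTES_PER_GT

print(f"x4 device: ~{pcie3_bandwidth_gbs(4):.1f} GB/s")   # ~3.9 GB/s
print(f"x8 device: ~{pcie3_bandwidth_gbs(8):.1f} GB/s")   # ~7.9 GB/s
```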

At the device level, you can use NVMe in add-in cards that plug into PCIe slots, in the traditional drive form factor (2.5-inch is the most popular) and in the M.2 small form factor card. Because of these and other features, we've found -- by running tests in our lab -- that NVMe delivers considerably higher performance and lower latency than other storage protocols.

Revving up Serial Attached SCSI

SAS, or Serial Attached SCSI, is an enterprise storage networking technology interface and protocol that's used in some fashion in nearly every enterprise storage product today. SAS, and its predecessor SCSI, have a long history of versatility, reliability and scalability for a device-level interface, as a shelf-to-shelf disk interface, and as a host interface to external storage platforms. In addition to HDDs and SSDs, SAS products include host bus adapters, RAID controllers, expanders and other components used in storage. There are also SAS switches used in SAS fabric implementations.

Currently shipping SAS products run at 12 Gbps, and some older 6 Gbps products are still available. The SAS roadmap doubles that speed to 24 Gbps, with those products expected to come to market alongside server platforms that support PCIe 4.0, scheduled for release in 2019.

The 24 Gbps SAS is backward-compatible with the two previous SAS generations (12 Gbps and 6 Gbps) and with 6 Gbps SATA. For more on the future of SAS, see the SCSI Trade Association's SAS roadmap (also see: Serial Attached SCSI (SAS) Roadmap).

Serial attached SCSI roadmap

Serial ATA in limbo

SATA, or Serial ATA, has been used for many years to connect a computer to a single storage device such as an HDD, SSD or optical drive (CD-ROM, DVD and so on). The current SATA interface runs at 6 Gbps, and there is no roadmap for a faster speed, though there is ongoing work to add enterprise features. There was some activity around "SATA Express" running at higher speeds, but that activity appears to have stopped.

SATA is used in the traditional drive form factor, but is also available in a much smaller M.2 card form factor.

Standalone workstation interfaces

Two additional interfaces that affect storage performance are worth discussing: Thunderbolt and USB.

Thunderbolt

Thunderbolt is a high-speed interface designed primarily for small configurations (up to six devices) and for use in video and similar media creation. The devices that can use Thunderbolt tend to be premium laptop computers, desktop computers and workstations, along with video cameras and storage devices. The first version of Thunderbolt was introduced in 2011 running at 10 Gbps. This speed doubled to 20 Gbps in late 2013 and then in mid-2015 doubled again to 40 Gbps when used with active copper or fiber-optic cables.

Thunderbolt 3, the latest version, uses the new USB Type-C cables, is compatible with USB 3.1 (see below) and DisplayPort 1.2, and can be used to transmit up to 100 watts of power. Thunderbolt 3 enjoys compatibility with a wide range of devices, and is available in a growing list of motherboards and consumer and business laptops.

USB

Most of us are familiar with USB flash drives, and probably have several of these in our desks and tool bags. USB has come a long way since the original 1.5 Mbps version introduced in 1997.

In spite of some slightly confusing name changes in the most recent specifications, USB technology continues to advance. SuperSpeed USB is the marketing name for 5 Gbps USB, originally specified as USB 3.0 and now known as USB 3.1 Gen 1. SuperSpeed USB 10 Gbps (USB 3.1 Gen 2) devices are expected to come to market this year. And USB 3.1 devices are backward-compatible with widely used USB 2.0 devices.

USB Power Delivery can deliver up to 100 watts of power bi-directionally over a USB cable while delivering audio, video and data at the same time. Expect to see some interesting applications for this interface technology this year.

SATA, SAS and NVMe device compatibility

In my flash storage article in the June 2016 edition of Storage magazine (see "Flashy Servers -- the lowdown on server-side, solid-state storage"), I provided a diagram of SATA, SAS and PCIe/NVMe device connectors, showing the areas of compatibility between these three interfaces.

For compatibility among SATA, SAS and PCIe/NVMe device connectors, think of them as a three-level hierarchy. A lower device can be placed in a higher device backplane, but higher devices cannot be placed into a lower-level backplane. SATA devices, at the lowest level of this hierarchy, can be placed into SATA, SAS and PCIe/NVMe device backplanes. SAS devices, at the middle level of this hierarchy, can fit into SAS and PCIe/NVMe device backplanes, but not SATA device backplanes. And NVMe devices in the drive form factor can only be placed into PCIe/NVMe backplanes.
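That hierarchy is simple enough to capture in a few lines of Python (a sketch; the names are ours):

```python
# SATA/SAS/NVMe drive-bay compatibility: a device fits a backplane only if
# the backplane sits at the same level of the hierarchy or higher.
LEVEL = {"SATA": 0, "SAS": 1, "NVMe": 2}

def fits(device: str, backplane: str) -> bool:
    """True if a drive of type `device` can go into a `backplane` bay."""
    return LEVEL[device] <= LEVEL[backplane]

assert fits("SATA", "NVMe")       # SATA drive in a PCIe/NVMe bay: OK
assert fits("SAS", "SAS")         # SAS drive in a SAS bay: OK
assert not fits("SAS", "SATA")    # SAS drive in a SATA bay: no
assert not fits("NVMe", "SAS")    # NVMe drive only fits PCIe/NVMe bays
```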

As you move up the hierarchy of storage networking technology, additional features and performance become available. For more on how the protocols covered in this article match up to different types of storage and enterprise use cases, see: The real world.

Storage protocols and suggested use cases

Next Steps

When should you upgrade your Fibre Channel technology?

Stay on top of storage networking infrastructure advances

Fibre Channel technology leads the way
