Storage networking alternatives

All the old standards -- FC, iSCSI and NAS -- are still going strong, but FCoE and virtualized I/O are waiting in the wings to help remake our storage networks.

Storage networking rarely gets much attention, and it’s frequently overshadowed by the server and storage gear it links together. But there’s renewed interest in storage networking as new or enhanced technologies begin to show up in our data centers. Sure, there’s lots to talk about with new server technologies, virtualization, operating systems and apps, but all those technologies ultimately require a place to store their data, so they rely on storage networking technologies to handle the task.

There’s a wide variety of storage networking technologies, with something to fit every budget and storage requirement. These technologies continue to advance to meet today’s growing requirements and to anticipate future needs. Some are proven and being deployed now or in the near term; others are relatively new or not yet well understood, so their future isn’t as clear.

The broad range of storage networks

Storage networking includes direct-attached storage (DAS), network-attached storage (NAS) and storage-area networks (SANs). We’ll look at some of the interface technologies used in storage networking, including the familiar lineup of Fibre Channel (FC), iSCSI and serial-attached SCSI (SAS), and some of the newer or less widely used interfaces such as Fibre Channel over Ethernet (FCoE). We’ll also examine file-serving protocols such as Common Internet File System (CIFS) and Network File System (NFS). Finally, we’ll explore some I/O virtualization technologies that open up interesting possibilities.

There’s often debate about which storage networking interface is the most popular, with predictions of obsolescence for some storage networking interfaces. After checking research firm IDC’s data tracking storage shipments by host interface type, we find that DAS, FC storage, iSCSI storage and NAS are each multibillion dollar businesses and none of them is going away anytime soon. Furthermore, each one is projected to climb significantly in capacity shipped over the next few years.

[Diagram: Storage networking lingo]

Direct-attached storage

DAS is the most common and best-known type of storage. In a DAS implementation, the host computer has a private connection to the storage and almost always has exclusive ownership of that storage. The implementation is relatively simple and can be very low cost. A potential disadvantage is that the distance between the host computer and the storage is limited, typically to within a computer chassis or within the same or an adjacent rack.

However, SAS, traditionally known as a DAS interface, is beginning to take on some storage networking capabilities. SAS switches have recently come to market that provide a relatively simple way to share storage among a small number of servers while maintaining the low latency SAS is known for.
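
As a rough illustration, the following sketch calls the lsscsi utility from Python to list the SCSI devices a host can see, along with the transport (SAS, FC, iSCSI and so on) behind each one. It assumes a Linux host with the lsscsi package installed, and is simply a quick way to confirm what DAS or switched SAS storage a server has been handed.

```python
# Minimal sketch: list the SCSI devices visible to this host, including the
# transport (SAS, FC, iSCSI, etc.) behind each one, by driving the lsscsi
# utility from Python. Assumes a Linux host with the lsscsi package installed.
import subprocess

def list_scsi_devices():
    """Return lsscsi output with transport information for each device."""
    result = subprocess.run(["lsscsi", "--transport"],
                            check=True, capture_output=True, text=True)
    return result.stdout

if __name__ == "__main__":
    print(list_scsi_devices())
```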

Network-attached storage

NAS devices, also known as file servers, share their storage resources with clients on the network in the form of “file shares” or “mount points.” These clients use network file access protocols such as CIFS/Server Message Block (SMB) or NFS to request files from the file server. Because NAS operates on a network (usually TCP/IP over Ethernet), the storage can be physically distant from the clients.

File servers running Windows, or those that need to share storage with Windows clients, use the CIFS/SMB protocol. Microsoft Corp. has been enhancing this protocol for several years. Windows 7 and Windows Server 2008 R2 use SMB Version 2.1, which has a number of performance improvements over previous versions. Another implementation of the CIFS/SMB protocol is Samba 3.6, which uses SMB Version 2.0; other implementations of CIFS/SMB use SMB Version 1.0.

File servers running Unix or Linux natively support NFS. There are three major versions of NFS: NFSv2, NFSv3 and NFSv4. NFSv3 seems to be the most commonly deployed version, and it’s adequate for many applications and environments. NFSv4 added performance and security improvements and became a “stateful” protocol. New features in NFSv4.1 include sessions, directory delegation and “Parallel NFS” (pNFS). pNFS was introduced to support clustered file servers, letting clients access file data in parallel across multiple servers.
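
As a minimal sketch of how a client attaches to an NFS export, the Python snippet below simply shells out to the standard Linux mount command and selects the protocol version with the vers mount option. The server name, export path and mount point are hypothetical, the mount point must already exist, and the command requires root privileges.

```python
# Minimal sketch: mount an NFS export by shelling out to the standard Linux
# mount(8) command. The server name, export path and mount point below are
# hypothetical; adjust the "vers" option to select NFSv3, NFSv4 or NFSv4.1
# as supported by both the client and the file server.
import subprocess

def mount_nfs(server, export, mount_point, version="3"):
    """Mount server:export at mount_point (which must already exist)."""
    cmd = [
        "mount", "-t", "nfs",
        "-o", f"vers={version}",
        f"{server}:{export}",
        mount_point,
    ]
    # Raises CalledProcessError if the mount fails; requires root privileges.
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    mount_nfs("filer01.example.com", "/exports/projects", "/mnt/projects",
              version="4.1")
```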

iSCSI

iSCSI provides the advantages of SAN storage while using an Ethernet networking infrastructure. iSCSI has tended to be deployed in small- and medium-sized businesses (SMBs) because of its lower initial costs and perceived simplicity, but it can scale up, especially with 10 GbE technology, and is increasingly finding a place in larger enterprises.

Because iSCSI runs over TCP/IP and Ethernet, it can run on existing Ethernet networks, although it’s recommended that iSCSI traffic be separated from regular LAN traffic. In theory, iSCSI can use any speed of Ethernet; however, the best practice is to use gigabit Ethernet or faster. Over the long term, iSCSI will be able to use any of the speeds on the Ethernet roadmap, such as 40 Gbps and 100 Gbps.
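
Attaching a host to iSCSI storage is largely a matter of target discovery and login over the Ethernet network. The sketch below drives the open-iscsi iscsiadm tool from Python to discover the targets at a portal and log in to one of them; the portal address and target IQN are hypothetical, and the open-iscsi initiator package plus root privileges are assumed.

```python
# Minimal sketch: discover and log in to an iSCSI target using the open-iscsi
# command-line tool iscsiadm, driven from Python. The portal address and
# target IQN below are hypothetical; requires the open-iscsi initiator
# package and root privileges.
import subprocess

PORTAL = "192.168.10.50:3260"                    # hypothetical iSCSI portal
TARGET = "iqn.2011-10.com.example:array01.lun0"  # hypothetical target IQN

def discover_targets(portal):
    """Return the raw sendtargets discovery output for the given portal."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        check=True, capture_output=True, text=True)
    return out.stdout

def login(target, portal):
    """Log in to a discovered target; its LUNs then appear as local SCSI disks."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target, "-p", portal, "--login"],
        check=True)

if __name__ == "__main__":
    print(discover_targets(PORTAL))
    login(TARGET, PORTAL)
```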

Virtualized server environments can take advantage of iSCSI storage through the hypervisor or directly access iSCSI storage from the guest virtual machines (VMs), bypassing the hypervisor.

As the adoption rate of 10 GbE technology increases, iSCSI becomes increasingly attractive to organizations as they examine their long-term data center plans. Many of the iSCSI storage systems available today have all the advanced storage features such as replication, thin provisioning, compression, data deduplication and others that are often required by enterprise data centers. For many modern storage systems, iSCSI is available as a host interface along with FC and other interfaces.

Fibre Channel

Fibre Channel has been used as both a device-level disk drive interface and a SAN fabric interface, and has been deployed for approximately 15 years. FC carries the SCSI command protocol and uses either copper or fiber-optic cables with the appropriate connectors. FC speed has doubled approximately every three or four years, with 8 Gbps products becoming available in 2008 for SAN fabric connections and 16 Gbps products just beginning to emerge. All high-end storage subsystems and many midrange products use FC as either the only host interface or one of multiple interfaces.

Fibre Channel is used as a disk drive interface for enterprise-class disk drives, with a maximum interface speed of 4 Gbps to an individual disk drive (the speed of the interface shouldn’t be confused with the transfer rate of an individual disk drive). The industry is moving away from FC as an enterprise-class disk drive interface and shifting to 6 Gbps SAS for enterprise drives, including hard disk drives (HDDs) and solid-state drives (SSDs).

FC provides excellent performance, availability and scalability in a lossless network that’s isolated from general LAN traffic. Fibre Channel infrastructures are common in large data centers where there are full-time data storage administrators. It’s not uncommon to see FC fabrics with hundreds or thousands of active Fibre Channel SAN ports.
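
On a Linux host with FC HBAs installed, the kernel exposes each host port under /sys/class/fc_host, which makes a basic port inventory straightforward. The sketch below reads the standard port_name, speed and port_state attributes; exactly which attributes are populated can vary by HBA driver.

```python
# Minimal sketch: inventory local Fibre Channel host ports via the Linux
# fc_host sysfs class (/sys/class/fc_host), which the kernel populates when
# FC HBAs are present. Attribute availability can vary by HBA driver, so
# missing files are reported as "n/a".
from pathlib import Path

FC_HOST_ROOT = Path("/sys/class/fc_host")

def read_attr(host_dir, name):
    """Read a single sysfs attribute, returning 'n/a' if it is absent."""
    try:
        return (host_dir / name).read_text().strip()
    except OSError:
        return "n/a"

def list_fc_hosts():
    if not FC_HOST_ROOT.is_dir():
        print("No Fibre Channel host ports found.")
        return
    for host_dir in sorted(FC_HOST_ROOT.iterdir()):
        print(f"{host_dir.name}: "
              f"WWPN={read_attr(host_dir, 'port_name')} "
              f"speed={read_attr(host_dir, 'speed')} "
              f"state={read_attr(host_dir, 'port_state')}")

if __name__ == "__main__":
    list_fc_hosts()
```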

Some 16 Gbps FC SAN fabric products will become available in late 2011. Use cases for 16 Gbps FC include large virtualized servers, server consolidations and multi-server applications. The increasing acceptance of SSDs for enterprise workloads will also help consume some of the increased bandwidth that 16 Gbps FC brings. In addition, storage vendors are already working on a 32 Gbps FC SAN interface that’s expected to appear in products in three or four years.

Fibre Channel over Ethernet

Fibre Channel over Ethernet is a new interface that encapsulates the FC protocol within Ethernet frames using a relatively new technology called Data Center Bridging (DCB). DCB is a set of enhancements to traditional Ethernet and is currently implemented with some 10 GbE infrastructures. FCoE allows FC traffic to run over a lossless 10 Gbps link while maintaining compatibility with existing Fibre Channel storage systems.

FCoE introduces a new type of switch and a new type of adapter. Ethernet switches capable of supporting FCoE require DCB and the new host adapters are known as converged network adapters (CNAs) because they can run Ethernet and FC (via FCoE) at the same time. Some of the CNAs have full hardware offload for FCoE, iSCSI or both, in the same way that Fibre Channel host bus adapters (HBAs) have hardware offload for Fibre Channel. DCB switches are capable of separately managing different traffic types over the same connection, and can allocate percentages of the total bandwidth to those differing traffic types. By combining the previously separate Ethernet and Fibre Channel switches, adapters and cables, the long-term costs of storage and data networking can be reduced.
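
As an illustration of that bandwidth allocation idea (not a switch configuration example), the short Python calculation below shows how DCB-style percentages assigned to FCoE, iSCSI and LAN traffic classes translate into guaranteed throughput on a single 10 GbE converged link. The traffic classes and percentages are hypothetical.

```python
# Illustration only: how DCB bandwidth percentages assigned to traffic classes
# translate into guaranteed throughput on a single 10 GbE converged link.
# The traffic classes and percentages below are hypothetical examples.
LINK_SPEED_GBPS = 10.0

# Hypothetical allocation for FCoE storage, iSCSI and general LAN traffic.
allocations = {
    "FCoE": 50,   # percent of link bandwidth guaranteed under congestion
    "iSCSI": 30,
    "LAN": 20,
}

assert sum(allocations.values()) == 100, "bandwidth percentages must total 100"

for traffic_class, percent in allocations.items():
    guaranteed = LINK_SPEED_GBPS * percent / 100
    print(f"{traffic_class:6s} {percent:3d}% -> {guaranteed:.1f} Gbps guaranteed")
```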

As enterprises plan new data centers, or new server and storage infrastructure, FCoE and DCB technology should be carefully examined. They offer the potential for increased performance, a reduction in the number of adapters needed and a commensurate reduction in electric power consumption while working with existing Fibre Channel infrastructure.

I/O virtualization

I/O virtualization is about virtualizing the I/O path between a server and a storage device, and is therefore complementary to server virtualization. When we virtualize, we decouple the logical presentation of a device from the physical device itself to use the resources more effectively or to share expensive resources. This can be done by splitting the device into smaller logical units, combining devices into larger units or by representing the devices as multiple devices. This concept can apply to anything that uses an adapter in a server, such as a network interface card (NIC), RAID controller, FC HBA, graphics card and PCI Express (PCIe)-based solid-state storage. For example, NIC teaming is one way of combining devices into a single, “larger” device. Virtual NICs are a way to represent multiple devices from a single device.

A pair of related technologies known as Single Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV) are beginning to be implemented. SR-IOV is closer to becoming a reality than MR-IOV, but both provide some interesting benefits. These technologies work with server virtualization and allow multiple operating systems to natively share PCIe devices. SR-IOV is designed for multiple guest operating systems within a single virtual server environment to share devices, while MR-IOV is designed for multiple physical servers (which may have guest virtual machines) to share devices.

When an SR-IOV-capable adapter is placed in a virtual server environment and the hypervisor supports SR-IOV, then the functions required to create and manage virtual adapters in the virtual machine environment are offloaded from the hypervisor into the adapter itself, saving CPU cycles on the host platform and improving performance to nearly that of a physical server implementation. Many Ethernet adapters, FC HBAs and some RAID controllers are SR-IOV capable today.
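
As a sketch of what enabling SR-IOV looks like from the host side, the Python snippet below uses the sriov_totalvfs and sriov_numvfs controls that newer Linux kernels expose in sysfs to check how many virtual functions (VFs) an adapter supports and to create some of them. The interface name and VF count are hypothetical, and an SR-IOV-aware driver plus root privileges are required.

```python
# Minimal sketch: query and enable virtual functions (VFs) on an SR-IOV-capable
# NIC via the sysfs controls exposed by newer Linux kernels
# (sriov_totalvfs / sriov_numvfs). The interface name and VF count below are
# hypothetical; requires root privileges and an SR-IOV-aware driver.
from pathlib import Path

def sriov_device_dir(interface):
    """Path to the PCI device directory backing a network interface."""
    return Path("/sys/class/net") / interface / "device"

def enable_vfs(interface, num_vfs):
    dev = sriov_device_dir(interface)
    total = int((dev / "sriov_totalvfs").read_text())
    print(f"{interface}: adapter supports up to {total} virtual functions")
    if num_vfs > total:
        raise ValueError(f"requested {num_vfs} VFs, adapter supports {total}")
    # Writing to sriov_numvfs asks the driver to create the VFs.
    # (If VFs are already enabled, write 0 first to reset the count.)
    (dev / "sriov_numvfs").write_text(str(num_vfs))
    print(f"{interface}: {num_vfs} virtual functions enabled")

if __name__ == "__main__":
    enable_vfs("eth0", 4)  # hypothetical interface name and VF count
```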

MR-IOV takes I/O virtualization a step further and extends this capability across multiple physical servers. This is accomplished by extending the PCIe bus into a chassis external to the servers, possibly at the top of the rack; all the servers in the rack would then connect to this PCIe chassis using a relatively simple PCIe bus extender adapter. Network, graphics or other adapters, especially expensive adapters, can then be placed into the external chassis to allow sharing of the adapters by multiple servers.

An interesting application of this type of technology would be to use SR-IOV- or MR-IOV-capable RAID controllers or SAS/Serial Advanced Technology Attachment (SATA) adapters for moving guest VMs without the need for a SAN. Also, imagine an SR-IOV-capable NIC that could service requests for connections between guest virtual machines that were in the same physical server, eliminating the need for an external switch.

The long pole in this tent is getting support from the hypervisor vendors. As of this writing, only Red Hat Enterprise Linux 6 supports SR-IOV for a limited set of NICs. Microsoft has been rather tight-lipped about features in the next version of Windows, but it wouldn’t be too surprising to see some SR-IOV support in the next version of Windows Hyper-V. It’s not known at this time if SR-IOV support will show up in VMware products anytime soon.

BIO: Dennis Martin has been working in the IT industry since 1980, and is the founder and president of Demartek, a computer industry analyst organization and testing lab.

This was first published in October 2011
