10 quick and easy ways to boost storage performance

6. Streamline the server side

Today's multicore servers have CPU power to spare, but network interface cards (NICs) and HBAs have traditionally been locked to a single processor core. Receive-side scaling (RSS) allows these interface cards to distribute processing across multiple cores, accelerating performance.
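
To picture what RSS does, consider the sketch below: a hash of each flow's addresses and ports selects a receive queue, so every packet of a given flow lands on the same core while different flows spread across all of them. This is an illustrative Python sketch, not driver code; real adapters compute a Toeplitz hash in hardware, and the queue count here is hypothetical.

```python
import hashlib

NUM_QUEUES = 8  # hypothetical: one receive queue pinned to each CPU core

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    """Map a TCP/IP flow to a receive queue, RSS-style.

    Real NICs compute a Toeplitz hash over these header fields in
    hardware; a generic digest illustrates the same effect: packets
    of one flow always hit the same core, while distinct flows
    spread across all available cores.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hashlib.sha1(key).digest()[0] % NUM_QUEUES

# Two concurrent flows can be serviced by two different cores in parallel.
print(rss_queue("10.0.0.5", "10.0.0.9", 49152, 3260))  # iSCSI flow A
print(rss_queue("10.0.0.6", "10.0.0.9", 49153, 3260))  # iSCSI flow B
```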

Hypervisors face another task when it comes to sorting I/O and directing it to the correct virtual machine guest, and this is where Intel Corp.'s virtual machine device queues (VMDq) technology steps in. VMDq allows the Ethernet adapter to communicate with hypervisors like Microsoft Hyper-V and VMware ESX, grouping packets according to the guest virtual machine they're destined for.
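
Conceptually, VMDq moves that sorting work out of the hypervisor and into the adapter. The Python sketch below (illustrative only; the MAC addresses and queue names are made up) shows the classification step that a VMDq-capable NIC performs in hardware.

```python
from collections import defaultdict

# Hypothetical mapping: each guest's virtual NIC MAC address gets its
# own hardware queue on the adapter.
vm_queue_for_mac = {
    "00:15:5d:01:02:03": "vm-queue-0",  # guest A
    "00:15:5d:01:02:04": "vm-queue-1",  # guest B
}

queues = defaultdict(list)

def classify(frame: dict) -> None:
    """Sort an incoming frame into its guest's queue, VMDq-style.

    On a VMDq-capable adapter this happens in NIC hardware, so the
    hypervisor simply drains each queue into the matching virtual
    machine instead of inspecting and routing every frame itself.
    """
    queue = vm_queue_for_mac.get(frame["dst_mac"], "default-queue")
    queues[queue].append(frame)

classify({"dst_mac": "00:15:5d:01:02:03", "payload": b"..."})
classify({"dst_mac": "00:15:5d:01:02:04", "payload": b"..."})
print({name: len(frames) for name, frames in queues.items()})
```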

Technologies like RSS and VMDq help accelerate I/O traffic in demanding server virtualization applications, sustaining throughput that a single processor core could never handle alone. By leveraging these technologies, Microsoft and VMware have demonstrated that even demanding production workloads can run well on virtual machines.

7. Get active multipathing

Setting up multiple paths between servers and storage systems is a traditional approach for high availability, but advanced active implementations can improve performance as well.

Basic multipathing software merely provides for failover, bringing up an alternative path in the event of a loss of connectivity. So-called "dual-active" configurations assign different workloads to each link, improving utilization but still restricting any given connection to a single path. Some storage arrays support trunking multiple connections together or a full active-active configuration, where I/O is balanced across all links simultaneously and their combined bandwidth can actually be realized.
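
The difference between these policies is easy to see in a small Python sketch. The path names and health flags below are hypothetical, not any vendor's MPIO API; the point is the contrast between plain failover and round-robin active-active selection.

```python
import itertools

PATHS = ["hba0:port0", "hba1:port1"]  # hypothetical redundant SAN paths

def failover_path(healthy: dict) -> str:
    """Basic multipathing: send all I/O down the first healthy path."""
    for path in PATHS:
        if healthy[path]:
            return path
    raise RuntimeError("no path to storage")

_rotation = itertools.cycle(PATHS)

def active_active_path(healthy: dict) -> str:
    """Active-active multipathing: rotate I/O across every healthy path."""
    for _ in PATHS:
        path = next(_rotation)
        if healthy[path]:
            return path
    raise RuntimeError("no path to storage")

healthy = {"hba0:port0": True, "hba1:port1": True}
print([active_active_path(healthy) for _ in range(4)])  # alternates both links
healthy["hba0:port0"] = False                           # simulate a link failure
print(failover_path(healthy))                           # I/O continues on the survivor
```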

Modern multipathing frameworks like Microsoft MPIO, Symantec Dynamic Multi-Pathing (DMP) and VMware PSA use storage array-specific plug-ins to enable this sort of active multipathing. Ask your storage vendor if a plug-in is available, but don't be surprised if it costs extra or requires a special enterprise license.

8. Deploy 8 Gbps Fibre Channel

Fibre Channel throughput has continually doubled since the first 1 Gbps FC products appeared, yet backwards compatibility and interoperability have been maintained along the way. Upgrading to 8 Gbps FC is a simple way to accelerate storage I/O, and it can be remarkably affordable: today, 8 Gbps FC switches and HBAs are widely available and priced approximately the same as common 4 Gbps parts. As SANs are expanded and new servers and storage arrays are purchased, buying 8 Gbps FC gear instead of 4 Gbps is a no-brainer, and 16 Gbps FC equipment is on the way.

Remember that throughput (usually expressed as megabytes per second) isn't the only metric of data storage performance; latency is just as critical. Usually expressed as I/O operations per second (IOPS) or response time (measured in milliseconds or microseconds), latency reflects how quickly individual I/O requests are serviced, and it has become critical in virtualized server environments. Stacking multiple virtual servers behind a single I/O interface requires quick processing of packets, not just the ability to stream large amounts of sequential data.

Each doubling of Fibre Channel throughput also halves the time an individual I/O operation spends on the wire. Therefore, 8 Gbps FC isn't just twice as fast in terms of megabytes per second; it can also handle twice as many I/O requests as 4 Gbps FC, which is a real boon for server virtualization.
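
A little back-of-the-envelope Python makes the point, using round effective data rates of roughly 400 MBps for 4 Gbps FC and 800 MBps for 8 Gbps FC, and counting wire time only (real-world IOPS also depend on protocol and device latency):

```python
IO_SIZE = 8 * 1024  # bytes in a typical small I/O request

# Illustrative effective data rates, per direction:
for name, mbps in [("4 Gbps FC", 400), ("8 Gbps FC", 800)]:
    wire_time_us = IO_SIZE / (mbps * 1024 * 1024) * 1e6  # microseconds on the wire
    max_iops = 1e6 / wire_time_us                        # wire time only
    print(f"{name}: {wire_time_us:5.1f} us per 8 KB I/O, ~{max_iops:,.0f} IOPS per link")
```

Doubling the link speed halves the wire time per request (about 19.5 us down to about 9.8 us here), which is exactly why the ceiling on small-block IOPS doubles as well.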

9. Employ 10 Gbps Ethernet (10 GbE)

Fibre Channel isn't alone in cranking up its speed. Ethernet performance has recently jumped by a factor of 10, with 10 Gbps Ethernet becoming increasingly common and affordable, but 10 GbE storage array availability lags somewhat behind NICs and switches. Environments using iSCSI or NAS protocols like SMB and NFS can experience massive performance improvements by moving to 10 Gbps Ethernet, provided such a network can be deployed.

An alternative to end-to-end 10 Gb Ethernet is trunking or bonding 1 Gbps Ethernet links using the Link Aggregation Control Protocol (LACP). In this way, one can create multigigabit Ethernet connections to the host, between switches or to arrays that haven't yet been upgraded to 10 GbE. Note that LACP balances traffic per flow, so any single conversation is still limited to the speed of one link; the aggregate helps most when many flows run concurrently. This helps address the "Goldilocks problem" where Gigabit Ethernet is too slow but 10 Gbps Ethernet isn't yet attainable.
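
The sketch below shows why that per-flow limit exists: a typical layer-2 hash policy picks the egress link from the frame's MAC addresses, so each conversation sticks to one wire. The MAC addresses and four-link bond here are hypothetical.

```python
import hashlib

BOND = ["eth0", "eth1", "eth2", "eth3"]  # four bonded 1 GbE links (hypothetical)

def egress_link(src_mac: str, dst_mac: str) -> str:
    """Pick an egress link the way a layer-2 LACP hash policy does.

    Hashing header fields keeps every frame of one conversation on
    the same physical link (preserving frame order), so aggregate
    bandwidth only grows when many flows run in parallel.
    """
    digest = hashlib.md5(f"{src_mac}->{dst_mac}".encode()).digest()
    return BOND[digest[0] % len(BOND)]

print(egress_link("00:1b:21:aa:00:01", "00:1b:21:bb:00:09"))  # flow A's link
print(egress_link("00:1b:21:aa:00:02", "00:1b:21:bb:00:09"))  # flow B may differ
```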

Fibre Channel over Ethernet (FCoE) brings together the Fibre Channel and Ethernet worlds, promising better performance and greater flexibility. Although one might assume the 10 Gbps Ethernet links used by FCoE would be just 25% faster than 8 Gbps FC, the difference in usable throughput is closer to 50%, thanks to a more efficient encoding scheme. FCoE also promises reduced I/O latency, though that advantage shrinks when a bridge to a traditional Fibre Channel SAN or storage array sits in the path. In the long term, FCoE will improve performance, and some environments are ready for it today.
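
The arithmetic behind that figure is simple enough to check: 8 Gbps FC signals at 8.5 GBaud using 8b/10b encoding (8 data bits carried in every 10 bits on the wire), while 10 GbE signals at 10.3125 GBaud using the far leaner 64b/66b scheme.

```python
# Usable payload rates after encoding overhead:
fc8_payload  = 8.5     * 8 / 10    # 8 Gbps FC, 8b/10b  -> 6.8 Gbps
ge10_payload = 10.3125 * 64 / 66   # 10 GbE,   64b/66b -> 10.0 Gbps

print(f"8 Gbps FC : {fc8_payload:.2f} Gbps usable")
print(f"10 GbE    : {ge10_payload:.2f} Gbps usable")
print(f"advantage : {ge10_payload / fc8_payload - 1:.0%}")  # ~47%, roughly 50%
```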

10. Add cache

The quickest I/O request is the one that's never issued, which makes caching, as a means of speeding things up, a close second. Caches are appearing throughout the I/O chain, promising improved responsiveness by storing frequently requested data for rapid retrieval later. This is hardly a new technique, but interest has intensified with the advent of affordable NAND flash memory capacity.

There are essentially three types of cache offered today:

  1. Host-side caches place NVRAM or NAND flash in the server, often on a high-performance PCI Express card. These keep I/O off the network but are only useful on a server-by-server basis.
  2. Caching appliances sit in the network, reducing the load on the storage array. These serve multiple hosts but introduce concerns about availability and data consistency in the event of an outage.
  3. Storage array-based caches and tiered storage solutions are also common, including NetApp's Flash Cache cards (formerly called Performance Acceleration Module or PAM), EMC's Fully Automated Storage Tiering (FAST) and Hitachi Data Systems' Dynamic Tiering (DT).
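
Wherever the cache sits, the core mechanism is the same: keep hot blocks in fast media and evict cold ones when space runs out. Here's a minimal least-recently-used (LRU) read cache in Python, purely illustrative; production caches add write handling, persistence and the consistency logic mentioned above.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal read cache: hold the most recently used blocks in fast
    media (RAM or flash) and evict the coldest block when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()

    def read(self, lba: int, fetch_from_disk) -> bytes:
        if lba in self.blocks:               # cache hit: no disk I/O at all
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        data = fetch_from_disk(lba)          # cache miss: take the slow path
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used
        return data

cache = LRUCache(capacity=2)
cache.read(7, lambda lba: b"block7")  # miss, fetched from disk
cache.read(7, lambda lba: b"block7")  # hit, served from cache
```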

Still no silver bullet

There are many options for improving storage performance, but there's still no single silver bullet. Although storage vendors are quick to claim that their latest innovations (from tiered storage to FCoE) will solve data storage performance issues, none is foolish enough to focus on just one area. The most effective performance improvement strategy starts with an analysis of the bottlenecks found in existing systems and ends with a plan to address them.

BIO: Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.

This was first published in July 2011
