New connections: SAS and iSCSI HBAs

Serial-attached SCSI and iSCSI host bus adapters (HBAs) represent the latest in server-to-storage connectivity. Tailored specifically to the needs of two emerging storage protocols, these new HBAs ensure that performance isn't sacrificed when one of these alternatives to Fibre Channel storage is deployed.

New server-to-storage host bus adapters offer significant performance gains, but there are some pitfalls to avoid.

Out with the old, in with the new: Serial-attached SCSI (SAS) and iSCSI host bus adapters (HBAs) are the new vanguard in server-to-storage connectivity. SAS HBAs are poised to replace parallel SCSI HBAs, while iSCSI HBAs offer companies the option to use an Ethernet storage network in lieu of a more costly Fibre Channel (FC) SAN for all of their servers.

Of course, SAS and iSCSI HBAs address different corporate storage connectivity needs. SAS HBAs help eliminate storage I/O throughput bottlenecks to internal or direct-attached storage (DAS) while expanding the number of storage devices a single server can address to more than 16,000. iSCSI HBAs open the door for organizations to connect high-performance servers to their Ethernet storage network.
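
Where does that 16,000-plus figure come from? A quick back-of-the-envelope sketch, assuming the commonly cited SAS topology limits of 128 devices per edge expander and 128 edge expanders behind a fan-out expander (see the expander discussion later in this article):

    # SAS addressable-device arithmetic, assuming the commonly cited limits:
    # up to 128 edge expanders behind one fan-out expander, each edge
    # expander addressing up to 128 devices.
    EDGE_EXPANDERS = 128
    DEVICES_PER_EDGE_EXPANDER = 128
    print(EDGE_EXPANDERS * DEVICES_PER_EDGE_EXPANDER)  # 16,384 devices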

A single SAS HBA card such as LSI Logic Corp.'s LSISAS3080X-R contains eight separate ports for I/O. Each port operates at 3Gb/sec at half duplex and can communicate concurrently with both SAS and SATA hard drives. New iSCSI cards, such as Alacritech Inc.'s SES2100 and QLogic Corp.'s QLA4050C, offload TCP/IP processing from the server's CPU, which helps Ethernet networks meet the stringent I/O and throughput requirements of high-performance servers.
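
Those per-port numbers add up quickly against a shared parallel SCSI bus. Here's a rough sketch of the math; it assumes SAS's 8b/10b link encoding (so a 3Gb/sec line rate carries roughly 300MB/sec of data) and ignores command and framing overhead:

    # Rough aggregate bandwidth of an 8-port SAS HBA vs. a shared Ultra320
    # SCSI bus. Assumes 8b/10b encoding: 10 bits on the wire per data byte.
    SAS_LINE_RATE_GBPS = 3.0
    SAS_PORTS = 8                                   # e.g., the LSISAS3080X-R
    per_port_mbs = SAS_LINE_RATE_GBPS * 1000 / 10   # ~300 MB/sec per port
    aggregate_mbs = per_port_mbs * SAS_PORTS        # dedicated paths, not shared

    ULTRA320_MBS = 320                              # one bus shared by all devices

    print(f"SAS: {per_port_mbs:.0f} MB/sec per port, "
          f"{aggregate_mbs:.0f} MB/sec across {SAS_PORTS} ports")
    print(f"Ultra320 SCSI: {ULTRA320_MBS} MB/sec shared")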

Why SCSI won't go away

Multiple factors are driving companies to move from SCSI to serial-attached SCSI (SAS) connectivity for their internal and DAS requirements. EMC Corp., for example, no longer offers SCSI front-end connectivity for its Symmetrix and Clariion storage arrays, and a number of storage vendors plan to end SCSI connectivity on their disk drives sometime in 2008. SAS host bus adapters (HBAs) also offer significant performance benefits over SCSI, supporting up to eight separate channels that operate at 3Gb/sec as opposed to 320MB/sec on a shared parallel SCSI bus.

Despite this momentum away from parallel SCSI, SCSI HBAs remain a necessary component in enterprises. Key reasons SCSI HBAs will stick around include:

  • They're still needed for tape drives and other types of mass storage devices. Most vendors' SAS tape drives are still in development or just coming to market.
  • Parallel SCSI HBA components are mature and work.
  • Some older storage devices use only SCSI interfaces.

Despite their benefits, new iSCSI and SAS HBAs have their downsides. For example, SAS HBAs don't offer N_Port ID Virtualization (NPIV); as a result, a storage administrator must use one SAS HBA worldwide name per server. And if you need to reuse existing network-attached SCSI or FC storage devices with new iSCSI or SAS HBAs, you'll need a bridge or router for the SAS or iSCSI HBA to communicate with those legacy storage devices.

The need for speed

The need for higher performance drove Tim Bolden, president of Cary, NC-based iGlass Networks, to adopt SAS HBAs. His primary application server using a SCSI HBA and DAS was often I/O bound with queue depths staying at or exceeding five during peak times, which resulted in application slowdowns. This problem was especially visible because iGlass Networks provides real-time, outsourced network-monitoring solutions to its customers and the delays in processing the incoming alerts were encroaching on customer service-level agreements (SLAs).

Bolden was initially tempted to split the application and run it on four different servers to meet customer response time requirements. Instead, he purchased a new four-channel LSI Logic LSISAS3442E-R HBA along with Solid Access Technologies LLC's USSD 200SAS solid-state disk (SSD) array. Although the USSD 200SAS array played a big role in delivering improved application performance, the LSI Logic SAS HBA gave Bolden's application server four separate, dedicated 3Gb/sec paths to the disk on the USSD 200SAS array vs. the one shared 320MB/sec path he previously had to his SCSI-attached array. "By using a SAS HBA on the server I already had, my queue depth times [were reduced] to one on my existing server," says Bolden. This allowed him to "meet customer SLAs and avoid buying four new servers."
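
Readers who want to spot the same symptom can watch queue depth directly. Below is a minimal sketch of such a watcher; it's Linux-specific, relies on field 9 of /sys/block/<device>/stat ("I/Os currently in progress"), and the device name and threshold are examples rather than details from iGlass Networks' environment:

    # Sample how many I/Os are in flight on a block device and report how
    # often the queue depth sits at or above a threshold (Linux only).
    import time

    def inflight_ios(device="sda"):
        with open(f"/sys/block/{device}/stat") as f:
            return int(f.read().split()[8])  # field 9: I/Os currently in progress

    def watch(device="sda", threshold=5, samples=60, interval=1.0):
        busy = 0
        for _ in range(samples):
            if inflight_ios(device) >= threshold:
                busy += 1
            time.sleep(interval)
        print(f"{device}: queue depth >= {threshold} in {busy}/{samples} samples")

    watch("sda")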

iSCSI HBAs can also improve I/O performance on Ethernet LANs. Both Alacritech network interface cards (NICs) and QLogic iSCSI HBAs include a TCP/IP offload engine (TOE) to handle the processing of TCP/IP and iSCSI traffic, although there are substantial differences between iSCSI cards.

Alacritech offers two types of iSCSI NICs--TOE NICs (TNICs) and iSCSI accelerators. Its SEN2000 and SEN2100 TNICs offload the entire TCP/IP stack, including iSCSI traffic, from the server CPU onto the card, allowing them to accelerate all TCP traffic--iSCSI as well as NFS or CIFS. Alacritech's SES2000 and SES2100 iSCSI accelerators, however, offload only iSCSI traffic and leave all other TCP traffic for the server CPU to manage.

Alacritech's cards are classified as NICs rather than HBAs because they don't natively include an initiator (the software that negotiates the connection between the controller and the storage target), which is a prerequisite for cards classified as HBAs. Instead, Alacritech relies on the software initiator provided by Microsoft Windows to make this connection, which is also why its NICs can send and receive any kind of TCP/IP traffic. The QLogic iSCSI HBA, by contrast, uses a hardware initiator that's part of the HBA. Because of this design, QLogic HBAs support only iSCSI-based TCP/IP traffic and not other TCP/IP traffic like SMTP or FTP.

Compared to standard NICs, iSCSI cards from either vendor deliver a quantifiable performance increase when doing iSCSI processing. Chris Sims, a network engineer with Clayton County Water Authority in Morrow, GA, found that using a QLogic QLA4052C iSCSI HBA on his VMware ESX Servers connected to an EqualLogic Inc. PS300 storage array resulted in a 10% to 20% improvement in host CPU performance vs. standard NICs.

Alacritech NICs and QLogic iSCSI HBAs each handle this TCP offload differently. QLogic iSCSI HBAs offload and handle all TCP functions natively, while Alacritech NICs leave a small percentage (generally 1% or less) of the TCP processing--error handling and connection management--to the host CPU.

Alacritech NICs improve performance in two ways that roughly parallel how QLogic iSCSI HBAs handle the chore. First, Alacritech NICs use an ASIC to expedite the handling of TCP instructions; QLogic iSCSI HBAs, by comparison, use a hardware logic-based TCP/IP offload engine with two embedded specialized processors that mainly expedite the iSCSI offload.

The second way Alacritech NICs speed data movement is by moving incoming data directly into the application buffer. Traditional NICs receive data into their buffers, which the host CPU then copies to a network buffer reserved in the host's memory. The host CPU then removes the iSCSI packaging around the data and moves the data from the host's network buffer into the host's reserved application buffer where the application can access it.

QLogic iSCSI HBAs don't eliminate these two data moves; however, they eliminate some of the host CPU overhead because the data is already SCSI block data on the iSCSI HBA and can be moved into the application buffer without requiring the host CPU to remove the iSCSI packaging. The ASIC on Alacritech NICs eliminates all of these steps that the host CPU must normally take by moving the data directly from the network card buffer to the buffer on the host reserved for the application. More HBA vendors are beginning to use ASICs to expedite the processing of TCP traffic. QLogic recently announced that it intends to license Alacritech TCP offload patents and use Alacritech ASICs in a future generation of QLogic 10Gb/sec TOE iSCSI HBAs.
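
One way to summarize those three receive paths is to count the work left for the host CPU. The sketch below simply models the descriptions above; the step names are invented for illustration, not real driver structures:

    # Count host-CPU work in the three receive paths described above.
    PATHS = {
        "standard NIC": [
            ("NIC",  "DMA frame into NIC/driver buffer"),
            ("host", "copy data into host network buffer"),
            ("host", "strip iSCSI/TCP packaging"),
            ("host", "copy payload into application buffer"),
        ],
        "QLogic iSCSI HBA": [
            ("HBA",  "TCP/iSCSI processing on the card"),
            ("host", "copy SCSI block data into network buffer"),
            ("host", "copy data into application buffer"),
        ],
        "Alacritech NIC": [
            ("NIC",  "ASIC strips TCP/iSCSI packaging"),
            ("NIC",  "DMA payload straight into application buffer"),
        ],
    }

    for name, steps in PATHS.items():
        copies = sum(1 for who, what in steps
                     if who == "host" and what.startswith("copy"))
        host_steps = sum(1 for who, _ in steps if who == "host")
        print(f"{name}: {copies} host data copies, {host_steps} host CPU steps")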

Bumps in the road

With new technologies come the typical bumps in the road that can take hours, days or even weeks for users to diagnose and fix. For example, users of QLogic's QLA4050C iSCSI HBA may encounter an "error code 10" message when they try to load the HBA's driver. QLogic found that this error code stemmed from the inability of the BIOS on new server systems to read the BIOS of their iSCSI HBAs. To correct the problem, you must install a new driver available through QLogic's support Web site (after first opening a call with QLogic support).

For users looking to connect high-performance servers to an iSCSI SAN, diagnosing performance problems with iSCSI HBAs becomes a more complicated and time-consuming proposition. Two features that are critical to tuning performance in an iSCSI SAN are the iSCSI card's flow control and jumbo frame settings.

In iSCSI NICs and HBAs, the flow-control feature is typically turned on by default. Its purpose is to let lower speed devices communicate with higher speed ones by pacing the flow of traffic between the HBA and the Ethernet switch.

InfiniBand HCAs

InfiniBand host channel adapters (HCAs) are a less-than-ideal replacement for other types of storage connectivity. While InfiniBand HCAs offer the highest bandwidth (up to 10Gb/sec) and lowest latency of any HBA, and provide a single interface for all network connectivity and storage, the use of InfiniBand HCAs is largely limited to high-performance clusters for server interprocess communication. Among the reasons cited by users for not adopting the technology are lack of familiarity with the protocol and little vendor storage support.

Some vendors are testing the InfiniBand waters. Cisco Systems Inc.'s InfiniBand HCA supports the SCSI RDMA Protocol (SRP), which encapsulates SCSI commands in the InfiniBand protocol. These commands are then sent through the InfiniBand HCA to an InfiniBand-to-Ethernet or InfiniBand-to-Fibre Channel (FC) gateway. The gateway converts the SCSI commands into TCP/IP or FC frames, and then sends them to the appropriate storage device.

Three factors on the Ethernet switch can influence how well the iSCSI card performs flow control. The first is whether the switch has flow control enabled at all. The feature must be enabled on a port-by-port basis by network administrators, as flow control is typically turned off by default on most vendors' Ethernet switches.

The second possible problem area is how flow control is enabled on the Ethernet switch port, which tells the iSCSI HBA when to stop and start the flow of traffic. Ideally, the switch port should only signal the HBA to stop the flow of traffic when the switch port's memory buffer is nearly full and then restart the flow when its buffer is nearly empty, although not all network switches handle it that way. Richard Palmer, a senior technical instructor at EqualLogic, has encountered some Ethernet switches that signal the HBA to stop and start traffic within an I/O rather than wait for the entire switch port buffer to fill up.

The size of the memory on the Ethernet switch port buffer, and whether it's shared or dedicated, is the third factor affecting performance. If memory is shared among ports on a network switch and a busy iSCSI card fills up the buffer, it will affect the throughput of all iSCSI cards attached to those ports because the Ethernet switch will instruct those iSCSI cards to throttle back their performance until it empties the buffer.
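
The ideal stop/start behavior described above is essentially a high/low watermark scheme on the switch port's buffer. A toy simulation makes the dynamic visible; every number here is invented for illustration:

    # Watermark-based flow control on one switch port: pause the HBA when
    # the buffer is nearly full, resume when it's nearly empty.
    BUFFER_KB = 512
    HIGH_WATER = int(BUFFER_KB * 0.9)   # signal the HBA to stop here
    LOW_WATER = int(BUFFER_KB * 0.1)    # signal the HBA to restart here

    def simulate(ingress_kb, egress_kb, ticks=1000):
        level, paused, pauses = 0, False, 0
        for _ in range(ticks):
            if not paused:
                level = min(BUFFER_KB, level + ingress_kb)  # HBA fills the port
            level = max(0, level - egress_kb)               # switch drains it
            if not paused and level >= HIGH_WATER:
                paused, pauses = True, pauses + 1
            elif paused and level <= LOW_WATER:
                paused = False
        return pauses

    # A busy HBA that outruns the drain rate forces repeated pause cycles,
    # throttling its own throughput.
    print("pause events:", simulate(ingress_kb=12, egress_kb=8))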

If jumbo frames are turned on in the iSCSI card, this may aggravate the situation. Jumbo frames are typically 9,000 bytes--six times the size of a standard 1,500-byte Ethernet frame--and can significantly accelerate operations that require large sequential read-and-write operations, such as backups. However, if jumbo frames cause the memory on an Ethernet switch to continually fill up, flow control will kick in and have the opposite effect, slowing performance.
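
The arithmetic behind that speedup is straightforward. A quick sketch, assuming the typical 1,500-byte standard and 9,000-byte jumbo MTUs (the MTU values are conventional, not from the article):

    # Frames needed to move a 1GB backup stream at standard vs. jumbo MTU.
    # Fewer frames means fewer headers to process and interrupts to service.
    IP_TCP_HEADERS = 40                   # 20-byte IP + 20-byte TCP headers
    TRANSFER_BYTES = 1_000_000_000

    for mtu in (1500, 9000):
        payload = mtu - IP_TCP_HEADERS
        frames = -(-TRANSFER_BYTES // payload)   # ceiling division
        print(f"MTU {mtu}: {frames:,} frames per GB")

    # MTU 1500 -> ~685,000 frames; MTU 9000 -> ~112,000 frames: roughly a
    # sixfold drop in per-frame work -- provided the switch buffers can
    # absorb the larger frames without triggering flow control.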

SAS' shortcomings

SAS HBAs present their own set of management issues. Despite support within the SAS protocol for more than 16,000 storage devices in a SAS domain, the most storage devices that the firmware on many SAS HBAs can address at one time is a few hundred. Zoning and disk drive security are other functions that are available only in rudimentary forms in SAS domains at this time. To ensure SAS HBAs can access only assigned LUNs, administrators must first assign LUNs to a port on a SAS array, control access to the LUNs by using partitions or LUN masking on the SAS storage array, and then implement zoning on the SAS expanders. (There are two types of SAS expanders: the edge expander and the fan-out expander. A SAS edge expander, which plugs into the SAS port on a server, will support up to 128 devices without enhancement over distances of up to eight meters. A fan-out expander, which offers more routing capability and other features at a higher price, can also connect up to 128 devices.)
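
That layered access control boils down to two independent lookups that must both succeed: the expander zoning check and the array's LUN mask. A minimal sketch of the logic; the zone names, WWN-style identifiers and tables are all invented for illustration:

    # An initiator reaches a LUN only if expander zoning places it in the
    # same zone as the array port AND the array's LUN mask lists it.
    EXPANDER_ZONES = {
        "zone1": {"hba-500605b000000001", "array-port-A"},
        "zone2": {"hba-500605b000000002", "array-port-B"},
    }
    LUN_MASKS = {
        ("array-port-A", 0): {"hba-500605b000000001"},
        ("array-port-B", 0): {"hba-500605b000000002"},
    }

    def can_access(initiator, array_port, lun):
        zoned = any(initiator in zone and array_port in zone
                    for zone in EXPANDER_ZONES.values())
        masked_in = initiator in LUN_MASKS.get((array_port, lun), set())
        return zoned and masked_in

    print(can_access("hba-500605b000000001", "array-port-A", 0))  # True
    print(can_access("hba-500605b000000001", "array-port-B", 0))  # False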

LSI Logic recommends that users limit the number of zones on SAS expanders to no more than three, as the expanders must arbitrate the flow of data to the different storage devices attached to them. The ability to create SAS zones, and the number of zones supported, is limited to fan-out expanders and hybrid expanders (combinations of fan-out and edge expanders), although some of these zoning limitations are addressed in the current working draft of the SAS-2 specification.

Another potential concern is how SAS HBAs communicate with SAS and SATA disk drives. Although SAS HBAs provide the potential to connect SAS and SATA disk drives on the same channel, SAS and SATA disk drives have different interfaces and respond to commands from SAS HBAs differently.

Mark Miquelon, LSI Logic's director of HBA products, Storage Components Group, says that when an LSI Logic SAS HBA communicates with SAS disk drives, the SAS HBA can operate in full duplex mode, allowing it to send commands and receive data simultaneously. When driving multiple SAS disk drives at the same time, command queuing in the SAS HBA becomes a big factor. It allows an initiator to load a SAS disk drive with commands, close that connection, go on to the next drive and load that one with commands. "The individual SAS drives can then come back and request a new connection to the initiator when they have data or a response," says Miquelon. "This prevents the initiator from getting tied up while the drive is working on the command from the initiator."

However, SAS HBAs talk to SATA disk drives differently. Many SATA disk drives require a connection to remain in place at all times and therefore don't allow the disconnect/reconnect cycle permitted by SAS drives. These communications between SAS HBAs and SAS and SATA disk drives become more complicated with the introduction of a SAS expander between the HBA and drives.
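
A toy timing model makes the payoff of the SAS disconnect/reconnect cycle concrete. The tick counts below are invented; the point is that a SAS-style initiator overlaps every drive's service time, while a hold-the-connection flow serializes it:

    # Contrast the two connection styles described above (toy model).
    def sas_style(drives, issue=1, service=5):
        # Load each drive with commands and disconnect; drives reconnect
        # when ready, so their service times overlap.
        return drives * issue + service

    def hold_connection_style(drives, issue=1, service=5):
        # Keep the connection open per command, so the initiator waits out
        # each drive's service time before moving on.
        return drives * (issue + service)

    for n in (1, 4, 8):
        print(f"{n} drives: SAS-style {sas_style(n)} ticks, "
              f"hold-connection {hold_connection_style(n)} ticks")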

A final concern that administrators need to address is interoperability among SAS HBAs, expanders and disk drives from different SAS vendors. Adaptec Inc., a SAS HBA vendor, recommends users proceed with caution when intermixing SAS devices from different vendors. "Users should view SAS as a single-vendor play for now and check compatibility matrixes before implementing different vendors' products in a single SAS domain," says Paul Vogt, Adaptec's director of product marketing.

SAS HBAs, iSCSI HBAs and NICs are beginning to make inroads into the traditional DAS and SAN domains of SCSI and FC. It's best to implement these products in small, controlled environments when existing technologies like SCSI and FC are either too slow or too expensive. With more storage devices such as real and virtual tape drives adding support for SAS and iSCSI, new specifications like SAS-2 in development and faster processors to handle the overhead associated with 10Gb/sec Ethernet networks, expect SAS and iSCSI HBAs to become increasingly popular alternatives to SCSI and FC in the coming year.

This was first published in April 2007
