Feature

New connections: SAS and iSCSI HBAs

This article can also be found in the Premium Editorial Download "Storage magazine: Backup overhaul: From a mainframe to an open-systems environment."

InfiniBand HCAs


InfiniBand host channel adapters (HCAs) are a less-than-ideal replacement for other types of storage connectivity. While InfiniBand HCAs offer the highest bandwidth (up to 10Gb/sec) and lowest latency of any HBA, and provide a single interface for all network connectivity and storage, the use of InfiniBand HCAs is largely limited to high-performance clusters for server interprocess communication. Among the reasons cited by users for not adopting the technology are lack of familiarity with the protocol and little vendor storage support.

Some vendors are testing the InfiniBand waters. Cisco Systems Inc.'s InfiniBand HCA supports the SCSI RDMA Protocol (SRP), which encapsulates SCSI commands in the InfiniBand protocol. These commands are then sent through the InfiniBand HCA to an InfiniBand-to-Ethernet or InfiniBand-to-Fibre Channel (FC) gateway. The gateway converts the SCSI commands into TCP/IP or FC frames, and then sends them to the appropriate storage device.
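The encapsulate-then-convert flow described above can be sketched in a few lines. This is a purely illustrative model, not real SRP or gateway code; all class and function names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ScsiCommand:
    cdb: bytes  # SCSI command descriptor block

@dataclass
class SrpFrame:
    tag: int
    payload: ScsiCommand  # SCSI command carried over InfiniBand

def gateway_convert(frame: SrpFrame, target_network: str) -> str:
    """Unwrap the SCSI command and re-frame it for Ethernet or FC.

    Models the gateway step: the SRP wrapper is discarded and the same
    SCSI command is re-encapsulated for the target transport.
    """
    cmd = frame.payload
    if target_network == "ethernet":
        return f"iSCSI PDU carrying CDB {cmd.cdb.hex()}"
    elif target_network == "fc":
        return f"FCP frame carrying CDB {cmd.cdb.hex()}"
    raise ValueError("unsupported gateway target")

# A READ(10) command (opcode 0x28) wrapped for InfiniBand transport:
frame = SrpFrame(tag=1, payload=ScsiCommand(cdb=bytes([0x28])))
print(gateway_convert(frame, "ethernet"))
print(gateway_convert(frame, "fc"))
```

The point of the sketch is that the SCSI command itself is transport-agnostic; only the framing around it changes at the gateway.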

Three factors on the Ethernet switch can influence how well the iSCSI card performs flow control. The first is whether the switch's flow-control feature is enabled at all. It must be enabled on a port-by-port basis by network administrators, as flow control is typically turned off by default on most vendors' Ethernet switches.
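As a sketch of that port-by-port step, on many Cisco Catalyst switches receive flow control is enabled per interface along these lines (the interface name is an example, and exact syntax varies by vendor and software release):

```
switch# configure terminal
switch(config)# interface GigabitEthernet1/0/12
switch(config-if)# flowcontrol receive on
switch(config-if)# end
switch# show flowcontrol interface GigabitEthernet1/0/12
```

Each port facing an iSCSI HBA would need the same treatment; a port left at the default gives the HBA no pause signal at all.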

The second possible problem area is how flow control is enabled on the Ethernet switch port, which tells the iSCSI HBA when to stop and start the flow of traffic. Ideally, the switch port should only signal the HBA to stop the flow of traffic when the switch port's memory buffer is nearly full and then restart the flow when its buffer is nearly empty, although not all network switches handle it that way. Richard Palmer, a senior technical instructor at EqualLogic, has encountered some Ethernet switches that signal the HBA to stop and start traffic within an I/O rather than wait for the entire switch port buffer to fill up.
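The well-behaved watermark behavior described above amounts to hysteresis: pause only when nearly full, resume only when nearly empty. The following is an illustrative model, not vendor code, with thresholds chosen arbitrarily.

```python
class SwitchPortBuffer:
    """Toy model of a switch port buffer with watermark-based flow control."""

    def __init__(self, capacity: int, high_pct: float = 0.9, low_pct: float = 0.1):
        self.capacity = capacity
        self.high = capacity * high_pct  # pause threshold (nearly full)
        self.low = capacity * low_pct    # resume threshold (nearly empty)
        self.used = 0
        self.paused = False              # True = HBA has been told to stop

    def enqueue(self, nbytes: int) -> None:
        self.used = min(self.capacity, self.used + nbytes)
        if self.used >= self.high:
            self.paused = True           # signal the HBA: stop sending

    def drain(self, nbytes: int) -> None:
        self.used = max(0, self.used - nbytes)
        if self.paused and self.used <= self.low:
            self.paused = False          # signal the HBA: resume sending

port = SwitchPortBuffer(capacity=100_000)
port.enqueue(95_000)
assert port.paused           # nearly full -> pause
port.drain(50_000)
assert port.paused           # half-drained: no premature resume
port.drain(40_000)
assert not port.paused       # nearly empty -> resume
```

The switches Palmer describes behave as if `high` and `low` sat close together in the middle of the buffer, toggling pause/resume repeatedly within a single I/O instead of letting the buffer cycle between the extremes.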

The third factor is the size of the Ethernet switch port's memory buffer, and whether that memory is shared or dedicated. If memory is shared among ports on a network switch and one busy iSCSI card fills the buffer, throughput suffers for every iSCSI card attached to those ports, because the switch instructs all of them to throttle back until the buffer empties.
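The shared-pool effect can be shown in a few lines. Again this is a hypothetical sketch, not how any particular switch is implemented: one port's traffic exhausts the pool, and the pause decision then applies to every port drawing on it.

```python
class SharedBufferSwitch:
    """Toy model: several ports draw on one shared memory pool."""

    def __init__(self, ports: list[str], pool_bytes: int):
        self.ports = ports
        self.pool = pool_bytes
        self.used = 0

    def enqueue(self, port: str, nbytes: int) -> dict[str, bool]:
        """Queue traffic on one port; return per-port send permission."""
        self.used += nbytes
        may_send = self.used < self.pool
        # With a shared pool, exhaustion pauses ALL ports, not just
        # the one that filled it.
        return {p: may_send for p in self.ports}

switch = SharedBufferSwitch(ports=["p1", "p2", "p3"], pool_bytes=100_000)
status = switch.enqueue("p1", 120_000)  # one busy iSCSI port fills the pool
print(status)  # every port is throttled, not only p1
```

With dedicated per-port buffers, by contrast, only the busy port's sender would be paused, which is why the shared-versus-dedicated distinction matters when sizing an iSCSI network.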

If jumbo frames are enabled on the iSCSI card, they can aggravate the situation. A jumbo frame carries up to 9,000 bytes of payload, roughly six times that of a standard 1,500-byte Ethernet frame, which cuts per-frame overhead and can speed operations with large sequential reads and writes, such as backups. However, if jumbo frames cause the memory on an Ethernet switch to fill up continually, flow control will kick in and can have the opposite effect, erasing those gains.
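The rough arithmetic behind the jumbo-frame point is simple. Payload sizes here are the common nominal values; real per-frame overhead varies with headers and options.

```python
STANDARD_MTU = 1_500   # bytes, standard Ethernet payload
JUMBO_MTU = 9_000      # bytes, common jumbo-frame payload

transfer = 9_000_000   # e.g. a 9 MB sequential backup write

standard_frames = transfer // STANDARD_MTU  # frames at the standard MTU
jumbo_frames = transfer // JUMBO_MTU        # frames at the jumbo MTU

# Jumbo frames cut the frame count, and the per-frame processing
# that goes with it, by a factor of six.
print(standard_frames // jumbo_frames)  # -> 6
```

Fewer frames means fewer interrupts and less header processing per megabyte moved; but each jumbo frame also consumes six times the switch buffer space, which is how jumbo frames and a small shared buffer combine to trigger flow control.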

This was first published in April 2007
