HBAs: Why some are better than others

The storage chain is only as strong as the weakest HBA. Here's how to spot the strong ones.


The host bus adapter (HBA) is the Rodney Dangerfield of the storage area network (SAN). It gets no respect. An often-overlooked component in the deployment and adoption of enterprise SANs is the ubiquitous Fibre Channel (FC) HBA found in servers and storage arrays alike. But buyer beware: not all HBAs are alike, and they're starting to incorporate some pretty sophisticated technology and features.

SAN connectivity
While SAN deployments today are frequently associated with fiber optic cabling, the Fibre Channel (FC) specifications don't require it. All HBA vendors support and provide copper connector kits that generally cost up to $200 less than their fiber optic HBA counterparts.

While copper connectivity may seem like a small detail, it lets an organization that lacks the budget or in-house fiber cabling expertise enter the SAN space. And while copper is a robust solution, a copper deployment does have its limitations. First, copper tops out at 100MB/s, and there doesn't appear to be a current upgrade path for the technology. Second, it can only cover a maximum distance of around 100 ft., so it won't provide the same flexibility in distance that comes with a fiber optic solution.

With the increased emphasis in today's environment on doing more with less, HBAs provide a logical, cost-effective place to start. Every device on a SAN--other than the switch--requires an HBA. With most HBAs now sporting a retail price between $1,000 and $3,000, the value one vendor's HBA offers over another may well justify the additional investment or savings, depending on the functionality you need or want. But before we get into how they differ, and what features HBA vendors plan for the future, it's important to list what all HBAs have in common.

Most HBA buyers know they offer either 1Gb or 2Gb speeds (see "Decoding naming conventions"). They know that the HBAs have to work with the existing bus architecture on their servers and have to support some common FC protocols. They're also aware that HBAs use an unconventional naming scheme, carry an onboard I/O processor and connect physically to the SAN.

Bus speeds
Another commonly understood design fact is the bus architecture of the server the HBA fits into; knowing whether that bus is SBUS, PCI or PCI-X is a prerequisite to buying the HBA. But what are you truly getting--or not getting--when you buy one of these architectures, and what are the advantages of each?

The SBUS architecture was designed by Sun Microsystems, Inc. in 1989 and was the standard I/O interconnect for Sun computers for nearly the next decade. Beginning in July 1997, Sun moved to the more commonly accepted PCI bus starting with its UltraSPARC computers. SBUS HBAs now primarily exist to support the many older Sun servers and their underlying SBUS architectures.

Currently, three vendors--Emulex (Costa Mesa, CA), JNI (San Diego, CA) and QLogic (Aliso Viejo, CA)--support this technology, and none of the vendors contacted appear to have any plans for further development. In fact, the underlying SBUS architecture won't support the higher throughput offered by the 2Gb FC protocol because of its design limitations, so additional development would be pointless.

This contrasts with the current PCI architecture. While initially designed as a 32-bit, 33MHz bus, many cards now offer 64-bit, 66MHz speeds. Even with the higher bit count and clock speeds, the capabilities of this bus began to be stretched by the late 1990s by increasingly bandwidth-hungry applications. While still prevalent, the PCI bus has rapidly been giving way to the newer PCI-X technology.

The PCI-X bus technology addresses many of the needs of today's high-bandwidth, high-performance environments. It's a 64-bit solution that can run at up to 133MHz, with effective throughput of roughly 1GB/s on the server bus. As a result, it delivers nearly a 10-fold performance increase over the standard PCI bus. All HBA vendors included in this article offer a product that supports this architecture, and their PCI-X cards should be backwards compatible with older PCI interfaces.
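For a rough sense of where those figures come from, here's a back-of-the-envelope calculation of peak bus bandwidth for each generation (these are theoretical maximums; sustained throughput on a real server is lower):

def bus_bandwidth_mb_per_s(width_bits, clock_mhz):
    # Peak bandwidth in MB/s: bytes transferred per cycle times millions of cycles per second.
    return width_bits / 8 * clock_mhz

print("PCI   32-bit/33MHz :", bus_bandwidth_mb_per_s(32, 33), "MB/s")    # ~132 MB/s
print("PCI   64-bit/66MHz :", bus_bandwidth_mb_per_s(64, 66), "MB/s")    # ~528 MB/s
print("PCI-X 64-bit/133MHz:", bus_bandwidth_mb_per_s(64, 133), "MB/s")   # ~1,064 MB/s, roughly 1GB/s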

Yet QLogic's vice president of marketing, Frank Berry, cautions potential buyers of PCI-X technology that this backwards compatibility with PCI isn't necessarily a given. He says the PCI-X architecture operates at one of two voltage levels. If the voltage of the PCI-X HBA differs from that of the PCI bus on the server, the PCI-X HBA won't work.

A feature you'll find on every HBA--and what gives an HBA its intrinsic value--is its onboard I/O processor, which offloads block-level storage I/O from the server's CPU onto the HBA. This is the primary value of an HBA because block-level storage I/O is extremely CPU-intensive.

LSI Logic, Milpitas, CA, believes its adapters have an edge in this space by providing interrupt coalescing, which can be tuned per system and application environment. By offloading the server's I/O processing to the HBA, it permits almost any server--regardless of its CPU capabilities--to join the SAN. This CPU offloading differs from network interface cards (NICs), which primarily rely on the server's CPU to process network traffic. While NIC manufacturers are currently working on what is called a TCP/IP Offload Engine (TOE) to replicate the offload functionality found in HBAs, HBAs currently provide the most proven method for deploying this CPU offload technology.
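To illustrate the idea behind interrupt coalescing, here's a minimal sketch; the figures and parameter names are invented for illustration and aren't LSI's actual tunables:

# Why interrupt coalescing lowers CPU load: instead of raising one interrupt per
# I/O completion, the HBA batches completions and interrupts the host once per
# batch. The numbers and names below are invented for illustration only.

def interrupts_per_second(iops, completions_per_interrupt):
    return iops / completions_per_interrupt

iops = 50_000
for batch in (1, 8, 32):
    print(f"coalesce {batch:2d} completions -> {interrupts_per_second(iops, batch):8.0f} interrupts/sec")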

Decoding naming conventions
For a newcomer to the SAN environment, HBA naming conventions can quickly become confusing. For example, consider the 1Gb and 2Gb naming convention. Paul Spagnolo, an IBM field engineer, says it's critical to distinguish between gigabit (Gb) and gigabyte (GB). Fibre Channel (FC) protocols, he explains, measure their performance in gigabits, not gigabytes. Yet FC speeds are referred to in megabytes (MB) per second. Why?

The current FC transfer speeds of 100MB/s and 200MB/s are arrived at in the following manner: FC uses a common encoding scheme called 8B/10B encoding, in which two extra bits are added to each 8-bit byte, making it 10 bits on the wire. The 100MB/s and 200MB/s performance numbers are arrived at by dividing the 1Gb and 2Gb line rates by the 10 bits needed to carry each byte. Half-duplex 1Gb with the 8B/10B encoding scheme thus becomes 100MB/s, and a full-duplex 1Gb FC link could potentially support 200MB/s, if the application could sustain that speed.
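A quick worked calculation makes the conversion concrete (the actual FC line rates of 1.0625 and 2.125 gigabaud explain why the math lands near the quoted round numbers):

# Worked example of the Gb-vs-GB arithmetic. FC's nominal "1Gb" link actually
# signals at 1.0625 gigabaud; with 8B/10B encoding, every data byte costs 10
# bits on the wire, which is how the quoted 100MB/s and 200MB/s figures arise.

def fc_payload_mb_per_s(line_rate_gbaud):
    bits_per_second = line_rate_gbaud * 1e9
    return bits_per_second / 10 / 1e6   # 10 encoded bits carry each data byte

print(f"1Gb FC: ~{fc_payload_mb_per_s(1.0625):.0f} MB/s")   # ~106 MB/s, quoted as 100MB/s
print(f"2Gb FC: ~{fc_payload_mb_per_s(2.125):.0f} MB/s")    # ~212 MB/s, quoted as 200MB/s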

While this may seem like a small point, this knowledge comes to bear when sizing performance throughput on the SAN. If you begin with the false design assumption that you have 1GB/s or 2GB/s of throughput rather than 1Gb/s or 2Gb/s, you may be setting yourself up for bottlenecks in your SAN design.

A more confusing concept in the SAN space is the naming scheme used by HBAs, the World Wide Name (WWN). While it may sound confusing, it corresponds almost exactly to the Media Access Control (MAC) address found on every network interface card (NIC). The WWN is a hard-coded, 16-character hexadecimal address that's unique to--and appears on--each HBA, just as the MAC address uniquely identifies each NIC.
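To make the comparison concrete, the snippet below formats the two identifiers side by side; the sample values are invented purely for illustration:

# A WWN is a 64-bit (16 hex character) identifier burned into each HBA, while a
# MAC address is a 48-bit (12 hex character) identifier burned into each NIC.
# The sample values below are invented purely for illustration.

def colon_format(hex_digits):
    return ":".join(hex_digits[i:i + 2] for i in range(0, len(hex_digits), 2))

wwn = "21000000871A2B3C"   # hypothetical WWN, 16 hex characters
mac = "00A0C9123456"       # hypothetical MAC, 12 hex characters

print("WWN:", colon_format(wwn))   # 21:00:00:00:87:1A:2B:3C
print("MAC:", colon_format(mac))   # 00:A0:C9:12:34:56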

So why do you hear so much about WWNs and so little about MACs? Simply because SAN technology hasn't yet matured to the point that LAN/WAN technology has. In the LAN/WAN space, network protocols--specifically TCP/IP--overlaid and virtualized the embedded MAC address to allow routing and a more intelligent, manageable network design. The MAC address still exists, but TCP/IP masks the underlying hard-coded address and makes it transparent to the end user. While this sort of technology may emerge in the SAN space, it hasn't yet. So the WWN term--when used--still does more to confuse the topic of SANs than clarify it.

Choices
So, what's the best HBA for your environment? The first key area of functionality where HBAs diverge is in their current ability to provide dynamic load balancing across multiple HBAs. One of the key areas of concern within SANs is high availability. To achieve high-availability requirements, redundant paths to the storage are deployed using either two HBAs or two HBA connections in a server.

Until recently, the only way to dynamically load balance the storage traffic across HBAs was to use a proprietary HBA driver supplied by the storage array vendor. While this isn't a problem if the server uses storage from only one array vendor, it becomes an issue if the server accesses storage arrays from different vendors concurrently.

 

How HBA products compare
Companies compared: ATTO, Emulex, JNI, LSI Logic and QLogic.
Features compared: 1Gb and 2Gb speeds; dual-port and quad-port cards; SBUS, PCI (32- or 64-bit) and PCI-X architectures; context switching; Active-Active and Active-Passive dynamic pathing; copper and fiber optic connectivity; failover software; and support for the switched fabric, arbitrated loop and point-to-point protocols.
Maximum 2Kb buffer credits: ATTO 4, Emulex 64, JNI 16, LSI Logic 16, QLogic 32.
*QLogic offers Active-Active, but not by default.


Load balancing 
There's one critical question regarding HBA load-balancing capabilities: Is the dynamic load-balancing driver software designed to support an Active-Active or an Active-Passive configuration? In an Active-Active configuration, the I/O traffic is shared equally down both paths into the SAN. In an Active-Passive configuration, one card--the active card--carries the brunt of the I/O traffic to the SAN, while the passive card waits for either busy times or the other card to fail before it carries traffic. Although not every HBA vendor currently offers this feature, Emulex, JNI and LSI Logic offer this functionality in an Active-Active configuration. Emulex released this functionality in late October 2002 in its MultiPulse technology. In addition to load balancing, MultiPulse also offers instantaneous rerouting of traffic around a failed element and protection of application availability. JNI offers this function with its FibreStar and Zentai lines of HBAs, and LSI Logic's driver has the unique ability to actively load balance across as many of its cards as the host system can hold, without restrictions.
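As a rough illustration of the difference between the two policies, here's a conceptual sketch; the path names and class are invented for illustration and don't reflect any vendor's driver:

# Conceptual sketch of the two multipathing policies -- not any vendor's actual
# driver logic. Active-Active spreads I/O across every healthy path; Active-
# Passive sends everything down one path and falls back to the standby on failure.
from itertools import cycle

class PathSelector:
    def __init__(self, paths, mode="active-active"):
        self.paths = list(paths)
        self.mode = mode
        self.failed = set()
        self._round_robin = cycle(self.paths)

    def next_path(self):
        healthy = [p for p in self.paths if p not in self.failed]
        if not healthy:
            raise RuntimeError("no healthy paths to storage")
        if self.mode == "active-active":
            while True:                      # round-robin, skipping failed paths
                path = next(self._round_robin)
                if path not in self.failed:
                    return path
        return healthy[0]                    # active-passive: first healthy path wins

selector = PathSelector(["hba0", "hba1"], mode="active-passive")
print(selector.next_path())    # hba0 carries the traffic
selector.failed.add("hba0")
print(selector.next_path())    # hba1, the passive path, takes over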

Atto Technology Inc., Amherst, NY, plans to offer this feature in Q1 of 2003, though it's still debating between an Active-Passive and an Active-Active deployment. QLogic installs the Active-Passive option by default, though an Active-Active driver is available as an optional install.

The result of deploying this technology from the HBA vendor is that it may help reduce or eliminate your reliance on a single storage array vendor and its proprietary driver to deploy a dynamic load-balancing solution on the SAN.

Yet deploying the HBA vendor's load-balancing solution has at least two downsides. One, it may result in some loss of functionality that a storage array vendor's load-balancing driver provides, such as Oracle database awareness on the storage array. The other is the lack of standards.

Another feature closely associated with dynamic load balancing is failover functionality. It differs slightly from dynamic load balancing: if one HBA or its path to the storage fails, the other HBA transparently takes over without interrupting the application. This functionality is a requirement in any high-availability environment and should be considered a prerequisite to deploying any SAN solution there. Again, when choosing an HBA for this functionality, exercise some judgment.
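In driver terms, transparent failover amounts to catching an I/O error on one path and reissuing the request on the surviving path before the application ever notices. A minimal sketch of that retry loop (purely illustrative, not any vendor's driver):

# Purely illustrative sketch of transparent failover: an I/O that errors out on
# one path is silently reissued on the surviving path, so the application never
# sees the failure. Real multipath drivers do this below the file system.

class PathDownError(Exception):
    pass

FAILED_PATHS = {"hba0"}   # simulate a failed primary path

def issue_io(path, block):
    # Stand-in for sending an I/O down a specific HBA path.
    if path in FAILED_PATHS:
        raise PathDownError(path)
    return f"block {block} written via {path}"

def write_with_failover(paths, block):
    for path in paths:
        try:
            return issue_io(path, block)
        except PathDownError:
            continue              # fail over to the next path transparently
    raise IOError("all paths to storage have failed")

print(write_with_failover(["hba0", "hba1"], 42))   # served by hba1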

Currently, most HBA vendors--including Emulex, JNI, LSI Logic and QLogic--offer this functionality in their driver software for some of their HBA product lines. Atto Technology will begin to offer it in early 2003. However, not all vendors necessarily offer this for all OS platforms.

Veritas probably has the most hardware- and HBA-vendor-neutral solution with its Dynamic Multipathing (DMP) software. It works on most major operating systems, with any storage array and with a combination of different vendors' HBAs. However, in order to use this piece of software, you need other Veritas software running on that server, which may diminish some of its appeal.

On the OS side of the house, both Novell and Microsoft have a failover driver for the current releases of their respective platforms. The major Unix variants--Solaris, AIX, Linux and HP-UX, to name a few--vary in their ability to offer a native failover driver, so administrators need to check their version of Unix if they want to use this feature natively in the OS. On the hardware side, most of the major storage vendors--EMC, Hitachi Data Systems, Hewlett-Packard, IBM and Xiotech--also ship HBA drivers that provide this failover capability, but the availability of these drivers varies by OS, and they only work when connecting to that vendor's storage.

In analyzing all of the failover and dynamic load-balancing options, the storage hardware vendors seem to have more robust solutions than those provided by either the software or HBA vendors. The hardware vendors frequently offer dynamic load balancing as well as failover, plus some other nice extras for troubleshooting SAN and application performance problems.

Right now, no single source exists for a driver that provides failover capability regardless of the OS, storage or HBA. You'll need a good understanding of the environment you're deploying into so you can select the HBA driver that delivers the functionality you want.

The ABCs of HBAs
A is for Adapter
Fibre Channel (FC) HBAs support both 1Gb and 2Gb speeds. The main bus architecture today is PCI, although it appears PCI-X will replace it in 2003, while SBUS is history. Almost without exception, all HBAs shipping today support the main FC protocols. The HBA naming convention of using World Wide Names (WWNs) is usually misunderstood at first, until one realizes that a WWN is simply the unique identifier of the HBA. And when it comes to connecting to a SAN, HBAs may use either copper or fiber optic cabling.

B is for Bias
HBA vendors try to differentiate their products mainly by features and price. For example, Atto Technology Inc.'s HBAs generally lack cutting edge features, but cost less. Emulex places a high emphasis on HBAs that have features supporting the increasingly higher speeds and distances SANs cover. JNI places more emphasis on intelligent switching within the HBAs, while LSI Logic touts interrupt coalescing, which allows you to tune for system/application performance while lowering server CPU requirements. Meanwhile, QLogic has built more functionality into their HBAs to better manage the new protocols that already run on FC such as FC Tape and virtualization.

C is for Conceptual
Some new technologies are emerging within the SAN space while others are fading into the background. InfiniBand is one technology that glitters but is no longer gold in the eyes of the vendors. WWN spoofing appears headed for prime time in 2003, which will likely prompt the rise of more security in SANs as well. Adding standards-based SAN management capabilities also hits close to home with all of the vendors: Their customers are no longer just asking for this functionality, but are in some cases demanding compliance with emerging open standards. Yet without exception, every vendor sees this as a positive, because they view it as opening more markets for their products.

Data buffers
Another feature that's built into all HBAs but at different capacities is data buffers and the related buffer credits. Emulex believes the importance of data buffers has risen for three reasons: The speeds of the HBAs have increased, the distance data must travel on SANs has increased and more environments have more heavily loaded PCI buses. Emulex says that without a large data buffer, a heavily loaded PCI bus could force the FC link to stop delivering data until the congestion clears.

Since one of the driving reasons for FC SANs was FC's ability to handle large block transfers, factors such as distance and speed increases both need to be weighed if the HBA is deployed into such an environment. Here's where Emulex claims it has an edge over its competitors. It has built a buffered architecture into its cards that supports up to 64 buffer credits of 2Kb each--significantly more than a vendor such as LSI Logic, whose PCI-X chips contain only 16 2Kb frame buffers.

As SANs move to 2Gb/s, Emulex says twice as many buffer credits are needed to achieve the same link utilization, since the data transmit time is cut in half. The company believes this will only be amplified further as FC speeds move to 4Gb and then 10Gb. In long-distance SANs, higher buffer credits are also needed to accommodate the longer latency between the time data is sent and received. In these circumstances, buffer credits may become a deciding factor in the selection of one HBA over another.
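A rough way to see why distance and speed drive buffer-credit needs is to compare the time to transmit one frame against the link's round-trip delay. The sketch below is a ballpark estimate under stated assumptions, not a vendor sizing tool:

# Back-of-the-envelope estimate of the buffer credits needed to keep a
# long-distance FC link busy. Assumptions: full 2Kb frames and roughly
# 5 microseconds of propagation delay per kilometer of fiber.
import math

def credits_needed(distance_km, link_mb_per_s, frame_kb=2.0):
    frame_time_us = (frame_kb * 1024) / link_mb_per_s   # MB/s == bytes per µs, so this is µs per frame
    round_trip_us = 2 * distance_km * 5.0                # ~5 µs/km each way in fiber
    return max(1, math.ceil(round_trip_us / frame_time_us))

for speed, label in ((100, "1Gb"), (200, "2Gb")):
    for km in (10, 50):
        print(f"{label} FC over {km:3d} km: ~{credits_needed(km, speed)} buffer credits")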

Context switching
JNI is the primary vendor promoting context switching. It defines context as "the processing state and data memory structure used to execute a SCSI or FC protocol command." The FC frame header carries information about the frame and its contents, and this becomes important if the HBA is to intelligently recognize the header information and act on it.

According to JNI, you don't have a SAN-ready adapter if it can't perform this function. JNI defines a SAN-ready adapter as one that can route an FC header within five to 10 microseconds. To route the header in that time frame, the HBA actually has to act on the FC frame before it has received the entire frame off of the FC link. For an HBA to do this, it must intelligently recognize when a context switch takes place.

Emulex and LSI obviously see merit in this approach as their cards also utilize this technology. LSI's chips optimize inbound frame data flow by doing context switching in the hardware. Their firmware completes processing of the current inbound frame while the hardware reads the FC frame header and establishes the context for the next inbound frame. Emulex believes they have an advantage here because they feature the largest hardware context cache in the industry enabling the processing of up to 2,048 simultaneous I/Os immediately. By using their onboard HBA cache as opposed to the server cache, their HBAs minimize server interrupts, CPU utilization and PCI bandwidth consumption.
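For a sense of what "establishing context from the frame header" involves, the sketch below pulls the routing-relevant fields out of the 24-byte FC frame header. It's a simplified parse for illustration, not anyone's firmware:

# Simplified parse of the 24-byte FC frame header -- an illustration of the
# fields an HBA can inspect to establish context (which exchange, which source
# and destination) before the frame's payload has arrived. This is a sketch of
# the header layout, not HBA firmware.
import struct

def parse_fc_header(hdr):
    assert len(hdr) == 24, "an FC frame header is 24 bytes"
    return {
        "r_ctl": hdr[0],                              # routing control
        "d_id": hdr[1:4].hex(),                       # destination port ID
        "s_id": hdr[5:8].hex(),                       # source port ID
        "type": hdr[8],                               # 0x08 = SCSI-FCP
        "ox_id": struct.unpack(">H", hdr[16:18])[0],  # originator exchange ID
        "rx_id": struct.unpack(">H", hdr[18:20])[0],  # responder exchange ID
    }

# A made-up header: R_CTL 0x01 (solicited data) for exchange 0x1234.
sample = bytes([0x01, 0x01, 0x02, 0x03,    # R_CTL + D_ID
                0x00, 0x04, 0x05, 0x06,    # CS_CTL + S_ID
                0x08, 0x00, 0x00, 0x00,    # TYPE + F_CTL
                0x00, 0x00, 0x00, 0x01,    # SEQ_ID, DF_CTL, SEQ_CNT
                0x12, 0x34, 0xFF, 0xFF,    # OX_ID, RX_ID
                0x00, 0x00, 0x00, 0x00])   # parameter
print(parse_fc_header(sample))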

Future directions
What features can customers expect HBA vendors to ship next? At Storage Networking World in Orlando, FL, Brocade and Emulex announced they would be working together to deliver centralized SAN management and enhanced security by extending the intelligence in the SAN fabric to include both the switching infrastructure and the HBAs. The two companies also plan to integrate joint functionality into the switch and HBA and architect and implement the Fibre Channel Authentication Protocol (FCAP) to include HBAs. In addition, third-party software developers will be able to write to the switch and HBA through the Brocade Fabric Access API, instead of developing for the HBAs and switches separately.

QLogic says it will have multiple announcements about HBAs and virtualization as well. InfiniBand connectivity has been relegated to the back burner for now--QLogic said other (albeit unnamed) projects have been put in front of InfiniBand.

As opposed to InfiniBand, a hot topic for HBA vendors is the spoofing of World Wide Names (WWNs), which is similar to assigning a network or IP address to a NIC. QLogic already provides this spoofing functionality on a custom basis and anticipates spoofing becoming a routine part of its standard driver package sometime in 2003. Atto would only say it was working with OEMs on this technology, and LSI Logic said that if it did deploy this technology, it would be at the chip firmware level.

While WWN spoofing has management and routing benefits, it also opens up a Pandora's box on the SAN, similar to what has occurred in the IP networking world. Since WWNs would be assigned with human intervention, a logical question to ask is: What happens if a server connects to a SAN using the same WWN as another server and, in so doing, gains access to that server's data? Depending on how your SAN is set up and what security measures are deployed, the results could range from nothing to lost data.
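One way to picture the exposure: if zoning or LUN masking keys off the WWN alone, a duplicate WWN silently inherits another server's access. The sketch below simply checks a table of logged-in WWNs for duplicates; the names and values are invented:

# Illustrative check for duplicate WWNs among hosts logged into a fabric. If
# zoning or LUN masking keys on the WWN alone, a duplicate -- whether spoofed or
# simply mis-assigned -- inherits the original host's access. Sample values are invented.
from collections import Counter

logins = {
    "server-a": "21:00:00:00:87:1a:2b:3c",
    "server-b": "21:00:00:00:87:1a:2b:3c",   # same WWN as server-a
    "server-c": "21:00:00:00:87:99:88:77",
}

for wwn, count in Counter(logins.values()).items():
    if count > 1:
        owners = [host for host, w in logins.items() if w == wwn]
        print(f"WARNING: WWN {wwn} presented by multiple hosts: {', '.join(owners)}")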

When the vendors were asked about data encryption and what security is being built into the SAN to prevent this from happening, they all essentially pleaded the industry's version of the Fifth, saying they're bound by nondisclosure agreements. Atto Technology said it views security as very important and is trying to sort out what security should reside on the HBA. QLogic's and LSI Logic's answers were equally vague, almost echoing Atto's words. All vendors except Emulex did cite security as important and indicated they saw it as a coming feature, though QLogic primarily envisioned it as a feature of its iSCSI cards.

A role HBA vendors were more willing to discuss was their part in SAN management. The major open initiatives are SNIA's Common Information Model (CIM) work, which is heavily sponsored by Sun; Microsoft's Web-Based Enterprise Management (WBEM); and the Bluefin initiative being heavily driven by EMC and HP. All of the HBA vendors contacted support and participate in these forums. But more important than mere participation, the HBA vendors are responding to customer demands to deploy these standards in their products. With the next generation of SAN management and security rapidly evolving, companies can expect to find more ways to manage and improve their SANs through the HBA: this seemingly unspectacular component of the SAN.

Web Bonus:
Online resources from SearchStorage.com: "Ask the Expert: Multiple HBAs," by Chris Poelker.

This was first published in December 2002
