Managing and protecting all enterprise data


Down-to-earth HBAs

Finally, there's an HBA for everyone. Vendors are creating HBAs that are functional and affordable, even for smaller shops.

How to better manage HBAs
With so much time spent managing host bus adapters (HBAs), here are some tips to minimize the hassles:

  • Configure it right the first time. Update the firmware and driver software as soon as you install the HBA, and set its defaults to your company's standards.
  • Use third-party storage resource management (SRM) tools for reporting. Almost every SRM tool provides basic HBA information such as driver level, firmware, manufacturer and the number of LUNs the HBA accesses. You need this information to track which HBA is doing what.
  • Start testing and using vendor-provided failover and load-balancing drivers. If you aren't already using an HBA driver provided by a storage array vendor such as EMC Corp. or IBM Corp., the HBA vendors' own drivers offer similar functionality without any storage array dependencies, and will help increase a server's availability and performance.
  • If it ain't broke, don't fix it. You should stay current with regular maintenance updates, but with so many things that can go wrong with HBAs--especially on older servers running older operating system versions--it's often better to leave things as they are than to apply every new firmware or driver update that comes out.
The system administrator executes the command, crosses his fingers and hopes for the best.

A few seconds later, the computer returns a response and the administrator relaxes. The server's host bus adapter (HBA) has discovered the newly assigned LUNs without another midnight reboot.

After years of managing HBAs with a hope and a prayer, the tasks of installing, configuring and managing HBAs are finally getting easier. And just in time. With storage area networks (SANs) moving out of Fortune 500 companies and into small and medium-sized businesses (SMBs), the traditional hassles long associated with HBAs won't fly in these smaller shops. By mimicking the same principles that drove the widespread adoption of IP network cards, HBA vendors are working hard to help HBAs shed their image of being hard to configure and manage.

Inaccessible server-based management tools are an annoyance of the past. New HBAs built upon industry standards such as the Fabric Device Management Interface (FDMI), the Storage Management Initiative Specification (SMI-S) and the Storage Networking Industry Association's (SNIA) HBA API come with better driver software and point-and-click install tools. Management software such as Emulex Corp.'s AutoPilot and LSI Logic Corp.'s MyStorage--along with light versions of Applied Micro Circuits Corp.'s (AMCC) EZ Fibre and QLogic Corp.'s SANsurfer--goes a long way toward making HBAs simpler to install, as well as easier to configure and manage. Improved HBA drivers for operating systems let administrators discover and configure new LUNs without reboots.

HBAs for SMBs
Driving the trend of easier HBA management has been the growth of SANs in SMBs. These environments require lower costs, minimal setup time and relatively simple maintenance because SMBs lack the staff to specialize in the intricacies of SAN management. Depending on which vendor's HBA gets deployed, SMBs can expect the following benefits:

  • Lower HBA prices
  • HBA drivers that provide host-based mapping, failover and load balancing
  • Wizard-based configurations
  • Central management console for driver and firmware upgrades
  • Standards compliance
  • Ability to integrate with third-party SRM tools
These savings come with tradeoffs. Users should expect less:
  • Onboard memory
  • Operating system support
  • Protocol support
  • I/Os per second (IOPS)
  • Transactions per second (TPS)
  • Interoperability testing
Emulex entered the SMB market with the release of its LightPulse LP101 HBA and its AutoPilot management software. Aimed at the Windows and Linux server markets, the LP101 will support only the SCSI protocol, about 25,000 IOPS and 100 concurrent TPS. The price for the LP101 is expected to start at under $500. Lower-priced HBAs obviously won't offer the same functionality as their higher-priced siblings. For example, Emulex's premium LP10000 supports all major operating systems and a wide range of protocols including FCP, IP and FICON, along with up to 140,000 IOPS and thousands of concurrent TPS.

Emulex will offer new management software, AutoPilot, to complement the LP101. AutoPilot functions as an installation wizard, allowing quick setup and configuration of the new HBAs. Users needing failover and dynamic load-balancing capabilities will still need to look to Emulex's MultiPulse software, which works with the company's premium HBAs and supports up to four of them.

HBAs head to head

Are SMB HBAs good enough?
Existing host bus adapters (HBAs) offer a number of configuration options such as context switching, data buffers and data integrity fields. Yet some HBA vendors estimate that at most about 50% of users take advantage of any of these functions. Field engineers are even more pessimistic, estimating that fewer than 10% use them.

As HBA vendors pare prices by stripping out infrequently used features, storage administrators need to ask if the next generation of less-costly HBAs will be good enough. The short answer is yes. The following questions will help you decide if these new HBAs will fit in your environment.

Is the HBA going into either a Windows or Linux server? If so, then an SMB HBA will likely meet the need. Exceptions may include where a server hosts a performance-intensive application or accesses storage at a remote site where latency comes into play.

Does your environment require storage vendor certification for support? If yes, steer clear of SMB HBAs. Much of the cost of premium HBAs today is tied to the interoperability testing they go through.

How large will your storage area network (SAN) environment be? While HBA vendors recommend deploying SMB HBAs into environments of 10 servers or fewer, those estimates tend to be conservative. Users will likely find that these HBAs work satisfactorily in SANs with 20 or more servers.

Do you have other flavors of Unix in your environment? Environments running AIX, HP-UX, Sun Solaris or any others should avoid SMB-targeted HBAs for now, especially environments with enterprise and midsize servers.

Is it safe to buy right away? Yes. The new lower-cost HBAs will closely resemble--if not be identical to--the ones shipping now and the drivers will most likely be the same ones used in enterprises. Also, many of the hassles associated with Fibre Channel SANs and HBAs have been worked out and should be minimal in small environments.

LSI Logic's new MyStorage management software combines wizard-guided installation and path failover capabilities in one tool. The MyStorage software also lets you associate a specific storage device with a drive letter and remembers that mapping. This feature prevents storage devices from unexpectedly moving around within the operating system, avoiding confusion or even application downtime. Without persistent mapping, if storage devices are added or removed during a server reboot, an application expecting data on drive "F" may have to look on drive "E" or "G" after the reboot.

While LSI Logic has yet to announce an HBA targeted at the SMB market, it plans to include all of the features found in its higher-end cards in new, lower-priced HBAs. Charles Krause, LSI Logic's HBA division director, says that most of the cost of HBAs now lies not in components, but in certification testing and documentation maintenance. Krause says LSI Logic's new HBAs will be tested in more limited SAN environments with less complex configurations. For example, LSI Logic already offers its dual-ported FC LSI 7202P HBA for under $500 as an optional component for the Apple Power Mac G5, but qualification testing is limited to SAN components typically found in Mac OS shops.

As these new products are released, users should make sure that the HBA drivers and management software provide all of the features and functions they need. If you're looking to use two FC paths to disk, be sure the HBA driver supports your specific operating system as well as host-based LUN mapping, HBA failover and load balancing. If you want to be able to upgrade drivers and firmware remotely, make sure you understand how the different vendors accomplish these tasks, as some opt to do this over the FC network while others use the traditional IP network. (See "Are SMB HBAs good enough?")

The standardization of HBAs
Industry standards will simultaneously simplify HBA management and drive down HBA prices. The FDMI and SMI-S 1.0.2 standards are already ratified; HBAs and routers are next in line for SMI-S compliance testing, with results expected in August or September.

Most major HBA vendors adhere to the SNIA HBA API 2.0, the oldest and most mature HBA standard. This C-language API definition permits HBA management tools deployed on individual hosts to configure any HBA that adheres to the standard. However, HBAs with this API don't communicate with a central management console over any protocol, nor do they respond to requests on either the FC or IP network, without some sort of third-party agent running on the server to do the communication.

HBAs conforming to the FDMI standard permit FDMI-compliant management software to perform remote HBA driver or firmware upgrades from a central console. However, for this feature to work, all HBAs must be in the same management zone, something not feasible or desired at every shop. Users also need to exercise caution before making any HBA firmware or driver changes. Server reboots generally are required, and these changes may also negatively impact the ability of these HBAs to communicate with different storage arrays and tape drives on the FC SAN.

The SMI-S standard takes HBA management one step further, giving storage administrators the ability to configure HBAs either out of band over IP networks or in band via FC fabrics. With SMI-S compliant HBAs, administrators can adjust discovery and reporting settings, generate event logs, set attributes and perform firmware and driver upgrades, subject to the same precautions outlined for FDMI.

In anything but single-administrator shops, these three standards should be used only for discovery, reporting and trapping events. With so many variables on each server, only the administrators responsible for individual servers should make changes to drivers or firmware. SMI-S 1.1 also includes several advanced features such as security, performance monitoring and policy management.

Premium-priced HBAs
Users looking at new or replacement premium HBAs for their high-end servers should plan on using 4Gb FC-compatible HBAs. Frank Berry, QLogic's marketing director, forecasts that 4Gb HBAs will cost about the same as 2Gb FC HBAs and be backward compatible with 1Gb and 2Gb SAN fabrics.

Fibre Down--the term for placing the FC HBA function on the server motherboard--will also start to appear as blade servers gain greater acceptance and are increasingly deployed. In anticipation of this shift, QLogic has moved its FC HBA chip off the HBA card and onto the server motherboard. The chips already appear on motherboards used in BL Series 30 servers from Hewlett-Packard Co., X Series servers from IBM and Intel's blade servers.

The data integrity field (DIF) that ships as an optional component on some HBAs ensures that data is protected from the time it leaves the disk platter to the time it's placed into host application memory. Enabling this feature permits a DIF-enabled HBA to recognize whether a data bit has changed from a 0 to a 1 due to degradation of the storage media, or whether one or more bits were altered in transit. HBA vendors that provide this capability should adhere to the INCITS T10 standard, but exercise caution in implementing this feature: depending on how each vendor interpreted the T10 standard, one vendor's DIF implementation may not be able to read another vendor's DIF-protected data on the storage device.

Larger HBA data buffers help users who experience latency in their networks. For example, Emulex's LP10000 provides 64 buffer-to-buffer credits and 256 KB of buffer memory. A high-memory HBA can initiate an FC SAN I/O request and hold the pending read or write confirmations in memory while initiating the next request. This becomes especially applicable to users copying data off site over long distances who want to keep I/O moving and can wait longer for a read/write confirmation from the storage device.

The other development users may want to watch is the introduction of the new PCI-Express (PCI-E) bus architecture into servers. PCI and PCI-X buses top out at 66 MHz and 133 MHz, respectively, while PCI-E jumps signaling speeds up to 2.5 GHz. PCI-E also nearly doubles the per-slot bandwidth of conventional PCI, from 132 MB/sec to 250 MB/sec. Each slot on a PCI-E bus receives a full 250 MB/sec of bandwidth for itself, unlike parallel PCI buses, where all slots share the bandwidth. The initial implementations of this architecture will likely be for video-intensive applications such as streaming media; for businesses running primarily databases and file sharing, it will have little or no short-term impact.

As FC SANs move out of data centers and into mainstream SMBs, HBA vendors are responding by incorporating the latest standards into their HBAs and making them easier to manage by working more closely with switch and storage vendors. Features like auto discovery of LUNs, installation and configuration wizards and industry standards all help to contribute to the value of the next generation of these devices.
