This article can also be found in the Premium Editorial Download "Storage magazine: The benefits of storage on demand."
Are SMB HBAs good enough?

Existing host bus adapters (HBAs) offer a number of configuration options, such as context switching, data buffers and data integrity fields. Yet some HBA vendors project that at most about 50% of users take advantage of any of these functions. Field engineers are even more pessimistic, estimating that fewer than 10% use these features.
As HBA vendors pare prices by stripping out infrequently used features, storage administrators need to ask if the next generation of less-costly HBAs will be good enough. The short answer is yes. The following questions will help you decide if these new HBAs will fit in your environment.
Is the HBA going into a Windows or Linux server? If so, an SMB HBA will likely meet the need. Exceptions include servers hosting performance-intensive applications or accessing storage at a remote site, where latency comes into play.
Does your environment require storage vendor certification for support? If yes, steer clear of SMB HBAs. Much of the cost of premium HBAs today is tied to the interoperability testing they go through.
How large will your storage area network (SAN) environment be? HBA vendors recommend deploying SMB HBAs into environments of 10 servers or fewer, but their estimates tend to be conservative. Users will likely find that these HBAs work satisfactorily in SANs with 20 or more servers.
Do you have other flavors of Unix in your environment? Environments running AIX, HP-UX, Sun Solaris or any others should avoid SMB-targeted HBAs for now, especially environments with enterprise and midsize servers.
Is it safe to buy right away? Yes. The new lower-cost HBAs will closely resemble--if not be identical to--the ones shipping now and the drivers will most likely be the same ones used in enterprises. Also, many of the hassles associated with Fibre Channel SANs and HBAs have been worked out and should be minimal in small environments.
LSI Logic's new MyStorage management software combines wizard-guided installation and path failover capabilities in one tool. The MyStorage software also lets you associate a specific storage device with a drive letter and remembers that mapping. Without persistent mapping, if storage devices are added or removed during a server reboot, an application expecting data on drive "F" may have to look on drive "E" or "G" after the reboot; remembering the mapping prevents storage devices from unexpectedly moving around within the operating system, avoiding confusion or even application downtime.
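The persistent-mapping idea is straightforward: key each drive letter to a stable device identifier (such as a LUN's worldwide name) rather than to discovery order. A minimal Python sketch, with hypothetical WWN strings and a simplified assignment policy (not MyStorage's actual logic), illustrates the bookkeeping such tools perform:

```python
# Sketch: persistent drive-letter mapping keyed to device WWNs.
# WWN values and the assignment policy are illustrative assumptions.

def assign_letters(discovered_wwns, saved_map, first_letter="E"):
    """Reuse saved letters for known devices; give new devices the next free letter."""
    mapping = {wwn: saved_map[wwn] for wwn in discovered_wwns if wwn in saved_map}
    used = set(mapping.values())
    free_letters = (chr(c) for c in range(ord(first_letter), ord("Z") + 1)
                    if chr(c) not in used)
    for wwn in discovered_wwns:
        if wwn not in mapping:
            mapping[wwn] = next(free_letters)
    return mapping

# First boot: three LUNs discovered, letters handed out in order.
boot1 = assign_letters(["wwn-a", "wwn-b", "wwn-c"], saved_map={})
# Reboot with "wwn-b" removed: "wwn-c" keeps its letter instead of shifting down.
boot2 = assign_letters(["wwn-a", "wwn-c"], saved_map=boot1)
```

Without the saved map, the third LUN would slide from "G" to "F" after the reboot, which is exactly the application-facing surprise the feature avoids.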
While LSI Logic has yet to announce an HBA targeted at the SMB market, the company plans to include all of the features found in its higher-end cards in new, lower-priced HBAs. Charles Krause, LSI Logic's HBA division director, says that most of the cost of an HBA now lies not in its components, but in certification testing and documentation maintenance. Krause says LSI Logic's new HBAs will be tested in more limited SAN environments with less complex configurations. For example, LSI Logic already offers its dual-ported FC LSI 7202P HBA for under $500 as an optional component for the Apple Power Mac G5, but qualification testing is limited to SAN components typically found in Mac OS shops.
As these new products are released, users should make sure that the HBA drivers and management software provide all of the features and functions they need. If you're looking to use two FC paths to disk, be sure the HBA driver supports your specific operating system as well as host-based LUN mapping, HBA failover and load balancing. If you want to be able to upgrade drivers and firmware remotely, make sure you understand how the different vendors accomplish these tasks, as some opt to do this over the FC network while others use the traditional IP network. (See "Are SMB HBAs good enough?")
The standardization of HBAs
Industry standards will simultaneously simplify HBA management and drive down HBA prices. The FDMI and SMI-S 1.0.2 standards are already ratified; HBAs and routers are next in line for SMI-S compliance testing, with results expected in August or September.
Most major HBA vendors adhere to the SNIA HBA API 2.0, the oldest and most mature HBA standard. This "C" language API definition permits HBA management tools deployed on individual hosts to configure any HBA that adheres to the standard. However, HBAs with this API do not speak any protocol to a central management console, nor do they respond to requests on either the FC or IP network without a third-party agent running on the server to handle the communication.
HBAs conforming to the FDMI standard permit FDMI-compliant management software to perform remote HBA driver or firmware upgrades from a central console. However, for this feature to work, all HBAs must be in the same management zone, something not feasible or desired at every shop. Users also need to exercise caution before making any HBA firmware or driver changes. Server reboots generally are required, and these changes may also negatively impact the ability of these HBAs to communicate with different storage arrays and tape drives on the FC SAN.
The SMI-S standard takes HBA management one step further, giving storage administrators the ability to configure HBAs either out of band over IP networks or in band via FC fabrics. With SMI-S compliant HBAs, administrators can adjust discovery and reporting settings, generate event logs, set attributes and perform firmware and driver upgrades, subject to the same precautions outlined for FDMI.
In anything but single-administrator shops, these three standards should be used only for discovery, reporting and event trapping. With so many variables on each server, only the administrators responsible for the individual servers should make changes to drivers or firmware. SMI-S 1.1 also adds several advanced features, such as security, performance monitoring and policy management.
Users looking at new or replacement premium HBAs for their high-end servers should plan on using 4Gb FC compatible HBAs. Frank Berry, QLogic's marketing director, forecasts that 4Gb HBAs will cost about the same as 2Gb FC HBAs and be backward compatible with 1Gb and 2Gb SAN fabrics.
Fibre Down--the term for the placement of the FC HBA on the server motherboard--will also start to appear as blade servers gain greater acceptance and are increasingly deployed. In anticipation of this shift, QLogic has moved its FC HBA chip off the HBA and onto the server motherboard. The chips already appear on motherboards used in BL Series 30 servers from Hewlett-Packard Co., X Series servers from IBM and Intel's Blade Servers.
The data integrity field (DIF) that ships as an optional component on some HBAs protects data from the time it leaves the disk platter to the time it is placed into host application memory. Enabling this feature lets a DIF-enabled HBA recognize whether a data bit has flipped from 0 to 1 over time due to degradation of the storage media, or whether bits were altered in transit. HBA vendors that provide this capability should adhere to the INCITS T10 standard, but exercise caution in implementing this feature: depending on how each vendor interpreted the T10 standard, one vendor's DIF implementation may not be able to read another vendor's DIF-protected data on the storage device.
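The T10 DIF appends an 8-byte footer to each block, including a 16-bit guard CRC computed with the T10-specified polynomial 0x8BB7. A short, self-contained sketch--not any vendor's implementation--shows how a guard check catches the kind of single-bit flip described above:

```python
def crc16_t10dif(data: bytes) -> int:
    """Bitwise CRC-16 over the data, using the T10 DIF polynomial 0x8BB7 (initial value 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

block = bytes(range(256)) * 2        # one 512-byte sector of sample data
guard = crc16_t10dif(block)          # guard tag as stored in the DIF footer

corrupted = bytearray(block)
corrupted[100] ^= 0x04               # flip a single bit, as degrading media might
detected = crc16_t10dif(bytes(corrupted)) != guard   # guard check flags the flip
```

A CRC detects every single-bit error by construction, which is why the guard field is effective against exactly this failure mode; the interoperability caveat above concerns how vendors populate and interpret the other footer fields.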
For users who experience latency in their network, larger HBA data buffers help overcome the issue. For example, Emulex's LP10000 provides 64 buffer-to-buffer credits and 256 KB of buffer memory. High-memory HBAs can initiate an FC SAN I/O request and hold the pending read or write confirmations in memory while initiating the next request. This is especially applicable to users copying data off site over long distances, who want to keep I/O moving and can tolerate waiting longer for a read/write confirmation from the storage device.
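The arithmetic behind buffer credits is simple: sustained throughput over a long link cannot exceed the credit window divided by the round-trip time. A rough worked example--the distance and frame size are assumptions for illustration, not Emulex specifications:

```python
# Back-of-envelope: buffer-to-buffer credits vs. link distance.
# Assumptions: ~2 KB of payload per FC frame, light travels ~5 us/km in fiber.

credits = 64                     # e.g., the LP10000's 64 BB credits
frame_bytes = 2 * 1024           # assumed payload per frame
distance_km = 100                # assumed remote-copy distance
rtt_s = 2 * distance_km * 5e-6   # round trip: 1 ms at 100 km

window_bytes = credits * frame_bytes   # 131,072 bytes in flight at once
max_throughput = window_bytes / rtt_s  # ~131 MB/s ceiling from credits alone

# A 2 Gb/s FC link carries roughly 200 MB/s, so at this distance the
# credit window -- not the wire speed -- becomes the limiting factor.
credit_limited = max_throughput < 200e6
```

Doubling the credits (or the frame payload) doubles the in-flight window and hence the distance at which the link, rather than the HBA, sets the ceiling.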
The other feature users may want to watch is the arrival of the new PCI-Express (PCI-E) bus architecture in servers. PCI and PCI-X buses top out at 66 MHz and 133 MHz, respectively, while PCI-E jumps the signaling rate to 2.5 GHz. PCI-E also roughly doubles the bandwidth of conventional PCI, from 132 MB/sec to 250 MB/sec--and where every device on a PCI bus shares that 132 MB/sec, each PCI-E slot receives a dedicated 250 MB/sec per lane. The initial implementations of this architecture will likely target video-intensive applications such as streaming media. For businesses doing primarily databases and file sharing, it will have little or no short-term impact.
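The per-lane figure follows directly from PCI-E signaling: 2.5 gigatransfers/sec with 8b/10b encoding yields 250 MB/sec in each direction per lane. A quick check of the numbers cited above:

```python
# PCI-Express x1 lane bandwidth from first principles.
gt_per_s = 2.5e9          # 2.5 GT/s signaling rate (the "2.5 GHz" figure)
encoding = 8 / 10         # 8b/10b: every 10 line bits carry 8 data bits
lane_MBps = gt_per_s * encoding / 8 / 1e6   # bits -> bytes -> MB/s; 250 MB/s

pci_shared_MBps = 33e6 * 4 / 1e6   # conventional PCI: 33 MHz x 32-bit, shared bus
```

The comparison also shows why "doubles the bandwidth" understates the change for multi-device systems: the PCI figure is split among every card on the bus, while the PCI-E figure is per slot.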
As FC SANs move out of data centers and into mainstream SMBs, HBA vendors are responding by incorporating the latest standards into their HBAs and making them easier to manage by working more closely with switch and storage vendors. Features like auto discovery of LUNs, installation and configuration wizards and industry standards all help to contribute to the value of the next generation of these devices.
This was first published in June 2004