This article can also be found in the Premium Editorial Download "Storage magazine: Are your data storage costs too high?"
There are plenty of reasons for storage managers to get excited about the improvements the PCI-X bus can bring to storage area networks (SANs) by enabling faster host bus adapters (HBAs). We tested three PCI-X dual-channel, 2Gb HBAs to measure their performance, compare their management characteristics and understand how the HBAs would react when bus conditions weren't optimal. Our results show that your choice of board and the way you configure the server it goes into will both have a significant impact on your results.
The PCI-X bus provides speed, and SAN speed is critical. The data delivery chain starts at an application, and data is then delivered through the operating system to system hardware. From there, the highest-speed dedicated SANs move their data over Fibre Channel (FC) to drives in various configurations. A traditional gating factor has been the speed of the host bus and its transfer characteristics.
The PCI-X bus takes the PCI bus from 32 to 64 bits wide. It was also designed to deliver 133MHz speeds, while being backwards and plug compatible with the older PCI bus speeds of 33MHz and 66MHz. So the current maximum throughput is 1GB/s, but there's a gotcha. The PCI-X bus is meant to be backward compatible with PCI adapters, but not at maximum performance levels.
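The peak figures above are simple arithmetic: bus width in bytes times clock rate, assuming one transfer per clock. A minimal sketch (illustrative only; the function name is ours, not from any vendor tool):

```python
# Illustrative arithmetic only: theoretical peak throughput of a
# PCI/PCI-X bus is width (in bytes) times clock rate, assuming one
# transfer per clock cycle. Real-world throughput is lower.
def peak_throughput_mb_s(width_bits, clock_mhz):
    """Theoretical peak transfer rate in MB/s."""
    return (width_bits / 8) * clock_mhz

print(peak_throughput_mb_s(32, 33))    # classic PCI: 132.0 MB/s
print(peak_throughput_mb_s(64, 66))    # 64-bit/66MHz PCI: 528.0 MB/s
print(peak_throughput_mb_s(64, 133))   # PCI-X: 1064.0 MB/s, about 1GB/s
```

This is why the 64-bit, 133MHz PCI-X bus tops out at roughly 1GB/s, eight times the classic 32-bit, 33MHz PCI figure.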
We tested each board to see how far the bus would fall back in speed by introducing a PCI bus HBA into the PCI-X-based test bed. In all cases, the message is clear: You must use all PCI-X cards in a PCI-X bus system--unless a vendor supports totally separate buses--or any gains associated with a PCI-X HBA will be thwarted.
The PCI-X bus must slow down to match the speed of the slowest adapter connected to it. On a good day, that means a PCI-X adapter must drop to 66MHz to match a 66MHz PCI card on the same bus. An older 33MHz PCI card would slow the overall speed of the bus down still further. This behavior was the same for all three cards tested. And keep in mind that a non-HBA PCI card--such as a network interface card--will have the same effect on the PCI-X HBA.
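The fallback rule above amounts to taking the minimum across every card on the shared bus segment. A hedged sketch, not a real driver or chipset model:

```python
# Hedged sketch, not a real chipset model: a shared PCI/PCI-X bus
# segment clocks down to the slowest card installed, so one legacy
# card caps every adapter on that segment.
def effective_bus_mhz(card_speeds_mhz):
    """Effective clock for a shared bus segment: the slowest card wins."""
    return min(card_speeds_mhz)

# Two PCI-X HBAs alone run the segment at full speed...
print(effective_bus_mhz([133, 133]))       # 133
# ...but adding a 66MHz PCI NIC drags the whole segment down.
print(effective_bus_mhz([133, 133, 66]))   # 66
# An old 33MHz card is worse still.
print(effective_bus_mhz([133, 66, 33]))    # 33
```

The practical takeaway is the same as the test result: keep legacy PCI cards off the bus segment that carries your PCI-X HBAs.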
Hardware in our test environment was a Compaq DL580 (see "Lab Notes") connected through three vendors' PCI-X 2Gb/s, two-port HBAs (and an additional single-channel adapter) to a JMR Flora drive array, all tested in several configurations. We chose the HBA vendors--Emulex, Costa Mesa, CA; LSI Logic, Milpitas, CA; and JNI, San Diego, CA--based on the breadth of their inclusion in compatibility matrixes from the major switch and disk subsystem vendors. We had intended to include QLogic HBAs for the same reason, but the company declined to participate. We did, however, test a new, single-channel Emulex board as our fourth option.
We tested all three vendors' boards on Windows 2000 and SuSE Linux 8.1. Each HBA vendor also supports a matrix of other operating systems, with Solaris as the most popular "other."
Our test tool was Intel's Iometer, which is available on both Windows and Linux platforms. However, the open-source Linux version of Iometer isn't yet a 1.0 release. We were able to get highly repeatable results with the Windows version, but the Linux version didn't give us results within our desired 5% range on two of the three boards tested. We therefore performed most tests using Windows 2000 Advanced Server as our reference platform, and our comments are based on Windows 2000 use.
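The 5% repeatability criterion can be expressed as a simple check that every run falls within a percentage band of the mean. A minimal sketch (our own helper for illustration; it is not part of Iometer):

```python
# Hypothetical repeatability check, not part of Iometer: verify that
# all benchmark runs fall within a given percentage band of their mean.
def within_band(results, pct=5.0):
    """True if every result is within pct percent of the mean."""
    mean = sum(results) / len(results)
    return all(abs(r - mean) / mean * 100 <= pct for r in results)

# Three throughput runs clustered tightly: repeatable.
print(within_band([100.0, 102.0, 98.5]))   # True
# A run 9%+ off the mean fails the 5% criterion.
print(within_band([100.0, 112.0, 95.0]))   # False
```

Under a criterion like this, two of the three boards failed on the Linux version of the tool, which is why Windows 2000 became the reference platform.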
Common to all of the HBAs tested was the ability to perform many automated sensing chores. Each HBA tested also supports running IP over the adapter, but we tested only SCSI, not the IP features. We also weren't brave enough to test the HBAs' PCI-X hot-plug features.
This was first published in December 2002