QLogic Corp. today announced a new host bus adapter (HBA) technology designed to improve the management and sharing of server-side flash caching resources across connected servers. QLogic's Mt. Rainier HBA looks like a typical adapter to the server operating system, but it combines the HBA driver, solid-state disk (SSD) driver and filter driver on one card.
Mt. Rainier puts caching and SSD data management on the server card. QLogic expects the adapter to begin shipping in early 2013, according to the company's VP of corporate marketing, Chris Humphrey.
Humphrey said legacy storage systems can't keep up with multi-core servers running numerous virtual machines and applications. This widening latency rift is known as the I/O performance gap. "If your applications run slow, you're losing money," he said. "You really want to make those applications run faster and do more work."
Several vendors, most notably Fusion-io, extend the storage system by placing SSDs or solid-state PCI Express (PCIe) cards directly into servers to improve I/O performance. Humphrey maintains that limiting expensive solid-state storage to individual servers eliminates many of the benefits of a shared infrastructure, especially resource and capacity management.
According to Andrew Reichman, a principal analyst at Forrester Research, those are the downsides of server-side flash caching technology. "You can get [SSD technology into the server], and you get a very fast boost in performance, but it's hard to use that capacity efficiently, have good management tools and integrate it with your storage management framework," he said.
Reichman added that the Mt. Rainier project is a hybrid approach to narrowing the I/O performance gap. "You get the performance benefits of the SSD capacity on the server, but you get the global management benefits of having your HBA and storage network management tools be able to see that SSD and be able to control it."
The Mt. Rainier HBA can be attached to a server-side PCIe SSD card with a PCIe cable between the cards, to industry standard 2.5-inch SSD drives using a SAS I/O daughter card, or in a single PCIe-slot configuration connected to a daughter card that has integrated SSD flash.
"Everything remains transparent -- the same management software, the same applications and the same multipathing software that you have in your server," Humphrey said. "Since it sees the Mt. Rainier card as a standard adapter, you don't have to modify anything."
The first Mt. Rainier cards will be for Fibre Channel (FC) SANs, but Humphrey said a 10 Gigabit Ethernet card is on the roadmap.
Mt. Rainier cards will be able to see and communicate with each other on a connected network, allowing for the creation of mirrored caches, as well as shared caches and data pools across servers.
The shared storage pools can then be carved up into smaller LUNs. The HBAs, however, cannot stripe data across multiple server-side flash resources within connected servers.
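To make the pooling model concrete, here is a minimal sketch, in Python, of a shared flash pool being carved into smaller LUNs as described above. All class and method names are hypothetical illustrations; they do not represent QLogic's actual firmware or management API.

```python
# Toy model: two servers' flash contributions form one shared pool,
# which is then carved into smaller LUNs. Names are hypothetical and
# purely illustrative of the concept, not QLogic's implementation.

class FlashPool:
    """A shared pool of flash capacity (in GB) from connected adapters."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.luns = {}  # LUN name -> allocated size in GB

    def free_gb(self):
        """Capacity not yet allocated to any LUN."""
        return self.capacity_gb - sum(self.luns.values())

    def carve_lun(self, name, size_gb):
        """Allocate a smaller LUN out of the shared pool."""
        if size_gb > self.free_gb():
            raise ValueError("not enough free capacity in pool")
        self.luns[name] = size_gb


# Two servers each contribute 200 GB of local flash to the pool.
pool = FlashPool(capacity_gb=400)
pool.carve_lun("db-cache", 150)
pool.carve_lun("vm-cache", 100)
print(pool.free_gb())  # → 150
```

Note that in this model each LUN is a simple allocation against the pool's total; consistent with the article, nothing here stripes a single LUN's data across the flash in multiple servers.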
Forrester's Reichman said the Mt. Rainier HBA could be a big boost to QLogic's adapter card business, which remains mainly an FC business despite the vendor's addition of more protocols to its cards in recent years. "The world of HBAs and Fibre Channel switching is not that differentiated. What this could do for QLogic is make it more differentiated and have more capabilities outside of the core functionality."