
Flash PCIe SSD becoming shareable between physical servers

PCIe flash is great for IOPS and throughput performance gains with low latency. But until now, these drives weren't shareable by other physical servers.

Flash PCIe SSDs, as well as SAS/SATA SSDs embedded in servers, have become increasingly popular because they dramatically increase performance in both IOPS and throughput with exceptionally low latency. This has led to their inclusion in most high-performance compute clusters, and increasingly in server and desktop virtualization environments.

The fit with virtualization has been somewhat of a mixed bag. These expensive SSDs have been limited to the physical servers where they are installed. In other words, they're not shareable by other physical servers. In addition, virtual servers and virtual desktops require shared storage for much of their desirable functionality.

The industry has made several gains in solving these virtualization problems. The cost per gigabyte of both PCIe and HDD form factor SSDs has dropped significantly. Most of that decrease is attributed to the switch from SLC to MLC and the smaller die size of the NAND chips. Smaller chips also allow for greater capacities.

Solving the shared storage problem has spurred the development of caching software. The software caches active reads (although some products also cache writes) and ensures that all writes are placed on the external shared storage. This enables use of the important virtualization features while trading away only a nominal, typically unnoticeable, amount of application acceleration.
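The caching approach described above can be sketched in a few lines. This is a hypothetical illustration only -- real caching software operates at the block-driver level, and the class and dictionaries here are stand-ins for the local SSD and the external shared array:

```python
class WriteThroughCache:
    """Sketch of write-through read caching: reads are served from a fast
    local device when possible; every write goes straight to the external
    shared storage, so shared-storage features (migration, HA) keep working."""

    def __init__(self, shared_storage):
        self.shared = shared_storage   # dict standing in for the shared array
        self.cache = {}                # dict standing in for the local SSD

    def read(self, block):
        if block in self.cache:        # cache hit: served at SSD latency
            return self.cache[block]
        data = self.shared[block]      # cache miss: fetch from shared storage
        self.cache[block] = data       # populate the cache for future reads
        return data

    def write(self, block, data):
        self.shared[block] = data      # write-through: shared copy stays authoritative
        self.cache[block] = data       # keep the cache coherent with the write
```

Because the shared array always holds the latest writes, a virtual machine can move to another physical server without any data being stranded in a local cache -- which is exactly why write-through caching preserves the virtualization features the article mentions.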

Still unsolved, however, was the ability to share those server-embedded SSDs among multiple servers. That solution has now appeared. There are, in fact, two of them: QLogic Mt. Rainier and Sanbolic Melio 3.5.

QLogic Mt. Rainier

Mt. Rainier is a QLogic Fibre Channel host bus adapter card that has three different initial configurations (note: FCoE and iSCSI Ethernet Mt. Rainier adapter variations are planned):

  1. FC adapter with caching software plus custom separate SSD PCIe card for 25 W PCIe cards connected with a PCIe cable;
  2. FC adapter with the caching software, plus a SAS I/O daughter card that connects to standard HDD form factor SSDs embedded within the server; and
  3. FC adapter card with caching software plus a custom SSD daughter card that can be used for 50 W PCIe cards (only available to server OEMs).

Each of these configurations provides a multiserver, SSD-transparent, OS-independent shared block-storage cache. Clustered servers using Mt. Rainier can create a single pool of high-performance SSD storage that looks, feels and is managed like FC shared SAN storage. When Mt. Rainier is used as a cache, it can be mirrored between two Mt. Rainier server adapter cards in the cluster for high availability. That mirroring also enables greater performance through write-back caching.
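The write-back-with-mirroring idea can be sketched as follows. This is an illustrative model, not QLogic's implementation; the class names and the `flush` destaging step are assumptions made for the example:

```python
class MirroredWriteBackCache:
    """Sketch of write-back caching with an HA mirror: a write is
    acknowledged once it lands in the local cache and its mirror peer,
    then destaged to shared storage later."""

    def __init__(self, shared_storage, mirror=None):
        self.shared = shared_storage   # dict standing in for the shared array
        self.cache = {}                # dict standing in for the local SSD
        self.dirty = set()             # blocks not yet destaged to shared storage
        self.mirror = mirror           # peer cache on the second adapter card

    def write(self, block, data):
        self.cache[block] = data
        self.dirty.add(block)
        if self.mirror is not None:    # mirror before acking, so one
            self.mirror.cache[block] = data   # cache failure loses no data
            self.mirror.dirty.add(block)
        # the write is acknowledged here, at SSD latency, not array latency

    def read(self, block):
        if block in self.cache:
            return self.cache[block]
        return self.shared[block]

    def flush(self):
        for block in self.dirty:       # destage dirty blocks to shared storage
            self.shared[block] = self.cache[block]
        self.dirty.clear()
        if self.mirror is not None:
            self.mirror.dirty.clear()
```

The design tradeoff this illustrates: write-back acknowledges writes at local SSD speed, but dirty data exists only in cache until destaged -- which is why mirroring between two adapters is what makes write-back safe enough to use.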

In a nutshell, QLogic's Mt. Rainier resolves the problem of multiserver, embedded-SSD sharing, especially in a virtualized ecosystem.

However, there are significant tradeoffs.

Mt. Rainier cannot be used with any other vendor's PCIe SSD. PCIe SSD cards from Fusion-io, Virident, Micron, Samsung, OCZ, LSI, EMC and others will not work with Mt. Rainier. It can be used with other vendors' SAS or SATA SSDs, just not their PCIe versions. Today, Mt. Rainier adapters are Fibre Channel and block storage only, with iSCSI and FCoE promised for sometime in the future.

There is no promise of Mt. Rainier working over InfiniBand or in a file storage variation. General availability is scheduled for early 2013.

Sanbolic Melio

Melio is Sanbolic's distributed clustered file system. It is a pure software play that replaces the Windows -- and soon Linux -- file systems. Melio virtualizes the PCIe and/or SAS/SATA SSDs embedded in the servers. It does this by providing a software-defined storage virtualization layer as part of the hypervisor.

Each PCIe and SAS/SATA SSD in all of the Melio file system servers is pooled. Melio makes them look, feel and operate like a single shared file storage system. It works over fourteen data rate (FDR, 56 Gbps), quad data rate (QDR, 40 Gbps) and double data rate (DDR, 20 Gbps) InfiniBand; high-performance (10 Gbps or 40 Gbps) Converged Enhanced Ethernet (CEE); and even standard Ethernet. It virtualizes and pools any and all PCIe and SAS/SATA SSDs, as well as SAS/SATA HDDs.

Sanbolic also resolves multiserver, embedded-SSD sharing in a virtualized, nonvirtualized or mixed ecosystem.

But Melio, too, has tradeoffs. Windows (both physical and virtual machines) is the only OS currently supported, although Linux support is imminent. Melio is both a file and block storage solution, and it is generally available today.

How to choose between Mt. Rainier and Melio

If the SSD sharing will primarily be over Fibre Channel, no PCIe SSDs have yet been purchased, or the server-embedded SSDs are largely HDD form factor with SAS or SATA interfaces, then Mt. Rainier is a very good choice. But if the shared-storage network is InfiniBand, Ethernet or converged Ethernet, then Melio is the technology of choice. Melio also becomes the default choice if the servers already have non-QLogic PCIe SSDs that need to be shared, or if any file sharing on the SSDs is required.

About the author:
Marc Staimer is the founder, senior analyst and CDS of Dragon Slayer Consulting in Beaverton, Ore. Marc can be reached at
