What you'll learn in this tip: With a PCI Express (PCIe) solid-state drive (SSD), the storage network can be eliminated entirely in certain situations. Learn about the considerations and challenges of PCIe SSDs and whether they're right for your environment.
Solid-state storage based on NAND flash memory chips has drawn much attention in the last year, from traditional enterprise data storage vendors to less familiar names and newcomers. The message from storage vendors has been to deploy solid-state storage alongside disks in their arrays, with specialized software handling the migration of data to and from this high-performance tier. But newcomers and system vendors offer an alternative approach: solid-state storage deployed as a PCI Express card within the server itself. The PCIe approach eliminates the storage network entirely in certain situations. This tech tip focuses on the reality of the PCIe SSD market and where these devices should be deployed in today's systems.
Contrasting PCIe SSD and networked storage
Enterprise storage has evolved slowly from internal disks to direct-attached storage (DAS) RAID to networked arrays (SAN and NAS). Each step has maintained backward compatibility with what came before, allowing applications written for DAS to be deployed on SAN or NAS without major changes.
The technologies employed for block storage -- Fibre Channel (FC) and iSCSI -- rely on the same SCSI protocol and drivers as internal disk drives, but they behave very differently. Modern storage systems may use Ethernet adapters and switches, and they can communicate with geographically distant devices. Most are also virtualized, hiding a complex arrangement of caches and data movement behind the familiar disk abstraction. All of this buys flexibility, but it places an upper limit on storage performance.
PCI-based storage is entirely different. Rather than masquerading flash or DRAM memory as a SCSI-connected hard disk drive, PCI Express SSD products often use specialized drivers to communicate using direct memory access (DMA) over the PCI bus. This is game-changing in terms of I/O latency, enabling random read and write performance that's orders of magnitude faster than the quickest storage array. Although throughput is also improved thanks to the bandwidth of the PCI Express bus and memory, the expense of solid-state chips limits the amount of capacity that can be deployed.
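The latency difference is easy to demonstrate with a simple micro-benchmark. The sketch below (a rough illustration, not a rigorous test; the file size, block size, and read count are arbitrary) times random 4 KB reads against a scratch file. Note that the operating system's page cache will likely serve these reads from memory, so measuring a real device requires bypassing the cache (for example, with O_DIRECT or a dedicated tool such as fio).

```python
import os
import random
import tempfile
import time

BLOCK = 4096                  # 4 KB, a typical random-I/O transfer size
FILE_SIZE = 16 * 1024 * 1024  # 16 MB scratch file (arbitrary size)
READS = 500                   # number of random reads to time

# Create a scratch file filled with random data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
blocks = FILE_SIZE // BLOCK
start = time.perf_counter()
for _ in range(READS):
    # Seek to a random block-aligned offset and read one block.
    os.lseek(fd, random.randrange(blocks) * BLOCK, os.SEEK_SET)
    os.read(fd, BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)
os.remove(path)

avg_us = elapsed / READS * 1e6
print(f"average random {BLOCK // 1024} KB read latency: {avg_us:.1f} us")
```

On a hard disk with a cold cache, each random read costs milliseconds of seek and rotational delay; a PCIe SSD can answer the same request in tens of microseconds, which is where the orders-of-magnitude claim comes from.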
Where to deploy PCIe SSD today
Enterprise systems architects face a wide variety of challenges, with each application or component placing unique demands on data storage subsystems. Some require massive storage capacity while others must constantly move vast amounts of data. Neither of these is appropriate for PCIe SSD at this point because of the high per-GB cost of SSD and the limited connectivity of the PCI Express bus.
Instead, architects should consider deploying PCI Express SSDs in servers that demand extremely low storage latency or that run applications generating massive amounts of random read and write operations. The expense and difficulty of integrating these devices require a careful examination of the various servers that make up critical applications. Consider investing in an application performance monitoring (APM) software suite to characterize application bottlenecks and identify the optimum locations for these cards.
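Short of a full APM suite, even a crude sampling of operating system I/O counters can reveal whether a server is random-I/O bound. The Linux-only sketch below (a hypothetical stand-in for proper monitoring tools, not a substitute for them) samples /proc/diskstats twice and reports per-device read and write IOPS over the interval.

```python
import os
import time

def parse_diskstats(text):
    """Return {device: (reads_completed, writes_completed)} from /proc/diskstats text."""
    stats = {}
    for line in text.splitlines():
        f = line.split()
        if len(f) >= 11:
            # Field layout: major minor name reads ... writes ...
            stats[f[2]] = (int(f[3]), int(f[7]))
    return stats

def sample_iops(interval=1.0):
    """Estimate read/write IOPS per device over `interval` seconds (Linux only)."""
    with open("/proc/diskstats") as fh:
        before = parse_diskstats(fh.read())
    time.sleep(interval)
    with open("/proc/diskstats") as fh:
        after = parse_diskstats(fh.read())
    return {dev: ((after[dev][0] - r) / interval, (after[dev][1] - w) / interval)
            for dev, (r, w) in before.items() if dev in after}

if __name__ == "__main__" and os.path.exists("/proc/diskstats"):
    for dev, (riops, wiops) in sorted(sample_iops(0.5).items()):
        print(f"{dev}: {riops:.0f} read IOPS, {wiops:.0f} write IOPS")
```

A workload that sustains high IOPS with small transfer sizes is the profile that benefits most from a PCIe SSD; one that moves large sequential streams generally is not.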
It's too simple to say that databases are appropriate for PCIe SSDs because the performance profile of database-driven applications varies greatly. This is one product that requires a deeper knowledge of applications, so a sit-down with database and application managers is in order. Consider non-traditional applications as well: PCIe SSDs have found success in web applications and creative workstations, not just database servers.
PCI Express SSD implementation challenges, considerations
As PCI Express devices, these SSDs require an empty slot inside the server as well as an outage window for installation and maintenance. This can be problematic for mission-critical applications, but most environments will have some opportunity to schedule an installation.
Blade server users face special challenges when it comes to PCIe SSDs. Dedicated mezzanine SSDs exist for Hewlett-Packard (HP) Co.'s c-Class blade chassis, but installing such devices in other blade servers is more difficult. Many vendors sell PCI Express expansion chassis, and companies like Aprius Inc. and Xsigo Systems Inc. enable these to be shared; however, sharing reduces a PCIe SSD's performance to some extent.
These devices are also expensive, though perhaps not when compared to a high-performance enterprise storage infrastructure. Because it's a PCI Express device requiring operating system-specific drivers, a PCIe SSD can't easily be shared with other servers. Such a card will be of great benefit to the server it's installed in and to those that rely on that server's I/O processing abilities, but the investment can't be spread among a group of servers, and any excess capacity will go unused.
The future of PCIe SSD
PCIe SSD is an entirely new category of storage device, delivering unprecedented random I/O performance right inside critical servers. The rapid growth of sales at companies like Fusion-io, LSI Corp., Texas Memory Systems and others indicates that there are many buyers looking for this kind of performance.
These devices should be deployed as point solutions to specific performance demands. Use application performance monitoring software to determine whether you have an I/O bottleneck, and consult with database and application managers to decide whether a PCIe SSD is appropriate for their needs. Both the devices themselves and their use case are entirely new territory for enterprise storage managers.
BIO: Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net and on Twitter at @SFoskett.