| Vendors are starting to add solid-state storage to their arrays, but it isn't as simple as replacing enterprise drives with solid-state memory.
EMC Corp.'s recent addition of solid-state disk (SSD) support to its Symmetrix product line is a clear sign that solid-state storage is transitioning from an exotic specialty item to a mainstream feature. Yet most enterprise array vendors aren't shipping solid-state storage products, which indicates that adding an SSD-based Tier 0 to an existing array isn't as simple as replacing traditional hard disks with solid-state drives. And while pricing for solid-state memory continues to fall, enterprise NAND flash still costs much more than the fastest Fibre Channel (FC) enterprise disks.
Why the increased interest in solid state by array vendors? Past solid-state offerings were mostly DRAM based, but DRAM's high price, limited capacity and volatility confined those systems to niche deployments. Until a few years ago, the same was true of slower flash memory.
Also fueling the popularity of NAND flash memory are lower prices for the slower, less-reliable, higher capacity consumer-level multilevel cell (MLC) flash, as well as for the faster, lower capacity single-level cell (SLC) NAND flash typically used in enterprise storage systems. Although enterprise-level, flash-based SSDs are still approximately 20 times more expensive than comparably sized high-end FC drives, the price of NAND flash is falling on a significantly steeper ramp than that of high-end disk drives. From a dollar-per-I/O perspective, solid-state drives are already more cost-effective than disk drives, where equivalent performance can only be achieved with a very large number of spindles. "Within the next three to five years, solid-state storage will be a standard feature for most business implementations," says Mark Peters, an analyst at Enterprise Strategy Group (ESG), Milford, MA.
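The dollar-per-I/O argument can be made concrete with a back-of-the-envelope calculation. All prices and IOPS figures below are illustrative assumptions, not vendor pricing; only the roughly 20x price premium comes from the article.

```python
# Back-of-the-envelope $/IOPS comparison (illustrative figures, not
# vendor pricing): even at ~20x the purchase price, an enterprise SSD
# can cost less per I/O than a high-end FC spindle.
FC_DRIVE_COST = 1_000     # assumed cost of one high-end FC drive, USD
FC_DRIVE_IOPS = 300       # assumed random IOPS for a 15K rpm FC spindle
SSD_COST = 20_000         # assumed cost of a comparably sized SLC SSD (~20x)
SSD_IOPS = 50_000         # assumed random read IOPS for an enterprise SSD

fc_cost_per_iops = FC_DRIVE_COST / FC_DRIVE_IOPS
ssd_cost_per_iops = SSD_COST / SSD_IOPS

print(f"FC disk: ${fc_cost_per_iops:.2f} per IOPS")
print(f"SSD:     ${ssd_cost_per_iops:.2f} per IOPS")
```

Under these assumptions the SSD comes in well under a dollar per IOPS while the FC spindle costs several dollars per IOPS, which is the economics behind Peters' prediction.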
The use of Tier 0
Mission-critical apps like transaction processing and database systems linked directly to a firm's success are prime examples of where solid-state storage is used today. Exchange Server databases and SQL databases are other candidates for solid-state storage. "To increase both performance and the number of users per Exchange server, we're currently evaluating solid-state drives in our 10 Gigabit Ethernet Nimbus [Data Systems Inc.] array," says Aaron Martin, IT manager at Loro Piana, a luxury goods manufacturer in New York City. Martin also plans to host virtual server boot images and virtual server working directories on solid-state storage.
As SSD and hard disk prices converge, the need for high-end disk drives will become increasingly questionable. The size of a solid-state Tier 0 is likely to grow and the high-end disk drive Tier 1 is likely to shrink reciprocally; it's very possible that solid-state drives will eventually replace high-end disk drives. "Storage systems will consist of two main tiers: a high-performance solid-state tier and a large capacity, lower performance, low-cost SATA or SATA equivalent tier," says Rick Gillett, VP of data systems architecture at F5 Networks Inc.
Solid-state storage also consumes significantly less power for the same number of operations. "Generally speaking, solid-state storage can perform about 1,000 operations per watt compared to five operations per watt in high-end disk drives," says Greg Schulz, founder and senior analyst at StorageIO Group, Stillwater, MN.
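Schulz's figures translate into a dramatic difference in power draw for a fixed workload. A quick sketch, using the quoted ops-per-watt numbers and a hypothetical sustained workload of 100,000 operations per second:

```python
# Power needed to sustain a fixed workload, using the ops-per-watt
# figures quoted above (1,000 ops/W for solid state vs. 5 ops/W for
# high-end disk). The 100,000 ops/sec workload is a hypothetical.
TARGET_OPS = 100_000

ssd_watts = TARGET_OPS / 1_000
disk_watts = TARGET_OPS / 5

print(f"Solid-state tier: {ssd_watts:,.0f} W")
print(f"Disk tier:        {disk_watts:,.0f} W ({disk_watts / ssd_watts:.0f}x more)")
```

At these rates the disk tier draws 200 times the power of the solid-state tier for the same I/O load, before counting the cooling that the extra spindles require.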
Solid-state storage challenges
Reliability: NAND flash has been hampered by wear-out concerns. A flash-memory cell permits only a finite number of writes before it becomes unusable. "100,000 write cycles for an SLC flash cell is quite typical, although in reality you are likely to get a significantly higher number of writes before a cell wears out," says Bob Wambach, EMC's senior director of product marketing for Symmetrix. This is an order of magnitude above the number of writes cited for consumer-level MLC flash, but it's still limited.
While 100,000 writes per flash cell may appear small, flash drive vendors like STEC Inc. have been able to warrant their drives for three-plus years with better MTBF specs (2 million to 4 million hours) than enterprise hard drives (typically 1 million hours) through a series of techniques: front-ending the flash storage with a small DRAM cache, wear-leveling algorithms that evenly distribute writes across blocks of cells, sophisticated bad-block management and continuous proactive drive monitoring. Together, these techniques allow NAND flash media to meet enterprise requirements. Additionally, storage array vendors are deploying solid-state drives in RAID configurations to further reduce the probability of data loss.
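The effect of wear leveling on drive life can be estimated with a simple model: if writes are spread perfectly evenly across all cells, the drive's total write budget is its capacity multiplied by per-cell endurance. The drive capacity and write rate below are hypothetical; the endurance figure is the 100,000-cycle SLC number cited above.

```python
# Rough drive-lifetime estimate under ideal wear leveling.
# Capacity and write rate are hypothetical; endurance is the SLC
# figure cited in the article.
ENDURANCE_CYCLES = 100_000   # write cycles per SLC cell
CAPACITY_GB = 146            # hypothetical drive capacity
WRITE_RATE_MB_S = 50         # hypothetical sustained write load

# Total data writable over the drive's life if every cell shares the load.
total_writable_gb = ENDURANCE_CYCLES * CAPACITY_GB
seconds = total_writable_gb * 1024 / WRITE_RATE_MB_S
years = seconds / (3600 * 24 * 365)
print(f"Estimated life under constant load: {years:.1f} years")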
Read-write performance gap: The substantially slower write performance of flash cells vs. reads has been another area of concern. Both the limits on a flash cell's lifecycle and its slow write performance stem from the way flash cells are written: cells can't be rewritten in place and are organized into erase blocks. Before new data can be written, the target block's existing content must be erased, and any still-valid data in the block must be preserved and rewritten along with the update. This adds significant overhead when writing and updating data.
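The erase-before-write cycle can be illustrated with a toy model: updating a single page in place forces the drive to read out, erase and reprogram the entire block. The 64-page block size is an illustrative assumption; real NAND geometries vary.

```python
# Toy model of NAND erase-before-write: updating one page forces the
# whole erase block to be read, erased and reprogrammed.
# (64 pages per block is an illustrative assumption.)
PAGES_PER_BLOCK = 64

def update_page_in_place(block, page_index, new_data):
    """Naive in-place update: read whole block, erase it, rewrite all pages."""
    saved = list(block)                   # preserve every valid page
    saved[page_index] = new_data          # apply the single-page change
    block[:] = [None] * PAGES_PER_BLOCK   # erase the entire block
    block[:] = saved                      # reprogram all pages, not just one
    return PAGES_PER_BLOCK                # physical page writes per logical write

block = [f"page{i}" for i in range(PAGES_PER_BLOCK)]
writes = update_page_in_place(block, 3, "updated")
print(f"{writes} physical page writes for 1 logical write")
```

Real controllers avoid most of this cost by remapping: the updated page is written to an already-erased location elsewhere and the old copy is marked invalid, which is where the wear-leveling and DRAM-caching techniques described above come into play.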
Through techniques such as the use of a small DRAM cache in the drive, enterprise-level flash drive vendors have been trying to close the performance gap between reads and writes. "Our enterprise-level Zeus drives support 18,000 random write IOPS and 52,000 random read IOPS," reports Pat Wilkinson, STEC's VP of marketing and business development. This is roughly two orders of magnitude above the few hundred IOPS supported by high-end disk drives, where IOPS can only be scaled by increasing the number of spindles (see "Performance comparison," below).
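Those figures put the spindle-count argument in perspective. A quick calculation, assuming roughly 300 random IOPS per high-end FC spindle (an illustrative figure) against the Zeus numbers quoted above:

```python
import math

# How many high-end FC spindles it takes to match one enterprise SSD,
# using the Zeus IOPS figures quoted above and an assumed ~300 random
# IOPS per spindle.
DISK_IOPS = 300            # assumed random IOPS per high-end FC spindle
ZEUS_READ_IOPS = 52_000    # quoted random read IOPS
ZEUS_WRITE_IOPS = 18_000   # quoted random write IOPS

read_spindles = math.ceil(ZEUS_READ_IOPS / DISK_IOPS)
write_spindles = math.ceil(ZEUS_WRITE_IOPS / DISK_IOPS)
print(f"Spindles to match reads:  {read_spindles}")
print(f"Spindles to match writes: {write_spindles}")
```

Matching a single flash drive's random-read performance would take well over a hundred FC spindles under these assumptions, along with the enclosures, power and cooling they imply.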
Array interoperability issues: Array vendors need to ensure that the solid-state option doesn't adversely impact their array's reliability and performance. The array must be able to deal with the high performance of solid-state drives, which will likely push the array to its limits. From replication and mirroring to thin provisioning, all features need to work with the solid-state option in place. Most importantly, vendors need to ensure that their array architecture can cope with the peculiarities of NAND flash. A case in point for the latter is NetApp and the Write Anywhere File Layout (WAFL). WAFL was designed to reduce disk head movements and eliminate random writes. For that, WAFL continuously moves data in an attempt to serialize access. This makes it more difficult for NetApp to just replace disk drives with flash drives, as flash drives would wear out much faster than in more traditional storage arrays where data on the spindles is more static. While NetApp won't comment on it, this is likely one of the reasons why the company opted for a cache-based, solid-state option for its first-generation solid-state offering rather than replacing disk drives with solid-state drives.
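The write-anywhere behavior described above can be sketched with a toy allocator (a simplified illustration, not WAFL itself): every logical update is appended at the head of a log, so writes are serialized, but each update lands on a fresh physical location.

```python
# Toy write-anywhere allocator (a simplified illustration, not WAFL):
# every logical update is appended at the log head, so writes are
# serialized but data keeps moving across the media.
class WriteAnywhere:
    def __init__(self, num_blocks):
        self.media = [None] * num_blocks
        self.next_free = 0
        self.location = {}             # logical block -> physical block

    def write(self, logical, data):
        phys = self.next_free          # always write at the log head
        self.media[phys] = data
        self.location[logical] = phys  # old copy is simply abandoned
        self.next_free += 1
        return phys

wa = WriteAnywhere(8)
first = wa.write("blockA", "v1")
second = wa.write("blockA", "v2")      # the update relocates the block
print(f"blockA moved from physical block {first} to {second}")
```

On rotating disks this constant relocation is nearly free and avoids head movement; on flash, it turns every update into a write to a new location, consuming write cycles, which illustrates why an architecture tuned for disks can't simply swap in flash drives.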
Solid-state storage is finding its way into storage systems in a variety of ways:
Solid-state disks that replace hard disks are the easiest way of adding solid-state storage to an existing array. "The biggest challenge in adding solid-state storage to the Symmetrix array family was ensuring seamless integration between the solid-state tier and other tiers, and making sure that all features continue to work flawlessly," says EMC's Wambach.
First-generation solid-state implementations like the EMC offering add SSDs without changes to the array architecture. Because arrays were designed for hard disk performance, array controllers are becoming the bottleneck. Even a high-end array like the Symmetrix DMX-4 could be pushed to its performance limits if too many solid-state drives were added. Therefore, sizing the array and putting the right number of solid-state drives into it is essential to ensuring predictable overall array performance. "As solid-state storage becomes more prevalent, array vendors will redesign their arrays to better cope with the high performance of solid-state drives," explains IBM's Barrera.
Solid-state storage as cache is the architecture chosen by NetApp in its first-generation solid-state offering. "Solid-state storage is challenging and unproven in enterprise storage systems," says Chris Bennett, NetApp's VP of core systems. "Therefore, we decided to go with a more conservative approach and use solid-state storage as cache only."
More specifically, NetApp will use solid-state memory to cache metadata. By storing a copy of the metadata on a NAND flash PCI Express card in the storage controller, metadata can be accessed at memory speed, with only the data itself needing to be fetched from disk drives. The result is a significant performance boost. "By accessing metadata from solid-state memory, we're seeing a 40% performance gain for applications like Exchange," reports Bennett.
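The general metadata-caching pattern is straightforward to sketch. The class below is a minimal illustration of the idea, not NetApp's implementation; the paths and metadata fields are made up.

```python
# Minimal sketch of metadata caching (an illustration of the general
# pattern, not NetApp's implementation): keep a copy of metadata in a
# fast tier so repeated lookups avoid a disk access.
class MetadataCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # slow tier (disk)
        self.cache = {}                # fast tier (flash or DRAM)
        self.disk_reads = 0

    def lookup(self, path):
        if path in self.cache:         # hit: served at memory speed
            return self.cache[path]
        self.disk_reads += 1           # miss: fetch metadata from disk
        meta = self.backing[path]
        self.cache[path] = meta        # keep a copy in the fast tier
        return meta

store = {"/mail/db1": {"size": 4096, "blocks": [17, 42]}}  # made-up entry
mc = MetadataCache(store)
mc.lookup("/mail/db1")                 # first access goes to disk
mc.lookup("/mail/db1")                 # repeat access served from cache
print(f"Disk reads: {mc.disk_reads}")  # 1, not 2
```

Every lookup after the first is answered from the flash copy, which is why metadata-heavy workloads like Exchange see an outsized benefit relative to the amount of solid-state capacity deployed.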
Like NetApp, Gear6 sees the near- to mid-term role of solid-state storage as cache rather than a disk drive replacement. "Using memory as a persistent storage device is more trouble than it's worth because of very different management requirements," says Gary Orenstein, Gear6's VP of marketing. The firm's Cachefx appliance provides a pool of DRAM or NAND flash that sits between clients and NAS devices to speed file access. Cachefx is accessed like a NAS device, except it currently supports only the NFS file-system protocol.
Dedicated solid-state storage systems from the likes of Texas Memory Systems (TMS) Inc. are another way of bringing solid-state storage into the data center. Instead of enhancing existing arrays with solid-state storage, they plug into existing SANs and are accessed like traditional disk arrays. This is a very clean way of adding a Tier 0, as it eliminates the need to tamper with disk-based arrays. Depending on performance requirements, dedicated solid-state systems like TMS' RamSan family are available in DRAM and NAND flash configurations, giving customers more performance configuration options. Unlike disk array vendors with nascent solid-state offerings, vendors like TMS have a long history of selling solid-state-based storage systems. On the downside, a dedicated solid-state system requires managing another system with its own management tools. Moreover, these systems won't be able to leverage the resilience and features available in high-end disk arrays. And as NAND flash prices decline, adding solid-state storage to an existing array will become more cost-effective than acquiring a separate storage system.
EMC and smaller firms like Nimbus Data Systems have been the first to ship products with a solid-state Tier 0. Other array vendors haven't committed to a solid-state offering, but are keeping watch. "Hitachi Data Systems is currently exploring support for SSD drives," says Roberto Basilio, the firm's senior director of enterprise storage product management. Hewlett-Packard Co. plans to offer solid-state storage for its StorageWorks XP24000 array (an OEM product from Hitachi) within the next nine to 18 months, says Patrick Eitenbichler, director of marketing for the StorageWorks Division. And "IBM intends to ship a Tier 0 for its System Storage DS8000 high-end arrays soon," says Barrera.
In the next two to three years, Tier 0 solid-state storage will become a standard array option, but it will come at a premium price and users will deploy it selectively. In the longer term, as $/gigabyte pricing for NAND flash decreases, solid-state drives will begin to replace high-end disk drives, but this isn't likely to happen for at least another five years.