Solid-state storage update

Solid-state storage is still mostly for well-heeled shops with power-hungry apps, but new developments could bring solid state down to earth soon.


Solid-state storage received a big boost in 2009, with a large majority of storage vendors adding solid-state drive (SSD) options to their product lists. As a result, we've seen a sharp increase in the total number of enterprise-grade SSD units shipped. A meager 59,000 units were sold worldwide in 2008, according to Stamford, Conn.-based Gartner Inc., but the total is expected to reach 5.1 million units and $2 billion in revenue by 2013. Although the price of NAND flash has come down by approximately 30% since last year -- with expectations that it will continue to decline annually at that rate -- it's still an order of magnitude more expensive than high-end disk drives. Because of its premium price, customers continue to deploy NAND flash judiciously, mostly for applications that are averse to latency and require a high number of IOPS; in the past, expensive, bulky arrays with a large number of spindles were the only alternative.

That's where solid-state storage shines today: A single enclosure of SSDs can displace a rack of high-end Fibre Channel (FC) drives at an overall lower cost, provide better performance, require significantly less power and space, and greatly reduce data center and operational complexity. Solid-state drives can also supplement disk arrays with a small amount of solid-state storage for frequently accessed data to boost array and application performance. OLTP apps like SAP and Oracle ERP, databases, email servers, high-transaction websites and even virtualization platforms are the great beneficiaries of solid-state storage. Whenever hard disk I/O and latency become the limiting factors, solid-state storage is an alternative. Conversely, and as a result of the high per-gigabyte cost of SSD, whenever large capacity is needed, hard disks continue to be the storage medium of choice.

Even though solid-state storage can be implemented with DRAM, NAND flash and other memory technologies, NAND flash is the prevailing solid-state drive memory technology in use today. In addition to non-volatile memory, enterprise-grade SSD products typically come with a small amount of DRAM that acts as write-buffer and cache, a controller with storage interfaces (FC, SATA or SAS) and software. Today, it's mostly the intelligence and proprietary algorithms in controllers that overcome the limitations of NAND flash, making it viable in the enterprise space. "Because of its better controller technology and algorithms to manage NAND flash, STEC [Inc.] has by far the largest number of design wins in the enterprise storage space today," said Joe Unsworth, research director in Gartner's Technology and Service Provider Group.

Glossary of SSD terms

 

Solid-state drives (SSDs): SSDs use memory chips, mostly non-volatile NAND flash, instead of rotating platters for data storage. The benefits of low latency, low power consumption and higher resilience compared to disk drives are a result of not having any mechanical parts.

Flash memory: Flash is non-volatile, rewritable memory. Unlike DRAM, it requires erasing blocks of data before they can be written to, resulting in a lower write than read performance. Depending on the technology, flash memory supports only a finite number of writes. Although flash memory is available as NOR or NAND flash, SSD products use NAND flash because it's more durable, less expensive, its cells are denser, and writing and erasing are quicker compared to NOR flash.

Single-Level Cell (SLC): SLC NAND flash stores one bit per cell. Because of its high endurance (approximately 100,000 writes per cell), SLC is predominantly used in enterprise-grade SSD offerings despite its higher cost.

Multi-Level Cell (MLC): MLC NAND flash uses two bits per cell. With about one-tenth of the endurance of SLC NAND flash and a fraction of the cost of SLC flash, MLC is mostly used in consumer products. Newer 3-bit per cell (1,000 to 5,000 supported writes) and 4-bit per cell (a few hundred supported writes) NAND flash are targeted for applications with a very limited number of writes.

 

Challenges of NAND flash and SSDs

Unlike DRAM, NAND flash is non-volatile and retains data just as hard disks do, but without depending on vulnerable mechanical parts and while requiring significantly less power. But these benefits are offset by shortcomings the storage industry has been attempting to address for several years:

  • Durability issues with NAND flash
  • Low write performance of NAND flash
  • Inadequate software to efficiently support solid-state drives
  • Architectural shortcomings of storage systems that have been designed for mechanical disks

Durability issues of NAND flash

The most severe issue with NAND flash is the wear-out of cells, which limits a cell's life span to a finite number of writes. While consumer-grade multi-level cell (MLC) flash permits approximately 10,000 writes per cell, enterprise-grade single-level cell (SLC) flash supports about 100,000 writes per cell before becoming unusable. The wear-out problem worsens as density increases. The roughly 10,000 supported writes of the 2-bit-per-cell MLC flash used in consumer-grade products look generous compared with the newer 3-bit-per-cell offerings and their 1,000 to 5,000 supported write cycles, and the few hundred supported writes of 4-bit-per-cell flash. The data storage industry has been contending with this simple rule of NAND flash: as density increases, both cost and durability decrease.
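
To put those endurance figures in perspective, the back-of-the-envelope calculation below -- a sketch with hypothetical workload numbers, not vendor data -- shows how cell endurance and daily write volume translate into device life span.

```python
# Back-of-the-envelope device lifetime from cell endurance; all numbers are illustrative.
def lifetime_years(capacity_gb, write_cycles, daily_writes_gb, write_amplification=2.0):
    """Years until wear-out for a flash device under a steady write workload."""
    total_writable_gb = capacity_gb * write_cycles               # ideal total data before wear-out
    effective_daily_gb = daily_writes_gb * write_amplification   # internal housekeeping inflates writes
    return total_writable_gb / effective_daily_gb / 365.0

# Same hypothetical workload (500 GB written per day) on SLC vs. consumer-grade MLC
print(round(lifetime_years(146, 100_000, 500), 1))  # SLC, ~100,000 cycles: decades
print(round(lifetime_years(146, 10_000, 500), 1))   # MLC, ~10,000 cycles: roughly a tenth of that
```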

With SLC NAND flash now capable of meeting enterprise storage requirements and accepted in the enterprise space, storage vendors are trying to further decrease costs by bringing MLC flash into the enterprise realm. Specifically, they're looking to use 2-bit-per-cell multi-level cell flash to compete with single-level cell flash, and 3-bit and 4-bit-per-cell flash for read-intensive applications with scant write requirements, such as data archival.

"It's not a question if MLC flash can be used in enterprise storage systems, but a question of what it takes to make it happen at an acceptable cost," said Mark Peters, an analyst at Enterprise Strategy Group (ESG) in Milford, Mass. There are already a few instances where MLC flash is used in the enterprise space today. The most prominent example is the Hewlett-Packard (HP) Co. StorageWorks IO Accelerator for HP BladeSystem c-Class, a direct-attached, solid-state storage array mezzanine card; the HP product is based on Fusion-io's ioDrive, and uses both SLC and MLC flash depending on capacity.

Enterprise-grade solid-state drive vendors have employed a variety of techniques that enable their products to match and even exceed the life span and durability of mechanical disk drives. With SSD drives warranted for three to five years -- depending on the SSD vendor -- and mean time between failures (MTBF) north of 1 million hours, enterprise-grade SSD drives are at least as durable as high-end disk drives.

"By now, we consider SSD drives as reliable as high-end FC drives," noted Claus Mikkelsen, chief technology officer, storage architectures at Hitachi Data Systems. To attain this degree of durability, sophisticated wear-level algorithms that reduce the number of writes and distribute writes evenly among flash cells have been devised and implemented in solid-state drive controllers. The use of spare capacity, which typically ranges from 20% to 100% of usable capacity, extends the life span of SSDs by reducing the number of times cells are written to during a given time period and providing the extra capacity to replace defunct cells. Compression and data deduplication algorithms are used to maximize efficiency and reduce the number of writes per cell. And similar to high-end mechanical disks, enhanced error-correction algorithms are used to find, fix and isolate bad blocks. "Error-correction codes [ECCs] used to occupy four or five bits per 512 byte block; now six to eight bits are common and we're seeing it move to 12 bits," Gartner's Unsworth explained.

Low write performance of NAND flash

The other severe handicap of NAND flash is its slanted read-write performance ratio (see "NAND flash solid-state drives vs. disk," below). While enterprise SSDs are capable of delivering a sustained read performance greater than 40,000 I/Os per second, write performance typically lags by a factor of three or four. The discrepancy is caused by NAND flash's requirement that blocks be erased before they can be written, which adds substantial overhead. That's also why NAND flash storage shows significantly higher write performance as long as erased cells are available, but performance declines by a factor of two to three once they run out.

NAND flash solid-state drives vs. disk

 

                             NAND flash SSD                  Disk drive
I/O per second (sustained)   Read: 45,000+; Write: 15,000+   Few hundred
Latency (milliseconds)       Read: 0.2+; Write: < 1          4+
Cost/GB                      High                            Low
Cost/IOPS                    Low                             High
Resilience                   High                            Lower, because of mechanical components
Power consumption            Low                             Higher

 

"Since NAND flash-based SSD products show great write performance the first 15 to 20 minutes, it's pertinent to compare their sustainable performance rather than their inflated burst performance," cautioned Woody Hutsell, president at Texas Memory Systems.

The STEC Zeus IOPS solid-state drive with its maximum 52,000 sustainable read IOPS and 17,000 write IOPS, according to the company, currently dominates in the enterprise storage space and has established a baseline that other SSD offerings are compared to. "With the latest STEC drives, write performance is enterprise ready, but clearly not on par with read performance," said Kyle Fitze, marketing director, HP StorageWorks Storage Platforms Division. Unfortunately, no independent third-party tests for enterprise-grade SSD products are available at this point and, as a result, performance numbers cited by vendors should be taken with a grain of salt.

To overcome the read-write performance gap, most vendors are deploying a small DRAM cache that acts as a write buffer; that is, data is written to the cache first and then to NAND flash. "A DRAM write buffer doesn't quite get you to read performance, but it gets you closer," noted Clod Barrera, chief technical strategist for IBM System Storage. Even though DRAM helps close the gap, it adds the complexity of having to back up the volatile data in cache in case of a power failure and consumes valuable real estate. "As we move to 1.8-inch drives and custom form factors, you want to enable the highest density, and DRAM clearly is prohibitive," said Thad Omura, vice president of marketing at SandForce Inc., a developer of SSD processors.
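
Conceptually, a DRAM write buffer works like the sketch below: writes are acknowledged as soon as they land in DRAM and are flushed to flash in the background. The class names and page counts are illustrative, and the comment marks the power-loss exposure vendors have to engineer around.

```python
# Conceptual DRAM write buffer: acknowledge writes once they land in DRAM,
# flush to flash in the background. Class names and sizes are illustrative.
from collections import deque

class FlashStub:
    """Stand-in for the NAND flash back end; just stores pages in a dict."""
    def __init__(self):
        self.pages = {}
    def program(self, page, data):
        self.pages[page] = data              # in real flash this is the slow, erase-bound path

class WriteBuffer:
    def __init__(self, flash, capacity_pages=1024):
        self.flash = flash
        self.pending = deque()
        self.capacity_pages = capacity_pages

    def write(self, page, data):
        if len(self.pending) >= self.capacity_pages:
            self.flush()                      # buffer full: writes now proceed at flash speed
        self.pending.append((page, data))     # fast path: acknowledged at DRAM speed
        return "ack"

    def flush(self):
        # Anything still here is lost on power failure unless the DRAM is
        # battery- or capacitor-backed -- the complexity the vendors point out.
        while self.pending:
            self.flash.program(*self.pending.popleft())

buf = WriteBuffer(FlashStub(), capacity_pages=4)
for lba in range(10):
    buf.write(lba, b"data")
buf.flush()
```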

Because of these shortcomings and the inability to completely make up for the write handicap of NAND flash, newer and more innovative solid-state drive developments shun the DRAM write-buffer approach. Pliant Technology Inc. claims to achieve more than 100,000 IOPS for both reads and writes. The commercially available Texas Memory Systems RamSan-620 is capable of sustaining 250,000 IOPS for both reads and writes, according to the firm's Hutsell. Like the RamSan-620 and Pliant Technology's Enterprise Flash Drives (EFDs), SandForce's SF-1000 family of SSD processors, which interfaces with both MLC and SLC flash, forgoes a DRAM write buffer. All three vendors emphasize parallelization as the key to overcoming the write performance gap and to an overall increase in the number of supported IOPS.

"It's our custom parallel-processor architected ASIC that enables us to perform many of the housekeeping tasks, such as pre-erasing of unused blocks, concurrently, and it enables us to get write performance in line with read performance," explained Greg Goelz, vice president of marketing at Pliant Technology.

Inadequate software support for SSD

While significant progress has been made to overcome, or at least mitigate, the issues related to NAND flash, software support to manage and efficiently take advantage of solid-state storage has evolved at a much slower pace, becoming one of the primary obstacles to more rapid enterprise adoption of SSDs. To counteract the prohibitive effect of the high price of solid-state drives, storage systems need to maximize the use of SSDs by automatically and transparently shuffling data between the fast SSD tier and slower disk tiers. Most storage vendors acknowledge the need for policy-based data migration between the fast but expensive SSD tier and disk tiers -- keeping frequently accessed data in solid-state storage and more static data on disks -- but only a few can offer an automated solution today.

Leading the pack is Compellent Technologies Inc.'s Storage Center storage-area network (SAN). Its Dynamic Block Architecture tracks the characteristics and usage of every data block; this metadata information is leveraged by the product's Data Progression feature, which automatically moves data from SSDs to disk tiers and vice versa based on how often blocks are accessed.
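
Compellent hasn't published its algorithms, but the general pattern of access-frequency-based tiering can be illustrated as follows: count accesses per block, promote blocks that cross a threshold to the SSD tier, and demote blocks that go cold. The thresholds and decay step below are hypothetical.

```python
# Minimal sketch of access-frequency tiering: hot blocks move to SSD, cold blocks
# back to disk. Thresholds and the decay step are hypothetical, not Compellent's policy.
from collections import Counter

class BlockTierer:
    def __init__(self, promote_at=100, demote_at=10):
        self.access_counts = Counter()   # per-block metadata, updated on every I/O
        self.on_ssd = set()
        self.promote_at = promote_at
        self.demote_at = demote_at

    def record_io(self, block):
        self.access_counts[block] += 1

    def rebalance(self):
        for block, count in list(self.access_counts.items()):
            if count >= self.promote_at:
                self.on_ssd.add(block)               # hot: migrate to the SSD tier
            elif count <= self.demote_at:
                self.on_ssd.discard(block)           # cold: migrate back to the disk tier
            self.access_counts[block] = count // 2   # decay, so old heat fades over time

tierer = BlockTierer()
for _ in range(150):
    tierer.record_io("block-7")
tierer.rebalance()
print(tierer.on_ssd)   # {'block-7'}
```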

"Our Data Progression is the killer app for SSD because users can add drives to existing systems and then let automation take over," said Bob Fine, Compellent's director of product marketing. Contrary to Compellent, the majority of enterprise storage vendors depend on a manual two-way process for migrating data between solid-state drives and disk tiers, first analyzing I/O activity and, in a second step, migrating data to the appropriate tier. Depending on a manual process for now, EMC Corp. has announced Fully Automated Storage Tiering (FAST), which will be available for EMC's Symmetrix V-Max systems later this year. FAST will automate the movement of data across multiple storage tiers based on business policies, predictive models and real-time access patterns. IBM supports automatic data migration to SSD via its Data Facility Storage Management Subsystem (DFSMS), but it's only available on the mainframe z/OS platform with DS8000 storage, with a manual two-way process still required for other systems.

Both Sun Microsystems Inc.'s Sun Storage 7000 Unified Storage Systems and NetApp Inc. filers with Performance Acceleration Modules (PAM) sidestep the software challenge at the storage architecture level by using NAND flash as cache rather than as a disk replacement. As a result, solid-state storage is closely woven into their storage architectures and firmware, with the advantage that all data and apps benefit from solid-state drives, eliminating the requirement to shuffle data between tiers. "We want our Storage 7000 customers to have all of their working data in flash," said Michael Cornwell, Sun's lead technologist for flash memory.

SSD architectures

Contemporary storage systems have been designed to cope with the limitations of mechanical disk drives, in particular to reduce the impact of high latency and the low number of IOPS mechanical disks can support. With SSDs, this basic truth has changed, and the performance limits of storage controllers have become the bottleneck. Simply replacing disk drives with SSDs can overwhelm storage systems if too many solid-state drives are added. "Storage controllers are just starting to adjust to the new performance requirements of SSD, and today customers need to heed the recommendations and guidelines of storage vendors on how many SSDs they can add," said Greg Schulz, founder and senior analyst at Stillwater, Minn.-based StorageIO Group.
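
A rough sizing check illustrates why those guidelines matter: a controller designed around disk-era IOPS assumptions is saturated by only a handful of SSDs. All figures below are hypothetical.

```python
# Rough sizing check: how many drives does it take to saturate a controller?
# All figures are hypothetical.
controller_iops_budget = 120_000   # what the array controller can service
ssd_iops = 45_000                  # sustained read IOPS of one enterprise SSD
fc_disk_iops = 300                 # a fast mechanical FC drive, for comparison

print(controller_iops_budget // ssd_iops)      # a handful of SSDs hit the ceiling
print(controller_iops_budget // fc_disk_iops)  # versus hundreds of disk drives
```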

There are currently four methods to complement storage systems with solid-state storage:

  1. Adding SSD drives in lieu of disk drives
  2. The use of NAND flash as cache in storage controllers
  3. The use of NAND flash on servers rather than storage controllers
  4. Standalone SSD arrays

Adding SSD drives in lieu of disk drives. Adding SSD drives via Fibre Channel, SATA or SAS interfaces to replace disk drives is the easiest and most popular way of adding solid-state drive support to existing arrays. Aside from rigorous testing and qualification procedures, this approach requires few if any changes to storage systems because vendors can leverage what's already in place. The lack of automated data migration between SSD and disk tiers, and the performance limitations of contemporary storage controllers, are the two biggest drawbacks of this approach. Nevertheless, it's the method adopted by most storage vendors. EMC has been joined by Compellent, Fujitsu, HP, Hitachi Data Systems, IBM, LSI Corp., Pillar Data Systems, Sun and many smaller array vendors in offering SSD drives in addition to hard disks for some of their arrays. The overwhelming majority of these vendors have been using STEC drives as their first-generation SSDs, largely because STEC was the first vendor capable of meeting enterprise storage requirements. But with disk drive vendors like Seagate Technology LLC, promising startups like Pliant Technology and SandForce, and Intel Corp. targeting the enterprise storage space, STEC's predominance will be challenged.

The use of NAND flash as cache in storage controllers. NetApp and Sun are leveraging NAND flash as cache. By doing so, both vendors have overcome the software issue of automated data migration between SSD and disk tiers, although it required changing their storage architectures to embrace NAND flash; it also eliminates the possibility of overwhelming their arrays if too much solid-state storage is added. By front-ending disk drives with NAND flash instead of replacing them, all data and apps benefit from SSD, not only data that resides within the SSD tier.

NetApp offers the Performance Acceleration Module (PAM), which can be added to any NetApp filer with available PCI Express slots. Depending on the controller, up to five modules can be installed for a unified cache as large as 80 GB today and up to 512 GB later in the year when a higher density PAM card will become available. PAM is used to cache metadata only. "By storing a copy of the metadata in flash memory on the storage controller, we're seeing a 30% to 50% performance gain for typical workloads," said Patrick Rogers, vice president, solutions marketing at NetApp. "Filers with PAM and SATA drives have become a viable alternative, replacing filers with FC drives, because of comparable performance at a significantly lower cost," he said.

Unlike NetApp, Sun uses flash memory in its Sun Storage 7000 Unified Storage Systems to cache all reads and writes -- not only metadata -- and therefore has one of the most advanced architectures to support flash memory.

The Sun Storage 7000 Unified Storage Systems run Solaris on an x86 platform with an optimized storage stack and the Zettabyte File System (ZFS) that supports a Hybrid Storage Pool of DRAM cache, SSD and mechanical disks. The solid-state drive is situated between the DRAM-based Adaptive Replacement Cache (ARC) and SATA drives. The ZFS Intent Log (ZIL), which holds the write journal to allow the file system to recover from system failures, is written to a write-optimized SSD. The L2ARC cache comprises read-optimized SSDs to extend the DRAM-based ARC cache for read operations; L2ARC can be hundreds of gigabytes in size, and its purpose is to keep working data in memory to minimize disk access. This Hybrid Storage Pool enables the Sun Storage 7000 Unified Storage Systems to support more than 800,000 IOPS, according to Sun.
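
The read path of such a hybrid pool can be sketched as a simple lookup order -- DRAM first, then the SSD read cache, then disk -- with misses promoted upward. The sketch below is a conceptual illustration, not ZFS code.

```python
# Conceptual read path of a hybrid storage pool: DRAM cache, then SSD read cache,
# then disk, with misses promoted upward. Illustrative only -- not ZFS code.
class HybridPool:
    def __init__(self, dram_cache, ssd_cache, disks):
        self.dram = dram_cache   # dict-like: smallest, fastest tier (the ARC's role)
        self.ssd = ssd_cache     # dict-like: hundreds of GB (the L2ARC's role)
        self.disks = disks       # dict-like: largest, slowest tier

    def read(self, block):
        if block in self.dram:
            return self.dram[block]
        if block in self.ssd:
            data = self.ssd[block]
            self.dram[block] = data      # promote to DRAM on an SSD hit
            return data
        data = self.disks[block]         # miss everywhere: go to disk
        self.ssd[block] = data           # warm the SSD cache for next time
        self.dram[block] = data
        return data

pool = HybridPool(dram_cache={}, ssd_cache={}, disks={"blk1": b"cold data"})
print(pool.read("blk1"))   # first read hits disk, then warms both caches
print("blk1" in pool.ssd)  # True
```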

The use of NAND flash on servers rather than storage controllers. Although the Sun Storage 7000 Unified Storage Systems is a standalone storage system, it makes the point for those who argue that flash memory belongs in the server rather than the storage controller. "Just like L2 cache extends memory on the CPU and DRAM extends L2 cache, flash memory is intended to extend DRAM," explained David Flynn, chief technology officer at Fusion-io. The Fusion-io ioDrive and ioDrive Duo NAND flash PCI Express cards provide direct-attached storage (DAS) for servers. Being a server company that also sells storage, Sun concurs that servers are the right place for flash memory. "Flash memory is a game-changer for server architectures, and next-generation servers will extend DRAM caches with flash memory," Sun's Cornwell said.

Standalone SSD arrays. Complementing disk arrays with SSD-based storage systems that run parallel to traditional storage arrays is the least-disruptive method of adding solid-state storage to a storage environment. The leading vendor of standalone SSD arrays is Texas Memory Systems. Offering both DRAM and NAND flash-based SSD arrays, the company sells its RamSan family of products directly and through OEM relationships with BlueArc Corp., NetApp and others. On the downside, standalone solid-state systems aren't able to leverage existing array components and are therefore likely to be more expensive. Moreover, they're less integrated with the disk tier than other architectural approaches, making it even more difficult to overcome the data migration challenge between the solid-state drive and disk tiers.

Solid-state outlook

Solid-state storage has just begun to play a role in enterprise-level systems, but it's apparent that its rise is unstoppable. Enterprise storage systems are moving toward two-tier architectures, namely, a solid-state drive tier for transactional and changing data, and a large capacity SATA disk tier for more static data. With the continuous innovation that has overcome some of the limitations of NAND flash, as well as newer memory technologies like magnetoresistive random access memory (MRAM) on the horizon to eventually replace NAND flash, the real challenge to rapid adoption of SSDs is the lack of storage architectures that are capable of seamlessly integrating and efficiently taking advantage of solid-state drives.

BIO: Jacob Gsoedl is a freelance writer and a corporate director for business systems. He can be reached at jgsoedl@yahoo.com.

This was first published in September 2009
