This article can also be found in the Premium Editorial Download "Storage magazine: What you need to know about all solid-state arrays."
Without question, the advent of solid-state storage has been a disruptive event.
“Flash is still an order of magnitude more expensive than high-end disk drives and it’s therefore important to use it wisely and deploy it for the right tasks,” said Eric Herzog, senior vice president, product management and marketing for EMC’s Unified Storage Division.
Today, solid-state storage can be deployed in three ways:
Solid-state disks in place of mechanical disks. Replacing disk drives with SSDs is the simplest way of boosting array performance. When opting for this route, however, it’s crucial to verify the impact of SSDs with the array vendor and to heed the vendor’s guidelines. SSDs can wreak havoc on a storage system if its processors can’t support the high performance of solid-state storage, and a performance problem can quickly turn from bad to worse if SSDs overwhelm the storage controllers. Another issue relates to the mechanics of how and when data is moved onto and off solid-state drives. In its simplest and least preferable form, SSD capacity can be allocated manually to certain applications, such as database log files. While this may be the only option for older arrays, automated mechanisms, such as EMC’s Fully Automated Storage Tiering (FAST), are preferable.
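The difference between manual allocation and automated tiering can be sketched as a simple promotion policy. The Python sketch below is purely illustrative — the class, the access-count threshold, and the block-level granularity are invented for this example and do not reflect how FAST or any other vendor product actually works (real auto-tiering typically moves data at the sub-LUN level under administrator-defined policies):

```python
from collections import Counter

class TieredStore:
    """Toy two-tier store: promote frequently read blocks from HDD to SSD.

    Illustrative only; real automated tiering products use far more
    sophisticated, policy-driven data movement.
    """

    def __init__(self, ssd_capacity, promote_threshold=3):
        self.ssd_capacity = ssd_capacity          # blocks that fit on SSD
        self.promote_threshold = promote_threshold
        self.access_counts = Counter()
        self.ssd_tier = set()                     # hot blocks on SSD
        self.hdd_tier = set()                     # everything else

    def write(self, block):
        self.hdd_tier.add(block)                  # new data lands on HDD

    def read(self, block):
        self.access_counts[block] += 1
        hit_ssd = block in self.ssd_tier
        # Promote a hot block once it crosses the access threshold,
        # as long as the SSD tier has room.
        if (not hit_ssd
                and self.access_counts[block] >= self.promote_threshold
                and len(self.ssd_tier) < self.ssd_capacity):
            self.hdd_tier.discard(block)
            self.ssd_tier.add(block)
        return "ssd" if hit_ssd else "hdd"
```

With manual allocation, an administrator would decide up front which application data sits on SSD; here the promotion decision is made automatically as access patterns emerge.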
Flash as cache on storage systems. Using flash as cache to extend the relatively small DRAM cache avoids many of the challenges associated with replacing disk drives with SSDs. Since a flash cache is part of the storage system architecture, the storage controllers are designed to support whatever amount of flash the array permits. Flash as cache also resolves the tiering challenge: by definition, a cache will always contain the most active data while stale data resides on mechanical disks. And while solid-state drives only benefit data that resides on SSD, a flash cache benefits all data that traverses the storage system. A flash cache has few drawbacks; the main one is that it’s only an option in newer arrays. Complementing high-capacity drives with a flash cache is becoming a more common storage architecture choice because it yields arrays that combine high capacity with high performance.
“By adding 2% to 3% of SSD to a disk-based array, you can almost double the throughput,” said Ron Riffe, business line manager, storage software at IBM.
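The claim that a cache “will always contain the most active data” follows from the eviction policy: whenever the cache is full, the coldest block is discarded. A minimal Python sketch, using plain LRU eviction as a stand-in for whatever (usually more elaborate) admission and replacement policies a given array actually implements:

```python
from collections import OrderedDict

class FlashCache:
    """Toy read cache in front of disk: LRU eviction keeps hot data in flash.

    A stand-in for vendor cache designs, not a model of any real product.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()            # block -> data, in LRU order

    def read(self, block, backing_store):
        if block in self.cache:
            self.cache.move_to_end(block)     # refresh recency on a hit
            return self.cache[block], "flash hit"
        data = backing_store[block]           # miss: fetch from disk
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict the coldest block
        return data, "disk miss"
```

Because every read either refreshes or inserts the block at the hot end of the cache, cold data naturally ages out to disk without any manual placement decisions.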
Flash in servers. The closer data is to server processors and memory, the better the storage performance. Placing flash storage in servers via PCIe cards from the likes of Fusion-io yields optimal storage performance. On the downside, flash storage in servers usually isn’t shared, can only be used by applications that reside on the server, and is very expensive. Nevertheless, extending storage into servers is actively pursued by NetApp, with an initiative to make Data ONTAP available to run on hypervisors, and by EMC with its VFCache, formerly known as Project Lightning. The goal of both vendors is clearly to provide a very high-performance, server-side flash storage tier that integrates seamlessly with their external storage systems.
This was first published in August 2012