Solid-state storage was a hot topic at the Storage Decisions Conference in Chicago last week, as expert presenters explained its benefits while warning attendees not to go overboard in deploying expensive flash storage arrays.
Sessions dealing with flash storage were among the best attended, and solid-state was the main topic during a panel discussion that closed the show, as well as in hallway conversations between sessions.
The final session was a panel discussion featuring Jon Toigo, CEO of Toigo Partners International; Marc Staimer, president of Dragon Slayer Consulting; and Howard Marks, founder and chief scientist for DeepStorage.net, answering attendee questions.
They took questions on SSD reliability, including wear leveling. Marks said that when vendors claim wear-leveling technology extends flash life, what they are really doing is overprovisioning the amount of flash in the array so burned-out cells can be swapped out for spares.
Toigo added that flash vendors include so much overprovisioned flash storage in their arrays that unless you are really pounding the flash daily, it should last five to seven years.
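The interplay between wear leveling and overprovisioning that Marks and Toigo described can be sketched in a few lines. The simulation below is purely illustrative, not any vendor's algorithm; the block counts and erase limit are made-up numbers chosen to keep the output readable. It compares how many writes a small array survives when every write hammers one block at a time versus when writes are steered to the least-worn block, with two overprovisioned spares absorbing the first failures in both cases.

```python
BLOCKS = 8        # capacity exposed to the host (hypothetical)
SPARES = 2        # overprovisioned spare blocks (hypothetical)
ERASE_LIMIT = 100 # erases a block tolerates before it is retired

def simulate(wear_level):
    """Return how many writes the array absorbs before it can no
    longer expose its full advertised capacity."""
    erases = [0] * (BLOCKS + SPARES)
    retired = set()
    writes = 0
    while True:
        live = [b for b in range(len(erases)) if b not in retired]
        if len(live) < BLOCKS:
            break  # spares exhausted: usable capacity would shrink
        if wear_level:
            # Wear leveling: send the write to the least-worn block.
            target = min(live, key=lambda b: erases[b])
        else:
            # Naive: keep rewriting the same block until it dies.
            target = live[0]
        erases[target] += 1
        if erases[target] >= ERASE_LIMIT:
            retired.add(target)  # burned out; a spare takes its place
        writes += 1
    return writes

print("writes survived without wear leveling:", simulate(False))
print("writes survived with wear leveling:   ", simulate(True))
```

With even wear, the array absorbs roughly `blocks × erase limit` writes before the spares run out, which is where the multi-year lifespan estimates come from.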
While attendees seemed most interested in learning about how SSDs work, how reliable they are, and when they will become more affordable, the experts framed solid-state storage as one piece of an economically sound storage strategy.
The experts agreed that solid-state is best deployed as a performance tier or an acceleration tool. Within servers, it can serve as a read cache, or as internal SSDs or PCI Express cards that accelerate applications with the least latency. It can also sit in appliances between servers and a block- or file-based storage area network, or inside the SAN as a performance tier. All-flash arrays are the best-performing and most expensive option, and should be reserved for workloads that need extreme random I/O performance, such as online transaction processing.
But the rise of flash doesn't mean the demise of all other storage tiers. Hard drives remain good choices for backups and applications that do not require the performance of flash, which comes with a larger price tag. Tape is well suited for archiving, especially data held for compliance and historical trending. Cloud services are a good option for some use cases, particularly as standby virtual-server environments for disaster recovery.
During his session, "Decision Tree: Storage and Storage Networking for Virtualized Environments," Marks told his audience the advent of solid-state storage may require them to adjust their cost measurement from a capacity viewpoint to a performance viewpoint. To calculate the return on investment for solid-state storage, organizations need to look at the cost of SSDs in terms of performance, which means dollars per IOPS rather than dollars per gigabyte.
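The dollars-per-IOPS framing is easy to make concrete. The prices, capacities and IOPS figures below are hypothetical round numbers, not quotes from any vendor or from Marks' talk; the point is only that the two metrics rank the same hardware in opposite orders.

```python
# Hypothetical list prices and performance figures, for illustration only.
tiers = {
    #  name:            (price $, capacity GB, IOPS)
    "15K HDD array":    (20_000,  10_000,      5_000),
    "All-flash array":  (80_000,   5_000,    500_000),
}

for name, (price, capacity_gb, iops) in tiers.items():
    # Capacity view vs. performance view of the same box.
    print(f"{name}: ${price / capacity_gb:.2f}/GB, ${price / iops:.4f}/IOPS")
```

In these made-up numbers the flash array costs eight times more per gigabyte but twenty-five times less per IOPS than the disk array, which is the inversion Marks was pointing at.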
Solid-state storage can help virtualized environments by limiting the negative impact of the "I/O blender effect," the flood of random I/O that results when multiple virtual machines multiplex their requests into shared storage at the same time, Marks explained. Virtual desktop infrastructure environments can cause even more problems because they produce more write operations than read operations, the opposite of physical desktop environments. Boot storms and antivirus operations make it even worse.
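The I/O blender effect Marks described can be sketched directly: each VM's stream is perfectly sequential on its own virtual disk, but interleaving a handful of streams leaves the array with almost no sequential locality. The VM count, request counts and address layout below are arbitrary illustration values.

```python
def vm_stream(vm_id, length=6, region=1000):
    # Sequential logical block addresses within this VM's disk region.
    return [vm_id * region + i for i in range(length)]

streams = [vm_stream(v) for v in range(4)]

# What the array sees from one VM alone vs. all VMs interleaved
# round-robin by the hypervisor (the "blender").
single = streams[0]
blended = [lba for group in zip(*streams) for lba in group]

def sequential_fraction(lbas):
    # Fraction of adjacent requests that hit consecutive addresses.
    pairs = list(zip(lbas, lbas[1:]))
    return sum(b == a + 1 for a, b in pairs) / len(pairs)

print(f"one VM:  {sequential_fraction(single):.0%} sequential")
print(f"blended: {sequential_fraction(blended):.0%} sequential")
```

A fully sequential stream per VM becomes an entirely non-sequential stream at the array, which is exactly the access pattern where flash's random-I/O advantage over spinning disk matters most.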
During his keynote, Toigo summed up the presenters' main recommendation. "Think," he said. "That's the motto of the day." That means vendors' recommendations to purchase more and more Tier 1 storage or all-flash-based systems shouldn't be accepted at face value. You should buy the right storage for your environment and applications, whether it's refurbished equipment, high-performance flash, Tier 1 disk-based arrays, tape archiving or cloud services.