Servers and networks have the pedal to the metal, but storage is struggling to keep up. As applications crave more and more performance, data storage vendors will need to find new solutions.
There’s a lot of buzz around application performance and its direct connection to data storage performance. Server virtualization, virtual desktop infrastructure (VDI) and business intelligence/big data are some of the key forces driving this need for speed. Servers and networks are getting faster, but disk drives and the storage systems built around them aren’t keeping up. There’s also an increasingly alarming price/performance imbalance, with the cost per I/O per second (IOPS) climbing on the storage side of the data center.
Application performance isn’t just a “special case” requirement. Certain applications need high performance the majority of the time. But there’s a much larger group of applications that need it only 10% or 20% of the time, and we often have to engineer our environments for those critical peaks.
IT professionals want to increase virtual-to-physical server ratios from 10:1 to 50:1, but storage is the limiting factor. Some organizations need hundreds or thousands of virtual desktops accessing a single pool of storage, but they’re limited by boot storms. And big data analytics drive the need for speed through an enormous number of transactions per second; there are solutions optimized to handle these workloads, but they come at a high price.
You could always increase the performance of storage, but just how much performance are you willing to pay for? To increase IOPS you add more disk drives, create wide stripes and implement short stroking. But that can be very expensive. Alternatively, you can just add lots and lots of solid-state drives (SSDs), but we’re talking big bucks again. And what’s the right balance of price, performance and capacity for your environment? If you don’t need lots of capacity, do you really want to buy lots of disk drives just to increase IOPS? However, if you require a substantial amount of capacity, then buying SSDs will be unattractive price-wise and may not be technically practical to implement.
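The trade-off described above comes down to simple arithmetic: a build must satisfy both an IOPS target and a capacity target, so you size for whichever is the bigger constraint. The sketch below illustrates that math with assumed, illustrative figures for drive IOPS, capacity and cost (not any vendor’s actual numbers):

```python
import math

# Illustrative per-drive figures (assumptions, not vendor specs):
HDD_IOPS, HDD_TB, HDD_COST = 180, 0.6, 400      # 15K RPM HDD
SSD_IOPS, SSD_TB, SSD_COST = 20000, 0.2, 2500   # enterprise SSD

def drives_needed(target_iops, target_tb, iops_per_drive, tb_per_drive):
    """A build must meet BOTH the IOPS and the capacity target,
    so take the larger of the two drive counts."""
    return max(math.ceil(target_iops / iops_per_drive),
               math.ceil(target_tb / tb_per_drive))

target_iops, target_tb = 50000, 20
hdds = drives_needed(target_iops, target_tb, HDD_IOPS, HDD_TB)
ssds = drives_needed(target_iops, target_tb, SSD_IOPS, SSD_TB)
print(f"HDD-only: {hdds} drives, ${hdds * HDD_COST:,}")  # 278 drives
print(f"SSD-only: {ssds} drives, ${ssds * SSD_COST:,}")  # 100 drives
```

With these assumed numbers the HDD build is driven entirely by IOPS (278 drives for a workload that needs only 34 drives’ worth of capacity), while the SSD build is driven entirely by capacity, at more than twice the cost. That’s the imbalance in a nutshell.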
By placing dense, fast memory inside servers, Fusion-io has been the big winner in terms of market buzz and its IPO so far. Yet the Fusion-io solution lacks capacity and high availability, and it’s an expensive, non-shareable resource. It may also be a concern that 90% of its revenue comes from just a handful of customers.
Storage system vendors have also seen the trend for more performance and nearly all have responded with SSD options. A few have automated tiering that can move data at a sub-LUN level between tiers, including Dell Compellent with Data Progression, EMC with FAST, Hitachi Data Systems with Hitachi Dynamic Tiering and Hewlett-Packard 3PAR with Adaptive Optimization. All these solutions typically have some page or extent of varying sizes they promote/demote based on activity/inactivity.
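The promote/demote logic these tiering products share can be boiled down to a few lines. The following is a minimal, generic sketch of activity-based sub-LUN tiering (my own simplification with assumed thresholds, not any vendor’s actual algorithm): each extent carries an access counter for the monitoring interval, and hot extents move to SSD while cold ones move back to HDD.

```python
from collections import Counter

# Assumed, illustrative thresholds (accesses per monitoring interval):
PROMOTE_THRESHOLD = 100
DEMOTE_THRESHOLD = 10

def retier(access_counts, current_tier):
    """Return a new extent -> tier map after one monitoring interval.
    Extents between the two thresholds stay where they are, which
    keeps data from ping-ponging between tiers."""
    new_tier = {}
    for extent, tier in current_tier.items():
        hits = access_counts.get(extent, 0)
        if hits >= PROMOTE_THRESHOLD:
            new_tier[extent] = "SSD"    # hot: promote
        elif hits <= DEMOTE_THRESHOLD:
            new_tier[extent] = "HDD"    # cold: demote
        else:
            new_tier[extent] = tier     # warm: leave in place
    return new_tier

tiers = {"ext0": "HDD", "ext1": "SSD", "ext2": "HDD"}
hits = Counter({"ext0": 500, "ext1": 3, "ext2": 50})
print(retier(hits, tiers))  # ext0 promoted, ext1 demoted, ext2 stays put
```

Real products differ mainly in the extent size, the monitoring window and how they weigh recency against frequency, but the promote/demote skeleton is the same.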
Xiotech has a unique approach with its Hybrid ISE product using Continuous Adaptive Data Placement (CADP) that creates a single pool of storage from SSDs and hard disk drives (HDDs). Instead of promoting and demoting data based on activity/inactivity, Xiotech monitors application performance and places data on SSD or HDD based on whether there will be an actual improvement perceivable to the user. The goal is to ensure that price, performance and capacity are in optimal balance.
There are also a number of notable startups, including Nimble Storage. Nimble is taking the world by storm with an iSCSI solution that has SSD and HDD, and leverages inline data compression to optimize capacity.
Additionally, there are pure-play SSD storage systems from companies like Nimbus Data Systems and Violin Memory. And solid-state stalwarts like Texas Memory Systems are revitalized because of the new attention to high-performance storage.
Potential customers are inundated with choices, and the various options come with incredible claims of IOPS and throughput performance. Hundreds of thousands and even millions of IOPS … and still affordable! But an old skeptic like me knows that performance depends on a number of factors. And besides, all those marketing numbers you’re getting showered with are always based on best-case scenarios.
What happens to performance when something goes wrong? What if a disk drive fails (and we’re not just talking HDDs; solid-state drives don’t spin but they can also fail)? What happens to performance when a controller fails? How is primary application performance impacted if there’s another operation such as mirroring running? How is performance impacted as capacity utilization increases? What is performance over time: one year, two years or three years after initial implementation? These are questions that are rarely asked, and when they are, they often trip up storage vendors.
Application performance is the hot new requirement and storage is the bottleneck. The imbalance in the data center is real and will only get worse if things continue as they are. Server and desktop virtualization, as well as the emergence of big data analytics as a major application, all highlight the performance disadvantage that’s inherent in disk-based storage systems. The good news is that there’s a ton of investment in trying to solve this problem. The bad news is that the number of options IT professionals will have to choose from will make their heads spin; and we all know how slow and error-prone that can be!
BIO: Tony Asaro is senior analyst and founder of Voices of IT.