Disk capacity has been the sexy specification the majority of us have latched onto, but it’s time to start thinking about performance and power consumption.
Back when he was at Xiotech (now XIO), Rob Peglar used to present a PowerPoint deck that included a slide depicting a butcher processing a large pile of ground beef. The unappealing image provided a memorable hook for Peglar’s point: for years, we’ve bought storage in a similar fashion, using the simple metric of dollars per pound.
Interviewing disk drive makers a few years ago, I learned a different but related truth about disk drives. Many industry insiders observed that their biggest sales came from "larger," rather than "faster" or "smarter." Customers saw bigger capacity as the improvement that mattered most: a 1 TB drive was better than a nimbler 250 GB drive, and a 2 TB drive yielded more sales than a fancy 500 GB flash/hard disk hybrid unit.
"Bigger is better" made a certain kind of sense, of course. Knowing virtually nothing about the data itself -- the contents of a given file, its business value, its criticality or its usage characteristics -- data storage administrators concerned themselves mainly with the simple problem of where they would find the elbow room to store it. And as the economy grew more challenging, vendors rewarded buyers with products that could store a lot of anonymous data cheaply, paying far less attention to improvements in drive smarts or transfer speeds.
Peglar thought that was shortsighted, a kind of race to the bottom in the disk drive market. His former employer reached the same conclusion, deciding to discontinue its capacity line in favor of products that served a different metric altogether: IOPS per watt. XIO’s storage blade, the Intelligent Storage Element or Hybrid ISE (hybrid because it uses flash solid-state storage to augment lower-capacity, faster-performing SAS disk, yielding a nominal 200,000 IOPS), isn’t aimed at the casual consumer looking for mass file storage, but at planners with other needs and metrics in mind. Specifically, the company seeks to deliver the extreme performance certain applications require at the lowest possible power consumption. I see this strategy as important.
The cost of energy continues to climb. According to an article published not long ago in USA Today, the cost of utility power had climbed by approximately 22% nationwide in less than 18 months. Moreover, in data center-heavy parts of the U.S. grid, demand for power was exceeding the availability of circuits, reflecting a delivery grid that was designed long before the information age. So, whether you think climate change is real and want to go “green,” or you’re simply confronted by nagging issues regarding energy expense and availability, “per watt” metrics are increasingly important in storage decision making.
It’s also an increasingly important dimension of efficient storage architecture. Clearly, we can’t keep throwing more spindles at workloads to improve IOPS, given the resulting power demand of such strategies. That’s why I have to chuckle when HP/3PAR carries on about having the fast hand in storage at 400,000+ IOPS. Short-stroking a lot of spindles does buy speed, but at a non-trivial expense in terms of power.
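To make the trade-off concrete, here is a back-of-the-envelope sketch of the IOPS-per-watt metric. All of the per-drive figures below (IOPS per spindle, watts per drive, enclosure wattage) are my own rough assumptions for the sake of the arithmetic, not published specs for any vendor's product.

```python
# Illustrative IOPS-per-watt comparison. Every figure below is a
# hypothetical assumption chosen for illustration, not a vendor spec.

def iops_per_watt(total_iops: float, total_watts: float) -> float:
    """Efficiency metric: I/O operations per second per watt consumed."""
    return total_iops / total_watts

# A big spindle farm: lots of 15K RPM drives, each contributing a
# modest number of IOPS at a steady power draw (assumed ~15 W/drive).
spindles = 1900
spindle_array = iops_per_watt(total_iops=spindles * 220,   # ~220 IOPS/drive
                              total_watts=spindles * 15)

# A hybrid flash/SAS blade: far fewer devices, with flash absorbing
# the hot I/O (assumed 200,000 IOPS at ~500 W for the enclosure).
hybrid = iops_per_watt(total_iops=200_000, total_watts=500)

print(f"spindle farm: {spindle_array:.1f} IOPS/W")
print(f"hybrid blade: {hybrid:.1f} IOPS/W")
```

Even granting the spindle farm a higher raw IOPS total, the hybrid configuration comes out more than an order of magnitude ahead on the per-watt metric, which is the point of the argument above.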
A better approach is to augment disk with flash, moving hot data into flash temporarily until its access profile cools, then destaging it back to disk. There are fewer drives to power in such a design. Another approach is to virtualize all disk and realize fast I/O out of the DRAM of the storage virtualization host, à la DataCore Software’s SANsymphony-V. This involves a similar kind of “spoofing” to what you see every day in, say, the products of leading network-attached storage (NAS) vendors whose back-end storage is actually quite sluggish. To buy speed, vendors like NetApp simply use lots of memory to acknowledge writes before they’re actually made, queuing them up on the back end to conceal how slow their RAID/WAFL system actually is.
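The promote-when-hot, destage-when-cool policy described above can be sketched in a few lines. This is a toy model, not any vendor's algorithm: the tier size, the hotness threshold, and the halving-based decay are all arbitrary assumptions chosen to make the mechanism visible.

```python
# Toy sketch of flash tiering: blocks that are accessed often enough
# get promoted to a small "flash" tier; when their access profile
# cools, they are destaged back to "disk". All thresholds are
# illustrative assumptions.

from collections import defaultdict

class TinyTier:
    def __init__(self, flash_slots=4, hot_threshold=3):
        self.flash = set()            # block IDs currently on flash
        self.hits = defaultdict(int)  # recent access counts per block
        self.flash_slots = flash_slots
        self.hot_threshold = hot_threshold

    def access(self, block: int) -> str:
        """Record an access; report which tier served it."""
        self.hits[block] += 1
        if block in self.flash:
            return "flash"
        # Promote once the block proves itself hot and a slot is free.
        if (self.hits[block] >= self.hot_threshold
                and len(self.flash) < self.flash_slots):
            self.flash.add(block)
        return "disk"

    def cool_down(self):
        """Periodic sweep: decay counts, destage blocks that went cold."""
        for b in list(self.hits):
            self.hits[b] //= 2
            if b in self.flash and self.hits[b] < self.hot_threshold:
                self.flash.discard(b)  # a real system writes back to disk here

t = TinyTier()
for _ in range(3):
    t.access(7)        # block 7 heats up and gets promoted
print(t.access(7))     # now served from flash
t.cool_down()
print(t.access(7))     # cooled off: destaged, served from disk again
```

The power argument falls out of the structure: only the small flash tier needs to be fast, so the bulk of the capacity can sit on fewer, slower, lower-wattage drives.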
IOPS per watt has an underdiscussed corollary in capacity per watt, a second important metric. Back when green was in fashion, storage vendors encouraged firms to green their storage by “re-driving arrays” -- pulling low-capacity drives and replacing them with higher capacity drives to get more capacity with the same power consumption. Aside from this usually requiring a “forklift upgrade” of the overall unit to accommodate the new drives (a point that vendors conveniently overlooked in their marketing materials evangelizing the strategy), the basic idea made sense.
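The arithmetic behind the re-driving pitch is simple enough to show directly. The drive capacities and wattages here are rough, assumed figures for illustration; the point is only that capacity per watt scales with the swap while the power bill does not.

```python
# Capacity-per-watt arithmetic behind "re-driving" an array.
# Capacities and wattages are rough assumed figures, not specs.

def tb_per_watt(capacity_tb: float, watts: float) -> float:
    return capacity_tb / watts

before = tb_per_watt(capacity_tb=1.0, watts=10.0)  # older 1 TB drive, ~10 W
after = tb_per_watt(capacity_tb=4.0, watts=10.0)   # 4 TB drive in the same ~10 W slot

print(f"before: {before:.2f} TB/W, after: {after:.2f} TB/W "
      f"({after / before:.0f}x the capacity at the same power draw)")
```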
However, it really comes to fruition when taken to its logical conclusion: what I call “NAS on steroids.” A slew of announcements in late 2011 described a new cobble of storage combining a tape library, perhaps a front-end disk cache and the Linear Tape File System (LTFS) instantiated on a server that could be mounted as a file share using a network file system. This design delivers high-density file storage (in the many tens of petabytes) with extremely low power requirements in the space of a single raised-floor tile, depending on the library. IBM, Spectra Logic and others are pushing the library parts of the kit, while Crossroads Systems has jumped into the limelight once again with its StrongBox appliance tricked out to deliver the disk cache, file system and NAS mount.
NAS on steroids is the right IOPS-per-watt solution for file repositories containing data with low rates of re-reference. Who cares if accessing a rarely requested document entails the same delay as the World Wide Wait? How long did it take you to download and read this file, and when might you reference it again?
IOPS per watt is an increasingly meaningful metric for those who want to build storage that fits both business needs and milieu realities. Power ain’t getting cheaper.
BIO: Jon William Toigo is a 30-year IT veteran and is CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.
Post-publication correction: I mistakenly asserted that HP/3PAR’s 450,000 IOPS record on the Storage Performance Council’s SPC Benchmark was achieved by short-stroking disk. I was informed this wasn’t the case, as the workload was spread across 1,900 drives that weren’t short stroking. While the rig does support short stroking, the technique wasn’t used in this test.