It’s almost taken for granted that “new” storage tiering regimens require tier-zero SSDs, but does that add up to a better way to tier storage or a better way to sell more storage?

When I learned IBM was sponsoring a tweetchat the other day, featuring the smart storage guys at Wikibon, I had to crash the party. I suspected there would be a lot of shilling for IBM products (there wasn’t) and for flash solid-state drive (SSD) devices (there was), but I wanted to hear the latest value case being advanced to justify the technology. (You can read the tweets in chronological order here.)

Half joking, I asked what the industry was calling “storage tiering” this week, given the propensity for drift in the meaning of technical terms. It caused me some consternation when a panelist asked, “What do you think it means?”

I offered that traditional tiering involved the movement of data between different media based on the data’s re-reference rate and the cost of the media. It was strictly a storage allocation play as opposed to archive, which involves a much more granular understanding of data and its business context, and aims at storage utilization efficiency. One panelist agreed and offered that tiering was “matching data and device characteristics to minimize costs and maximize performance.” Another said, “Metrics that decide tiering [are] performance, availability, shareability, cost to purchase, cost to maintain and service.” That sounded like what IBM was saying at mainframer school years ago: so far, so good.
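To put that definition in concrete terms, here's a minimal sketch, assuming invented tier names, costs and thresholds, of what "matching data and device characteristics" looks like as a policy: place data on the cheapest medium whose performance its re-reference rate still justifies. None of this comes from the chat; it's purely illustrative.

```python
# Illustrative only: hypothetical tiers and thresholds, not any vendor's policy engine.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_gb: float             # assumed relative cost, not real pricing
    min_rereference_per_day: int   # minimum access rate that justifies the cost

# Ordered from most to least expensive.
TIERS = [
    Tier("flash SSD", 2.00, 100),
    Tier("fast disk", 0.50, 10),
    Tier("capacity disk", 0.10, 0),
]

def place(rereference_per_day: int) -> Tier:
    """Pick the cheapest tier whose cost the data's re-reference rate still justifies."""
    for tier in TIERS:
        if rereference_per_day >= tier.min_rereference_per_day:
            return tier
    return TIERS[-1]

print(place(500).name)   # hot data -> flash SSD
print(place(25).name)    # warm data -> fast disk
print(place(1).name)     # cold data -> capacity disk
```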

What surprised me was the panel’s insistence that flash SSD changes everything; that it establishes a new tier and will shortly displace high-performance/low-capacity disk as the first target for data writes. To my mind, placing memory-based storage into the traditional model didn’t alter classic tiering at all. Old-timers know what I’m talking about: The old DFSMS/DFHSM mainframe storage paradigm had memory as the first storage tier. We were told to move data quickly out of memory (a scarce and expensive resource) and onto DASD (for the Millennials, that’s another name for a disk array). Finally, we were to move data out to tape (again, for newbies, that’s a device that streams data to a mylar tape medium, offering high capacity at lowish cost). Given the 40-year pedigree of this model for tiering, was flash SSD really any sort of game changer?
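For readers who never ran DFHSM, that memory-to-DASD-to-tape ladder can be sketched as a simple age-based migration rule. The idle-time thresholds below are hypothetical stand-ins for the management-class parameters a real installation would actually set.

```python
# A toy sketch of the old memory -> DASD -> tape ladder; thresholds are invented for illustration.
HIERARCHY = [
    ("memory",  1),    # migrate out after ~1 day without re-reference (assumed)
    ("DASD",   30),    # migrate to tape after ~30 days idle (assumed)
    ("tape", None),    # final resting place until recalled or expired
]

def current_tier(days_since_last_reference: int) -> str:
    """Walk the ladder: data idle past a tier's threshold falls to the next tier down."""
    for name, idle_limit in HIERARCHY:
        if idle_limit is None or days_since_last_reference <= idle_limit:
            return name
    return HIERARCHY[-1][0]

print(current_tier(0))    # memory
print(current_tier(7))    # DASD
print(current_tier(90))   # tape
```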

Yes, said one panelist. Flash SSD is being introduced to provide faster response for tier-one data access. Apparently, we all need faster access to data, especially for data stuck behind the “I/O blender of a server hypervisor,” one tweet said. That vendors realize better margins from the sale of flash SSD products has nothing to do with it, I pondered without tweeting.

A panelist said we needed to build monolithic storage in which flash SSD, fast disk and capacity disk were all in one box to simplify the automation of tiering. Such an approach would show up in labor cost savings, assuming smart controller software could detect when data needed to be moved and execute that function automatically. A huge time saver for storage admins, they argued.

That moment crystallized something for me: If the traditional storage model is too labor intensive, and requires a monolithic solution, that would explain the rig architectures I’ve seen in the flagship products peddled by big-box vendors. I wonder, however, whether this is the best approach from either a design or financial efficiency standpoint.

Vendors have been cobbling together arrays that combine flash SSD and two tiers of disk in one crate. Then they’re adding “smart” tiering software to array controllers to move data between the memory, the fast disk (sometimes back to memory if certain data is requested a lot) and then to the capacity disk. These autotiering arrays are sold as labor cost reducers, saving on storage admin time.
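The promote/demote pass those "smart" controllers run can be sketched in a few lines. Everything here (the sampling window, the thresholds, the extent granularity) is an assumed simplification for illustration, not any vendor's actual firmware logic.

```python
# Hypothetical autotiering pass: promote hot extents toward flash, demote cold ones toward capacity disk.
TIER_ORDER = ["capacity disk", "fast disk", "flash SSD"]  # coldest to hottest

# Assumed per-extent I/O counts over the last sampling window, and current placements.
io_counts = {"ext-01": 4200, "ext-02": 150, "ext-03": 3}
placement = {"ext-01": "fast disk", "ext-02": "flash SSD", "ext-03": "fast disk"}

PROMOTE_ABOVE = 1000   # assumed threshold
DEMOTE_BELOW = 50      # assumed threshold

def retier(extent: str) -> str:
    """Move an extent one tier up or down based on its recent I/O activity."""
    idx = TIER_ORDER.index(placement[extent])
    if io_counts[extent] > PROMOTE_ABOVE and idx < len(TIER_ORDER) - 1:
        idx += 1
    elif io_counts[extent] < DEMOTE_BELOW and idx > 0:
        idx -= 1
    placement[extent] = TIER_ORDER[idx]
    return placement[extent]

for ext in sorted(io_counts):
    print(ext, "->", retier(ext))
# ext-01 -> flash SSD, ext-02 -> flash SSD (stays put), ext-03 -> capacity disk
```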

But blurring the lines between performance and capacity media in a single box makes sense to vendors for another reason: They charge a lot more for the performance and capacity media (and for the software license) to compensate themselves for their effort at delivering pre-integrated and smarter tiered storage. I’ve yet to see an ROI analysis that compares the cost savings accrued to this approach vs. manual data movements across less smart storage kits.

Also missing from most of the monolithic tiering approaches is that last destination for old data: tape. There was a brief discussion of tape in this chat, mainly because IBM is part of the gang hoping to reinvigorate tape with the Linear Tape File System (LTFS). I happen to like LTFS in its TapeNAS configuration, but I worry that it will be years before big-box movers complete rigs that use the technology. For the record, Crossroads Systems’ StrongBox, an LTFS TapeNAS head, is already pointing the way.

In the end, I found myself asking, “What problem are we trying to solve with autotiering from flash to disk, and maybe someday to tape?” If it’s just about allocating capacity, then like dedupe and compression, it’s only a holding action in a war with data growth and retention that we’re doomed to lose. Tiers of storage might just as well be a recipe for tears of storage.

BIO: Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.

This was first published in September 2012
