This article can also be found in the Premium Editorial Download "Storage magazine: Mobile cloud storage best practices."
What surprised me was the panel’s insistence that flash SSD changes everything; that it establishes a new tier and will shortly displace high-performance/low-capacity disk as the first target for data writes. To my mind, placing memory-based storage into the traditional model didn’t alter classic tiering at all. Old-timers know what I’m talking about: The old DFSMS/DFHSM mainframe storage paradigm had memory as the first storage tier. We were told to move data quickly out of memory (a scarce and expensive resource) and onto DASD (for the Millennials, that’s another name for a disk array). Finally, we were to move data out to tape (again, for newbies, that’s a storage device built around a streaming Mylar medium offering high capacity at lowish cost). Given the 40-year pedigree of this model for tiering, was flash SSD really any sort of game changer?
Yes, said one panelist. Flash SSD is being introduced to provide faster response for tier-one data access. Apparently, we all need faster access to data, especially for data stuck behind the “I/O blender of a server hypervisor,” one tweet said. That vendors realize better margins from the sale of flash SSD products has nothing to do with it, I pondered without tweeting.
A panelist said we needed to build monolithic storage in which flash SSD, fast disk and capacity disk were all in one box to simplify the automation of tiering. Such an approach would show up in labor cost savings, assuming smart controller software could handle the data movement on its own.
That moment crystallized something for me: If the traditional storage model is too labor intensive and therefore requires a monolithic solution, that would explain the rig architectures I’ve seen in the flagship products peddled by big-box vendors. I wonder, however, whether this is the best approach from either a design or a financial efficiency standpoint.
Vendors have been cobbling together arrays that combine flash SSD and two tiers of disk in one crate. Then they’re adding “smart” tiering software to array controllers to move data among the flash memory, the fast disk (sometimes back to memory if certain data is requested a lot) and the capacity disk. These autotiering arrays are sold as labor cost reducers, saving on storage admin time.
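To make the mechanism concrete: the data movement these controllers automate is, at heart, a periodic "heat" sweep. Here’s a minimal toy sketch of that idea in Python. The tier names, thresholds and class are my own illustrative assumptions, not any vendor’s actual algorithm.

```python
# Toy sketch of an access-frequency ("heat") autotiering policy.
# Tier names and thresholds are illustrative assumptions only.
from collections import Counter

TIERS = ["flash_ssd", "fast_disk", "capacity_disk"]  # hot -> cold
PROMOTE_AT = 10  # accesses per interval that justify a hotter tier
DEMOTE_AT = 2    # accesses per interval below which data cools off

class AutoTierController:
    def __init__(self):
        self.placement = {}    # block id -> current tier name
        self.heat = Counter()  # accesses observed this interval

    def write(self, block):
        # New writes land on the hottest tier first
        self.placement[block] = TIERS[0]

    def read(self, block):
        self.heat[block] += 1

    def rebalance(self):
        # Periodic sweep: promote hot blocks, demote cold ones one tier at a time
        for block, tier in self.placement.items():
            i = TIERS.index(tier)
            hits = self.heat[block]
            if hits >= PROMOTE_AT and i > 0:
                self.placement[block] = TIERS[i - 1]
            elif hits <= DEMOTE_AT and i < len(TIERS) - 1:
                self.placement[block] = TIERS[i + 1]
        self.heat.clear()  # start a fresh measurement interval
```

So a block that keeps getting read stays on (or climbs back to) flash, while an untouched block drifts down a tier per sweep until it reaches capacity disk. Real controllers add migration cost accounting and hysteresis, but the labor-savings pitch rests on exactly this kind of loop running without an admin.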
But blurring the lines between performance and capacity media in a single box makes sense to vendors for another reason: They charge a lot more for the performance and capacity media (and for the software license) to compensate themselves for their effort at delivering pre-integrated, smarter tiered storage. I’ve yet to see an ROI analysis that compares the cost savings accrued to this approach vs. manual data movements across less smart storage kits.
Also missing from most of the monolithic tiering approaches is that last destination for old data: tape. There was a brief discussion of tape in this chat, mainly because IBM is part of the gang hoping to reinvigorate tape with the Linear Tape File System (LTFS). I happen to like LTFS in its TapeNAS configuration, but I worry that it will be years before big-box movers complete rigs that use the technology. For the record, Crossroads Systems’ StrongBox, an LTFS TapeNAS head, is already pointing the way.
In the end, I found myself asking, “What problem are we trying to solve with autotiering from flash to disk, and maybe someday to tape?” If it’s just about allocating capacity, then like dedupe and compression, it’s only a holding action in a war with data growth and retention that we’re doomed to lose. Tiers of storage might just as well be a recipe for tears of storage.
BIO: Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.
This was first published in September 2012