The phrase "information lifecycle management" seemed to serve as a cure for insomnia when it was first introduced. Even its acronym -- ILM -- failed to catch on in an industry that loves acronyms. And saying "ILM" to a storage manager produced glazed eyes, a stony silence or both.
But the concept of moving data to the most appropriate type of storage based on its current usefulness (or age) still sounds like an idea worth waking up to, doesn't it? Everybody's swimming against a rising tide of data with fewer and fewer dollars to stay afloat, so why blow bucks on expensive storage for data with little or no value?
Most shops do care and are taking a hard look at where they put their data. You don't hear a lot of "ILM" chatter but, hey, that's exactly what it is. When the idea of ILM rolled around to open systems -- hijacked from the mainframe world's hierarchical storage management (HSM) -- more people seemed to be hung up on determining the value of the data rather than its ideal location.
As a result, data classification became a new catchphrase, and a handful of companies with classification technologies sprang up. The premise was that you needed to know more about a piece of data than when it was created, when it was last changed or how big it was. All of that can be useful information, but you need some real intelligence about that data if you have any hope of determining its proper disposition.
People say the more you know, the better. So why not crack open your data files and see what's inside? After all, you can't tell if data should hang around on premium platters or be shelved to some near-line system or the equivalent of storage Siberia if you don't know its true worth. But to know all that, you would need to get your business units involved, which is about the time ILM gets laid to rest.
But you can't keep a good idea down, and ILM is back and being taken more seriously than ever. Saying "ILM" in public is still a no-no, but whatever you call it -- storage tiering or simply smart storage management -- it's back. What's different this time is that we're focused on the problem. We're looking at location, the placement of data, much more closely. We've essentially stopped looking for a perfect solution long enough to consider what might be good enough or at least expedient.
But that explanation is a little too simple; ILM is back because we have more choices about where to put things than we did before. Solid-state storage might be the key catalyst for ILM's renewal. When solid state began to trickle into enterprise storage systems, the debate was over how to determine what applications, if any, were worth the incredibly high price of flash. Solid-staters said forget $/GB and think in terms of dollars per I/O, which added a new dimension to the argument. Eventually, someone realized that rather than parking data on solid-state storage, we should just let it hang out there for as long as it's needed.
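The $/GB vs. dollars-per-I/O argument is easy to see with a back-of-the-envelope comparison. The prices and performance figures below are hypothetical round numbers for illustration only, not any vendor's actual pricing:

```python
# Hypothetical, illustrative figures only (not real vendor pricing):
# one enterprise SSD vs. one 15K RPM hard drive, circa 2010.
flash = {"cost": 4000.0, "gb": 100, "iops": 20000}
disk = {"cost": 400.0, "gb": 600, "iops": 200}

for name, d in (("flash", flash), ("disk", disk)):
    per_gb = d["cost"] / d["gb"]      # classic capacity metric
    per_iops = d["cost"] / d["iops"]  # the "solid-stater" metric
    print(f"{name}: ${per_gb:.2f}/GB, ${per_iops:.3f} per IOPS")
```

On these assumed numbers, flash loses badly on $/GB but wins just as badly on dollars per I/O, which is exactly why the metric you pick decides the argument.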
So the idea of moving data dynamically and automatically came into play. Forget about cracking files or indexing content; let's just see how often and how fast the data is needed. Not every company has the app or the bucks to add a pretty expensive tier at the top of the storage triangle, but the same principles could be used to move data around from, say, SAS to SATA. There might not be sophisticated data classification going on behind the curtain, but it's a practical solution.
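The mechanics behind that curtain can be sketched in a few lines. This is a minimal, hypothetical illustration of frequency-based placement (the tier names and thresholds are assumptions, not any vendor's actual policy):

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """A chunk of data tracked by a hypothetical tiering engine."""
    name: str
    hits: int = 0       # I/O count in the current sampling window
    tier: str = "sata"  # default to the cheapest tier

def place(extents, hot=100, warm=10):
    """Assign each extent a tier from its access count alone --
    no content indexing, no file cracking. Thresholds are arbitrary."""
    for e in extents:
        if e.hits >= hot:
            e.tier = "ssd"
        elif e.hits >= warm:
            e.tier = "sas"
        else:
            e.tier = "sata"
    return extents

data = [Extent("db-index", hits=500), Extent("logs", hits=25), Extent("archive", hits=1)]
place(data)
print([(e.name, e.tier) for e in data])
# -> [('db-index', 'ssd'), ('logs', 'sas'), ('archive', 'sata')]
```

Real systems measure at the sub-volume level and rerun the placement pass periodically, but the principle is the same: let access frequency, not content, decide where the data lives.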
Cloud storage, being taken more and more seriously by enterprises every day, tosses yet another tier into the mix. And clever startups like StorSimple and Nasuni have built appliances that almost seamlessly integrate the cloud with data center storage.
And now that LTO-5 is here, tape is suddenly cool again. LTO-5's 3 TB capacity and 280 MBps throughput (both with 2:1 compression) definitely reinforce tape's status as a bona fide storage tier.
If your storage vendor doesn't offer some form of automated data movement, ask when it will. Just as thin provisioning is already entrenched in most enterprise storage systems, and data reduction is moving along the same route, automated tiering will become a basic part of a storage vendor's system management set. If it isn't, then you might want to consider another vendor.
BIO: Rich Castagna (firstname.lastname@example.org) is editorial director of the Storage Media Group.