Published: 12 Mar 2004
Yes, there is something really interesting about information life cycle management (ILM), despite what you've heard to date. This year's model of ILM is all buzz, and not much more. Now let's get to the good part.
There are actually two types of ILM to consider: the infrastructure perspective (let's call that data life cycle management, since I haven't thought up a better term yet), and the application information management perspective.
Data life cycle management is the buzz today. Now that we have multiple classes of storage devices available to house our data, we can put the right data--with the right economic justification--on the right storage device, maximizing our economic advantage for as long as it's stored there. This is very cool--because we tend to leave stuff wherever we originally put it--and it's also the easiest of these problems to solve (not that we have solved it yet, mind you).
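The placement idea above can be sketched in a few lines. This is a minimal, hypothetical policy: the tier names, cost figures and age thresholds are my own illustrative assumptions, not anything from a real product.

```python
# Illustrative sketch: pick the cheapest storage tier whose access
# profile fits how recently the data was touched. All tiers, costs
# and thresholds here are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    cost_per_gb: float  # illustrative monthly cost in dollars


TIERS = [
    Tier("expensive-disk", 5.00),
    Tier("midrange-disk", 2.00),
    Tier("bulk-disk", 0.50),
    Tier("tape", 0.05),
]


def place(days_since_last_access: int) -> Tier:
    """Route data to a tier based on a simple age-of-access policy."""
    if days_since_last_access < 7:
        return TIERS[0]
    if days_since_last_access < 30:
        return TIERS[1]
    if days_since_last_access < 365:
        return TIERS[2]
    return TIERS[3]


print(place(3).name)    # recently touched data stays on fast disk
print(place(400).name)  # stale data migrates to tape
```

The point is not the thresholds themselves but that the decision is mechanical once you know how the data is used--which is exactly why this is the easy half of the problem.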
The hard part is creating, maintaining and deriving value from the data and information sitting out on storage, wherever it sits in the horizontal spectrum of expensive disk, cheap disk, medium disk, bulk disk, disk-like tape and tape-like tape. Doing that requires application access. Storing stuff you never use has limited value; the real mission is storing the stuff you will use in the most economical fashion available, and then accessing it to meet business objectives.
That's where ILM gets interesting. Ninety percent of the digital content created in the high-tech world has been transactional. Oracle owns the storage management business for this part of our lives. Excel manages our assets and Oracle manages our data. Anyone who thinks differently is fooling themselves.
Oracle works fine for managing relatively small databases of records--keeping track of what is where. It used to do this all on its own; now it does it via a file system to help keep things straight.
What Oracle sucks at is managing, mining and using huge data repositories. That's where a change is coming with object-based storage. That's a fancy term for "a better way to index, access and manage a bazillion objects" such as files, blocks or whatever. The result is that an object access layer can keep track of not only the data, but also the metadata--data about the data. If that layer is applied to storage (as in EMC's Centera and products from companies such as Avamar, Exagrid, Netezza and others), you get immediate benefits.
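To make the "data about the data" idea concrete, here is a toy object store that keeps metadata alongside each object, so objects can be found by attribute instead of by path. Everything here--the class, methods and field names--is an illustrative sketch, not any vendor's API.

```python
# Toy object store: content-addressed objects plus a metadata index.
# Illustrative only; not modeled on Centera or any real product.
import hashlib


class ObjectStore:
    def __init__(self):
        self._objects = {}   # object id -> raw bytes
        self._metadata = {}  # object id -> dict of attributes

    def put(self, data: bytes, **metadata) -> str:
        oid = hashlib.sha256(data).hexdigest()  # content-addressed id
        self._objects[oid] = data
        self._metadata[oid] = metadata
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid]

    def find(self, **query):
        """Return ids of objects whose metadata matches every query key."""
        return [oid for oid, md in self._metadata.items()
                if all(md.get(k) == v for k, v in query.items())]


store = ObjectStore()
report_id = store.put(b"quarterly report", owner="finance", year=2003)
store.put(b"holiday photo", owner="marketing", year=2004)
print(store.find(owner="finance"))  # finds the report by attribute, not location
```

Once every object carries its own attributes, "where is it stored?" becomes an implementation detail and "what is it?" becomes the query.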
But what if we had one object-access layer across the entire storage infrastructure? Then our applications layer wouldn't have to care about where things are, which is an unnatural act for applications, anyway. Data could then live anywhere in the continuum of infrastructure, and applications would be able to get to it easily.
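A sketch of that single access layer: applications ask for an object by id, and the layer hides which backend holds it--so data can migrate between tiers without the application noticing. Again, all names here are illustrative assumptions.

```python
# Illustrative sketch: one access layer in front of several storage
# backends. Applications call get(oid); the layer tracks location.

class Backend:
    """A stand-in for one class of storage (fast disk, tape, etc.)."""
    def __init__(self, name: str):
        self.name = name
        self._data = {}

    def put(self, oid, data):
        self._data[oid] = data

    def get(self, oid):
        return self._data[oid]


class AccessLayer:
    def __init__(self, backends: dict):
        self._backends = backends
        self._location = {}  # oid -> backend; hidden from callers

    def put(self, oid, data, tier):
        backend = self._backends[tier]
        backend.put(oid, data)
        self._location[oid] = backend

    def get(self, oid):
        # The caller never sees which backend answers.
        return self._location[oid].get(oid)

    def migrate(self, oid, tier):
        """Move an object between tiers; callers keep the same id."""
        data = self.get(oid)
        self.put(oid, data, tier)


layer = AccessLayer({"fast": Backend("fast-disk"),
                     "cheap": Backend("tape")})
layer.put("invoice-42", b"invoice data", tier="fast")
layer.migrate("invoice-42", tier="cheap")
print(layer.get("invoice-42"))  # same call works after the move
```

That indirection--one namespace over many tiers--is what lets the infrastructure optimize placement while the application stays blissfully ignorant.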
Permabit's Permeon is a software layer that can sit on a server, in the array or in the network; but I think the value is on a server above the storage itself. I think Permabit--or a company like Permabit--can revolutionize the database business. Over the next five years, 60% of the data created will be non-transactional, "reference" data with entirely different attribute requirements from traditional databases. If it is stored in a way that ties back to the application, not only would you have four times the data you would otherwise have under management, but possibly billions of objects from which to derive value. The potential benefits of that will force a new way of doing things. Now that's an interesting story. Of course, I think bigger than most....