This article can also be found in the Premium Editorial Download "Storage magazine: Overview of top tape backup options for midrange systems and networking segments."
When management tools require managers to do more work, they don't get used. You need automated tools.
For about a year and a half I've been yelling about the virtues of storage resource management (SRM) software on every pulpit I could find. Seemed to me, it was a no-brainer. How could IT people not see the value of a tool that told them what they had, who was using it or who was creating it? The reality is the total market for SRM tools last year (2001) was under $100 million, according to my estimate. And potentially it's way under, depending on what you include in the category. This year? I think it may reach $150 to $200 million, but not the gazillions that I, along with many others, predicted.
I believe that this type of function is absolutely necessary, and for it to become broadly implemented, it will need the next iteration of the technology - automated resource management (ARM). Today's SRM is really a discovery and reporting tool - but telling someone they're screwed up means they have to go do something to fix it. In this day of overworked people, that to me is the problem. ARM software will not only discover the assets and the associated correlations, but will also do something about it. That's value that's hard to argue with.
At a recent trade show, I was rambling on when a gentleman raised his hand and asked whether these tools would require him to put people and resources on a project just to research, buy and implement them.
I don't think IT folks are against investigating new technologies. Nor do I think they are entirely against buying and implementing those technologies. What the industry has to figure out is how to deliver the benefits of those technologies without impacting day-to-day operations - or fire drills, depending on which end of the projects you are on. When SRM becomes ARM, users will have the ability to create a set of policies once, and let the tools do the heavy lifting from then on.
Shouldn't the application be the one that decides it needs more of something? Shouldn't the tools decide where to put things based on the attributes associated with the data itself? Shouldn't those same tools provide a mechanism for IT to create sound storage policies and have those policies automatically adhered to?
The issue is that IT is dynamic. If it were static, this would all be pretty easy. When you dig into this problem, policy and process are really the underlying issues. ARM should be collecting and analyzing data, and then taking action based on your policies and best practices. While this issue deserves its own column, suffice it to say that, as IT professionals, the first thing to recognize is that most likely your storage policy is "that's the way we've always done it." Just because it was successful for you in 1992 doesn't mean it's effective today.
The CIO cares about SLAs and insurance. The SLAs are the deliverables they promise to the business. The insurance is how to mitigate the potential disaster when one or two key people leave. That filters down to IT itself, where policy becomes the way IT attempts to adhere to the aforementioned SLAs.
For example, the CIO says, "We need to be able to restore anything and everything, under any circumstances, within 24 hours." That's the SLA. So IT creates policies - typically ad hoc, and never centralized - that say "this class of user/application will be put on X class of storage, backed up by Y backup software under Z schedule." Then they invoke some Perl script to make it happen. If the guy who wrote the Perl script gets hit by a bus, that script tends to remain in effect ad infinitum - whether or not things change over time. Another SLA means another policy and another Perl script. You can see how this becomes a problem.
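To make the problem concrete, here's a minimal sketch (in Python rather than Perl, purely for illustration) of the kind of ad hoc, hard-coded script described above. Every name, tier and schedule is a made-up example, not a real policy or product.

```python
# Hypothetical example of an ad hoc backup "policy" baked into a script.
# All application classes, storage tiers, tools and schedules are invented.

BACKUP_JOBS = [
    # (user/app class, storage class, backup software, schedule)
    ("finance-db",  "tier-1", "backup-tool-Y", "daily 02:00"),
    ("engineering", "tier-2", "backup-tool-Y", "nightly 23:00"),
    ("home-dirs",   "tier-3", "tape-archive",  "weekly Sun"),
]

def run_backups(jobs):
    """Describe each hard-coded job. Note that nothing here records the
    SLA the script was written to satisfy - so when the SLA changes,
    the script silently keeps enforcing yesterday's policy."""
    return [
        f"{app}: {software} -> {storage} ({schedule})"
        for app, storage, software, schedule in jobs
    ]

for line in run_backups(BACKUP_JOBS):
    print(line)
```

The brittleness is the point: the policy lives only in the code (and in the head of whoever wrote it), with no central place where it can be reviewed or changed.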
Ultimately, the right way is to have granular policies in a central place that allow flexibility for varying lines of business and different data values at different points in time - that can be consistently adhered to by all of the IT world.
We want our systems to use some of those billions of cycles to determine what is best put where, and when. Then, we want to automate the actions required to carry out the policy - or, when reality deviates from the policy, bring that deviation to our attention. ARM becomes the intelligent data mover or action taker, based on what's going on in the real world and how those results compare to our policies and procedures. Creating consistent procedures is the key to insurance - if different people do things the same way all the time, that's a huge burden lifted off the organization.
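The ARM idea above can be sketched in a few lines: policies live in one central place, the tool compares observed state against them, and it either acts or flags the deviation. This is an illustrative toy, not any real product's API; all data classes, tiers and thresholds are assumptions.

```python
# Toy sketch of centralized policy checking. All values are invented.

POLICIES = {
    # data class -> required storage tier and max hours between backups
    "business-critical": {"tier": 1, "backup_every_h": 4},
    "departmental":      {"tier": 2, "backup_every_h": 24},
    "archive":           {"tier": 3, "backup_every_h": 168},
}

def check_adherence(observed):
    """Compare observed state against central policy and return the
    deviations (dataset, suggested action). A real ARM tool would then
    act on these automatically, per the policy."""
    deviations = []
    for name, state in observed.items():
        policy = POLICIES[state["data_class"]]
        if state["tier"] != policy["tier"]:
            deviations.append((name, f"wrong tier: move to tier {policy['tier']}"))
        if state["hours_since_backup"] > policy["backup_every_h"]:
            deviations.append((name, "backup overdue: trigger backup"))
    return deviations

observed = {
    "orders-db": {"data_class": "business-critical", "tier": 2,
                  "hours_since_backup": 6},
}
for name, action in check_adherence(observed):
    print(name, "->", action)
```

The design point is that the policy table is the single source of truth: change the SLA in one place and every check (and automated action) follows, instead of hunting down the Perl script that encodes the old rule.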
Anyhow, I'm thinking that ARM will be a far more attractive set of products and services. Traditional SRM will become a standard feature set in enterprise storage management frameworks, as will "dumb" network mapping. Automated action-based products are the ones that will get the money.
And now for something completely different ...
San Jose, CA-based Nishan is finally getting some solid traction. One of my favorite politically incorrect companies is really starting to get some momentum with long-distance storage over IP. I've been tracking a couple of recent installations where customers were able to deploy disaster recovery SANs over great distances cheaply.
Get ready to hear a lot from Netezza. These guys have figured out how to take relatively commodity hardware and perform complex business intelligence queries about a zillion times faster than a much more expensive Sun E10K/Oracle combination.
Z-force has taken a unique twist on NAS - they created a switch that in essence virtualizes all the NAS boxes (of any kind) behind it. That allows users to keep all their existing NAS boxes, and add more commodity NAS boxes on the back end, while the switches load balance and do all the heavy lifting. Très cool.
This was first published in July 2002