What we found is that not many storage managers are even in the early days of implementing virtualization. In fact,...
not many of you even use the word virtualization. Richard Scannell of GlassHouse Technologies, who contributes to our Integration column, told me recently that of the last 100 IT managers he's spoken to, not one of them used the term virtualization. It's a buzzword in the worst sense - a word that hides a lot of difficult problems under one lazy umbrella.
Don't get me wrong. The concept of virtualizing storage into logical abstractions is a fine idea, in the best traditions of layered software. The problem is that the storage industry has a tendency to shout, "Virtualization!" as an answer to every problem of manageability, interoperability and automation. You, on the other hand, have specific problems that need specific solutions, not vigorous arm waving and promises of a better tomorrow.
And it should be obvious that comprehensive, enterprise-wide virtualization won't be here tomorrow or anytime soon. Virtualization may be the right way to go, but it's not a quick fix to anything. In fact, virtualization, at its present level of maturity, is likely to cause a whole raft of its own problems, such as breaking current processes like mirroring.
And there's another problem. Is virtualization a technology in search of a problem? Or is it venture capital in search of a technology in search of a problem? I don't mean to tar all startups or all virtualization vendors with the same brush, but I'm getting a certain déjà vu all over again about the buzz.
Which leads us back to real problems
Virtualization is the future, but it's not the present, at least not the magic wand variety. Much work needs to be done to understand the right architectures, the right interfaces and the right technologies to truly make virtualization practical, scalable and reliable. That can't be done in an ivory tower; it has to be hammered out by trying different approaches in different settings in real environments. That means both the entry and exit costs for any virtualization approach have to be low enough to encourage experimentation, not serve as a gimmick to lock users in.
Meanwhile, too many storage vendors have it backwards. They tend to advance the notion that virtualization will overcome the many limitations of point products and lack of interoperability. It seems to me that virtualization will always be a kludge until there's a solid foundation of workable management technologies and standards-based interoperability to enable virtualization. Those same technologies and standards would be of immediate benefit to storage managers, with or without a virtual environment.
We need to start evaluating storage technologies in both lights: Do they help us now, and will they lead to a better tomorrow?