Storage virtualization is still in its early stages. Storage administrators love the idea of seeing all the storage in the enterprise as a unified whole regardless of location, device characteristics or any other consideration. But the reality of storage virtualization falls short of the ideal. As a result, storage administrators who are designing virtual storage for their enterprises need to consider tradeoffs carefully.
According to the Aberdeen Group, a consulting firm based in Boston, such a storage system should meet three criteria: First, it should be robust enough to be reliable and scalable in all circumstances. Second, it should be flexible enough to be vendor- and protocol-neutral. Third, it should be manageable in a cost-effective manner.
In practice, currently available storage virtualization systems don't completely meet all three conditions, so storage administrators have to decide how to balance the criteria against cost and availability when designing a system. For example, a proprietary SAN built entirely from a single vendor's equipment is likely to be reliable and manageable, but it locks the user in and may not be scalable enough to support the enterprise's future growth.
Even so, Aberdeen Group says, peace of mind is likely to override other considerations. As an Aberdeen white paper puts it: "No IT manager wants to answer questions about why a particular choice was made after a mission or business critical application has gone down."
The Aberdeen white paper "The Business Case for a Storage Virtualization Engine" is available on Vicom's Web site at: www.vicom.com/pdfs/aberdeen.doc.
Rick Cook has been writing about mass storage since the days when the term meant an 80 KB floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.