Virtual reality: The inevitability of storage virtualization

Storage virtualization has been a controversial subject for years. But now that we know the technology actually works, what's keeping it from widespread adoption?

Virtualization is a tried-and-true technology, so what's holding it back?

"THE POWER OF TODAY'S information technology is not being used as it should be in most enterprises. IT is bogged down in problems ... and is so difficult to change that it often inhibits the implementation of new and important processes."

That's an unattributed quote from "Solving the information management puzzle: A life cycle approach" by Thomas H. Davenport and Don Cohen of Babson College in Babson Park, MA. But if it sounds like your environment, don't feel too bad. The quote is more than 20 years old. This raises the question: Are the storage initiatives we undertake a boon or a hindrance?

People who have bought into the tiered storage concept complain about the difficulty of managing disparate, multitiered devices without any consistency or uniformity. Storage provisioning requires multiple sets of tools and processes, and data migration is a continual challenge. These activities depend on manual processes that are prone to error and delay, making the prospect of an automated storage infrastructure appear unattainable. Increased complexity, along with the potential for disruption and downtime, can cause one to reconsider the whole notion of tiered storage. Those same challenges have led some organizations to take a serious look at virtualization technology. A recent GlassHouse survey on storage budgeting found that 27% of respondents listed storage virtualization as a main area of focus for 2006, outpacing both compliance and security.

Fear, uncertainty and doubt
Storage virtualization has been a controversial subject for years, yet virtualization itself is one of IT's oldest techniques. In the early days of computing, virtual memory simplified program coding by creating the illusion that an application had a nearly unlimited memory-address space. Today, logical volume managers (LVMs) on servers and logical units within storage arrays abstract the logical presentation of storage from physical disks, and both are essential elements of enterprise storage management. Even within a physical disk, what's presented to the outside world for reading and writing is a logical block address, not a physical cylinder/track/sector number.
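
To make the abstraction concrete, here's a minimal sketch in Python of the mapping idea that underlies all of these layers. The class and method names are hypothetical, invented purely for illustration; a real LVM or array controller adds caching, striping and redundancy on top of this.

```python
class FakeDisk:
    """Stand-in for a physical disk exposing block reads and writes."""

    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks

    def read_block(self, pba):
        return self.blocks[pba]

    def write_block(self, pba, data):
        self.blocks[pba] = data


class VirtualVolume:
    """Hosts see only logical block addresses; a table hides the physical layout."""

    def __init__(self):
        self.mapping = {}  # logical block address -> (device, physical block)

    def read(self, lba):
        device, pba = self.mapping[lba]  # resolve logical to physical
        return device.read_block(pba)

    def write(self, lba, data):
        device, pba = self.mapping[lba]
        device.write_block(pba, data)

    def remap(self, lba, device, pba):
        # Data can move; the host keeps using the same logical address.
        self.mapping[lba] = (device, pba)
```

Every service discussed below, from volume management to thin provisioning, is at bottom a policy for maintaining a table like this one.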

So, if virtualization per se is a tried-and-true IT technique, what's holding back widespread adoption of this technology for storage networks? It may be the fear, uncertainty and doubt related to any new technology. Many early adopters soured on virtualization because some first-generation products were ahead of their time, while others promised more than they could deliver. Another inhibitor may be the continuing lack of consistent storage management standards. The initial lack of virtualization support from major storage vendors also hampered adoption. While that's no longer the case, technical debates rage on concerning the best approach, such as split path vs. combined path, and intelligent switch vs. appliance vs. array-centric (see "Get ready for virtualization," Storage, December 2005).

When we consider the challenges and costs associated with managing large, complex storage infrastructures and investigate the options for simplification, we arrive at virtualization as a necessary enabling technology.

Virtualization services
Much of the functionality promised by virtualization isn't new. What is new is that this functionality is now offered at the network layer. Functions like volume management and replication work well at the host and storage layers. By pushing these out to the network, you can avoid the constraints of a single platform/vendor and benefit from consistent manageability across a range of heterogeneous platforms. Let's consider some of the services provided by virtualization and their applicability to the storage network:

  • NETWORK-BASED VOLUME MANAGEMENT. LVMs have never been fully appreciated in Windows environments, but anyone who has dealt with large Unix servers is likely a big fan. With a host-based volume manager, however, efforts must be duplicated for each server; and, depending on the specific environment, multiple products with different management interfaces may be required. Providing this functionality at the network layer brings consistency and simplifies deployment and ongoing management. It may also save on licensing costs when volume management isn't bundled with the OS. Volume management can be viewed as the foundation app for network-based virtualization and, as such, is provided by EMC, Hitachi Data Systems (HDS), IBM and others in their virtualization products.


  • CLONING AND SNAPSHOT. There's considerable benefit to creating point-in-time copies at the network layer. Split mirrors or snapshots for backup, development or testing can be created on low-cost storage volumes. Managing this capability on a SAN-wide basis, and allocating and reusing storage from a common pool, leads to simplified management and consistent application of policies. Enabled largely by LVM capability, this functionality is available in some form with all virtualization products (a copy-on-write sketch appears after this list).


  • REPLICATION. At first glance, replication at the network layer should be a no-brainer, at least conceptually. Replicating heterogeneously has a number of benefits, not the least of which is potential hardware cost savings. Depending on the vendor's virtualization approach (and its willingness to potentially displace other products), however, replication may not be available. Replication, in general, is a touchy area, and there are real tradeoffs among the available options.


  • DATA MIGRATION. One of the great promises of virtualization is increased flexibility. Network virtualization can speed provisioning and simplify data migration. We've seen cases where new storage array deployments were delayed by up to a year because of the challenges of migrating from old servers and arrays. Data migration alone may justify virtualization, particularly in environments where large-scale retiering of apps is underway. The LVM capabilities within most virtualization products provide the means to transparently migrate data from one storage device to another without reallocating logical unit numbers or reconfiguring servers (see the migration sketch after this list).


  • FILE-SYSTEM AGGREGATION. Virtualization doesn't apply only to block-level data; it also has beneficial applications for file-level access in the NAS realm. The presentation of a consistent global file system can simplify provisioning and management of environments with many NAS devices and shared CIFS or NFS file shares (a global-namespace sketch follows this list). This functionality is available via software or through appliances from Acopia, EMC (Rainfinity), NeoPath (File Director), Network Appliance (Virtual File Manager), NuView (StorageX) and others.


  • THIN PROVISIONING. Capacity planning and improving utilization are significant challenges in storage management. The ability to "overprovision" or "thinly provision" storage is a virtualization technique that holds enormous potential. Allocation requests are padded in nearly every environment; eliminating that padding could make thin provisioning the "killer app" for virtualization (a sketch follows this list). 3PAR is often associated with this capability, but it's becoming available at the network virtualization layer. Specifically, NetApp provides overprovisioning with the FlexVol feature on its dedicated storage platforms and its V-Series NAS gateways. The V-Series devices support storage from Hewlett-Packard, HDS and IBM.

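To make the snapshot service concrete, here's a minimal copy-on-write sketch in Python. The class and method names are hypothetical, invented for illustration; real products preserve changed blocks in dedicated snapshot pools and handle locking and concurrency, all of which is omitted here.

```python
class CowVolume:
    """A volume with one copy-on-write snapshot, kept deliberately simple."""

    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks
        self.preserved = None  # block -> pre-snapshot contents, filled lazily

    def take_snapshot(self):
        self.preserved = {}  # instant and nearly free: nothing has diverged yet

    def write(self, lba, data):
        if self.preserved is not None and lba not in self.preserved:
            self.preserved[lba] = self.blocks[lba]  # copy the old block first
        self.blocks[lba] = data                     # then overwrite in place

    def read_snapshot(self, lba):
        # Snapshot view: the preserved copy if the block changed, else live data.
        return self.preserved.get(lba, self.blocks[lba])


vol = CowVolume(8)
vol.write(0, b"v1")
vol.take_snapshot()
vol.write(0, b"v2")
assert vol.read_snapshot(0) == b"v1"  # the snapshot still sees the old data
```
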
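Transparent migration builds on the same mapping idea shown in the earlier VirtualVolume sketch (again hypothetical): copy each block, then flip its mapping entry, while the host keeps addressing the same logical volume. Fencing of in-flight writes per extent, which real products must get right, is left out.

```python
def migrate(volume, old_dev, new_dev, free_pbas):
    """Drain every block of `volume` off old_dev and onto new_dev.

    Host-visible logical addresses never change, so no LUNs are
    reallocated and no servers are reconfigured. Illustrative only.
    """
    for lba, (dev, pba) in list(volume.mapping.items()):
        if dev is old_dev:
            new_pba = free_pbas.pop()                          # claim a free block
            new_dev.write_block(new_pba, dev.read_block(pba))  # copy the data
            volume.remap(lba, new_dev, new_pba)                # flip the pointer
```
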
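File-system aggregation rests on the same indirection one level up: a namespace table routes each subtree of a single client-visible tree to whichever back-end NAS share actually serves it. The table and host names below are hypothetical, purely to show the longest-prefix lookup.

```python
# Hypothetical global namespace: one client-visible tree, many back ends.
NAMESPACE = {
    "/corp/finance": "nas1.example.com:/vol/finance",
    "/corp/eng": "nas2.example.com:/vol/eng",
    "/corp": "nas3.example.com:/vol/general",
}

def resolve(path):
    """Longest-prefix match decides which back-end share owns a path."""
    for prefix, backend in sorted(NAMESPACE.items(),
                                  key=lambda kv: len(kv[0]), reverse=True):
        if path.startswith(prefix):
            return backend + path[len(prefix):]
    raise LookupError(path)

print(resolve("/corp/eng/build.log"))  # nas2.example.com:/vol/eng/build.log
```

Moving a subtree to a new device then means copying the files and updating one table entry; clients never see the change.
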
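Thin provisioning, finally, is allocate-on-write: the volume reports its full promised capacity but draws blocks from a shared pool only as each block is first written. A minimal sketch with hypothetical names, assuming a backing device that exposes a write_block method as in the earlier sketch:

```python
class ThinVolume:
    """Promises virtual_blocks of capacity; consumes pool blocks on first write."""

    def __init__(self, virtual_blocks, pool, disk):
        self.virtual_blocks = virtual_blocks
        self.pool = pool   # shared free list, drawn on by every thin volume
        self.disk = disk   # backing device exposing write_block(pba, data)
        self.mapping = {}  # lba -> physical block, grown only on demand

    def write(self, lba, data):
        if lba not in self.mapping:
            self.mapping[lba] = self.pool.pop()  # first touch: allocate
        self.disk.write_block(self.mapping[lba], data)

    def utilization(self):
        # Physical blocks actually consumed vs. capacity promised to the host.
        return len(self.mapping) / self.virtual_blocks
```

The catch, of course, is that the shared pool can run dry if every volume calls in its promise at once, which is why thin provisioning is usually paired with careful monitoring of pool utilization.
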
Business drive for virtualization
Virtualization will eventually become widespread because the business case is so compelling. Beyond the many administrative benefits, virtualization provides two huge cost-saving functions to the business: it furthers the commoditization of infrastructure and improves the liquidity of capital assets. For servers, much of the attraction of products like VMware is the masking of physical platforms, making it easy to replace server A with server B. Likewise, storage virtualization will lead to further commoditization of storage platforms, particularly in the midrange. As for asset liquidity, whether dealing with lease rollovers, technology refreshes or other business events, virtualization makes it significantly faster and easier to move hardware in and out of the environment.

None of these virtualization techniques is without its challenges. Reducing complexity in one area often increases complexity in another. Virtualization can help to automate processes, but those processes must be fundamentally sound and well thought out to begin with. Simplifying storage provisioning and masking the physical infrastructure can create new management issues. However, the rising costs associated with storage management demand automation efficiencies. These efficiencies can only be achieved through some type of abstraction of the logical from the physical, which means virtualization.

This was first published in March 2006
