Get ready for virtualization

Two major virtualization options have emerged: split path and combined path. Deciding which one is best for your storage environment is just part of the decision-making process.

The benefits of virtualization are plentiful—centralized storage management, increased storage utilization and lower storage costs to name just a few—but behind the thin veil of standards and visions of storage utopia looms vendor lock-in. With so much riding on the implementation of this technology, you need to assess which, if any, virtualization options are mature enough to deploy. And you'll need to prepare your organization for the blessings and curses this technology will be sure to bring.

Storage vendors offer network-based block virtualization in two configurations: combined and split path. Combined-path architectures handle the data and management functions in the same logical design and appear in the following implementations:

  • Appliances. Cloverleaf Communications Inc., DataCore Software Corp., FalconStor Software and IBM Corp. provide their virtualization software on off-the-shelf, Intel-based server hardware. Configured to reside in the data path between the server and storage, they're deployed as either clustered pairs or an N+1 configuration. All traffic for virtualized storage is routed through the appliance.


  • Fibre Channel (FC) director blade. FC directors from Cisco Systems Inc. and Maxxan Systems Inc. support blades that run virtualization software from IBM and FalconStor, respectively. This eliminates the need for separate appliances and centralizes switch and storage hardware.


  • Array-based. Hewlett-Packard (HP) Co., Hitachi Data Systems (HDS) Inc. and Sun Microsystems Inc. extend their respective StorageWorks XP12000, TagmaStore and StorEdge 9900 arrays' existing virtualization functionality to virtualize other vendors' arrays.
Split-path architectures separate the data and control functions so that a different appliance handles each function. These show up in the following ways:
  • Host-based. StoreAge Networking Technologies' Storage Virtualization Manager (SVM) uses host-based agents in conjunction with a management appliance that splits the data and control paths at the host level. Management of features like replication and snapshots occurs over IP, while write I/Os between the server and storage occur on FC unimpeded by the appliance.


  • Storage Services Platform (SSP). Brocade Communications Systems Inc., Cisco and Troika Networks Inc. offer fabric platforms that allow the data path portion of virtualization software like EMC Corp.'s Invista to be loaded onto them. These platforms deliver the high reliability and performance typically associated with FC switches by removing cache from the switch. They use Application-Specific Integrated Circuits (ASICs) to process FC traffic and execute virtualization software delivered by storage vendors.
Both approaches require access to the FC director as a blade or through FC connections, as well as the ability to access and manage the LUNs on the storage arrays they virtualize. Also, whether host-based or delivered as part of the SSP, the virtualization functions must be able to manipulate the LUNs presented by the storage arrays by either combining them to create a larger logical volume or carving out just a portion of a LUN and presenting it as a small volume to the host.
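
To make that LUN manipulation concrete, here's a minimal sketch in Python of the kind of extent map a virtualization layer maintains. The class and field names are hypothetical and don't describe any particular vendor's implementation; the point is only that a host-visible volume is a table of back-end extents.

```python
# Hypothetical sketch of block-virtualization mapping: back-end array LUNs
# are carved into extents, and a virtual volume concatenates those extents
# into what the host sees as a single device.
from dataclasses import dataclass

@dataclass
class Extent:
    array: str       # back-end array that owns the blocks
    lun: int         # LUN on that array
    start_lba: int   # first block of the extent on that LUN
    length: int      # number of blocks in the extent

class VirtualVolume:
    """A host-visible volume built by concatenating back-end extents."""

    def __init__(self, name, extents):
        self.name = name
        self.extents = extents

    def size(self):
        return sum(e.length for e in self.extents)

    def map_block(self, virtual_lba):
        """Translate a host LBA into (array, lun, physical LBA)."""
        offset = virtual_lba
        for e in self.extents:
            if offset < e.length:
                return e.array, e.lun, e.start_lba + offset
            offset -= e.length
        raise ValueError("LBA beyond end of volume")

# Carve out part of one LUN and combine it with a whole LUN from a
# second array into one larger logical volume for the host.
vol = VirtualVolume("host_vol_01", [
    Extent("array_A", lun=5, start_lba=0, length=1_000_000),  # partial LUN
    Extent("array_B", lun=2, start_lba=0, length=4_000_000),  # whole LUN
])
print(vol.size())                  # 5,000,000 blocks presented to the host
print(vol.map_block(1_500_000))    # ('array_B', 2, 500000)
```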

The devil, of course, is in the details: There are differences in how vendors configure their hardware, manage cache and handle I/O. For example, vendors with split-path approaches leave cache out of the data path. Split-path providers EMC and StoreAge find it simpler and safer to let cache reside closer to the application server or storage array and to remove the hassles of managing the data in the network cache. Virtualization applications that integrate with switch vendors' SSPs turn control of the I/O processing over to the ASICs on the FC ports on these directors. Vendors contend these switches can process I/Os faster and more effectively than the hardware provided by appliances, arrays or blades.

Despite the differences between combined- and split-path architectures, storage administrators will choose a virtualization product based less on its architecture and more on how comfortable they are with moving into a virtual environment and the vendor lock-in that will likely ensue. Ease of implementation, software code maturity and how the virtualization software is licensed will ultimately drive the wide-scale adoption of this technology.

Combined-path architectures
Vendors that provide combined-path architectures offer virtualization software that operates in appliances, FC director blades or array-based configurations. Combined-path configurations handle the processing of I/O and storage service functions such as LUN masking and LUN discovery in the same logical configuration.

Virtualization appliances are built on commodity Intel servers running a Windows or Linux operating system with software that permits either a clustered or N+1 configuration. Appliances that support grid architectures allow organizations to harness the consistently increasing speed of cache, CPUs and FC host bus adapters (HBAs) to create a high-availability, low-cost virtualized storage environment.

A key differentiator among appliance configurations is whether they're implemented in a clustered or N+1 configuration. Clustered configurations such as IBM's SAN Volume Controller (SVC) operate as one logical entity that keeps each node's cache contents in sync at all times, as opposed to an N+1 configuration, which will be briefly out of sync when a management change occurs. The upside of the N+1 approach is that each node has no interdependencies with other nodes. If an error in a node's core operating system or virtualization software causes a failure, that error won't replicate to the other nodes in the N+1 configuration, as it would in a clustered configuration.
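
A toy model makes the distinction easier to see. In the purely illustrative Python sketch below (not based on any vendor's code), the clustered design mirrors every write into each node's cache before acknowledging it, while the N+1 design lets each node cache only the I/O it handled and share nothing with its peers.

```python
# Toy model of the caching difference between clustered and N+1 appliance
# designs. Purely illustrative; real products handle coherency, failover
# and de-staging in firmware.

class ClusteredAppliances:
    """Writes are mirrored to every node's cache before acknowledgment,
    so caches never diverge, but all nodes run as one logical entity."""

    def __init__(self, nodes=2):
        self.caches = [{} for _ in range(nodes)]

    def write(self, lba, data):
        for cache in self.caches:       # mirror to all nodes, then ack
            cache[lba] = data

class NPlusOneAppliances:
    """Each node caches only the I/O it handled; nodes share nothing, so a
    fault in one node's software can't propagate to the others."""

    def __init__(self, nodes=3):
        self.caches = [{} for _ in range(nodes)]

    def write(self, lba, data):
        owner = lba % len(self.caches)  # whichever node owns this I/O
        self.caches[owner][lba] = data
```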

IBM's and FalconStor's virtualization software can operate on a blade that's inserted into an FC director. While the blade delivers the same software management functions, there are some differences between running virtualization on an FC director and running it on an appliance:

  • IBM SVC software can only be implemented on a Cisco MDS 9000 director, and FalconStor IPStor only on a Maxxan MXV500 director switch.


  • FC director blades eliminate the need for another appliance and centralize the physical management of the devices.


  • The software on the FC director blade can interact with software on its appliance counterpart on other SANs, regardless of the FC directors present in those SANs, to enable advanced storage management functions such as asynchronous replication.
The third combined-path architecture is the array-based option, which comes in two configurations. At the enterprise level, the TagmaStore Universal Storage Platform (USP) is available from HDS, and it's also rebranded by HP as the StorageWorks XP12000 and by Sun as the StorEdge 9900. For the midrange market, HDS offers its NSC55 platform, while Sun offers its StorEdge 6920 array. All of these arrays extend their virtualization capabilities to virtualize other arrays. Of these products, Sun's StorEdge 6920 is at a competitive disadvantage: it's late to market, largely unproven and runs microcode that's new to most environments. Users may find implementing and standardizing on the StorEdge 6920 too great a leap because it introduces a new array as well as new software into their environments.

Conversely, HDS took the proven line of microcode from its existing 9900 series of arrays and carried it over to its new TagmaStore and NSC55 platforms. HDS also mimicked the approach of other vendors by presenting a Windows image to other vendors' storage arrays to discover and virtualize LUNs on those arrays. By doing that, HDS circumvents possible interoperability issues between its array and those of other vendors because most storage vendors certify that their systems operate with Windows.

Using the same microcode on both platforms gives users the same storage management console for volume management across the enterprise. The platforms also support most major multipathing drivers, such as EMC's PowerPath, IBM's MultiPath IO and Symantec Corp.'s/Veritas' Dynamic Multi-Pathing (DMP), in addition to HDS' own Hitachi Dynamic Link Manager. The biggest advantage HDS' approach offers over any other virtualization option is the minimal amount of change users will encounter when implementing TagmaStore or the NSC55, assuming they're not using the advanced functions on other vendors' storage arrays.

Split-path architectures
Split-path virtualization software runs on hosts or SSPs, and splits the data and control paths with each path running on a different appliance. For host-based configurations, a host agent communicates with the management appliance over an IP connection and serves as a volume manager on the host. With SSP configurations, no host agent is required; the control appliance uses FC and the SSP's APIs to communicate with the SSP.

In host-based designs you must prepare the host, management appliance, FC directors and storage array to allow the host to access its storage. The agent on the host communicates with the management appliance over IP, and also inserts itself into the host's data path as a volume manager that accesses and manages the storage over the host's FC connections. A host reboot is usually required to complete the agent's configuration.

Completing access to the array LUNs requires the following steps: First, FC directors must be zoned to allow the FC HBAs on both the management appliance and host to access the array LUNs. Then the array LUNs must be masked to allow the management appliance and the host to access them. Next, the management appliance accesses the LUNs on the storage array and configures them so they can be accessed by the host. The management appliance then sends that volume configuration information over IP to the agent on the host. At that point, the host can access the LUNs assigned to it over the FC SAN.
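
Expressed as a rough Python sketch, the sequence looks like this. Every object and method name below is hypothetical rather than a real vendor API; what matters is the order of the steps and the fact that only the volume layout, never the application data, passes through the management appliance.

```python
# Hypothetical sketch of the host-based split-path provisioning sequence
# described above. None of these objects or methods are a real vendor API;
# only the order of operations matters here.

def provision_host_storage(fabric, array, appliance, host):
    # 1. Zone the fabric so the management appliance's and the host's HBAs
    #    can both reach the array's front-end ports.
    fabric.create_zone("appliance_to_array", members=[appliance.wwpn, array.wwpn])
    fabric.create_zone("host_to_array", members=[host.wwpn, array.wwpn])

    # 2. Mask the array LUNs to both the appliance and the host.
    luns = array.list_luns()
    array.mask_luns(luns, initiators=[appliance.wwpn, host.wwpn])

    # 3. The appliance discovers the LUNs and builds the virtual volume
    #    layout (concatenation, striping and so on).
    layout = appliance.build_volume_layout(luns)

    # 4. The layout travels to the host agent over IP only; no application
    #    data ever passes through the appliance.
    host.agent.push_layout(layout, transport="ip")

    # 5. From here on, the host's volume manager issues I/O directly to the
    #    array over its own FC connections.
    return host.agent.mount_volumes(layout)
```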

Successful implementation of this approach is predicated on the following assumptions: First, it's assumed that the same individual or group will be responsible for all aspects of the process—including the configuration on the hosts, FC directors and storage arrays. Second, the virtualization application's agents have to work on all of the different types and versions of operating systems that are—or will be—in the environment. Finally, it assumes that installing an agent on each host is OK and that the agent is permitted by network policies to communicate with the management appliance. Failure to satisfy any of these conditions precludes the virtualization technique from being used on a specific host.

SSP
EMC's Invista and Fujitsu Computer Systems Corp.'s Eternus VS900 each offer software that runs on a switch vendor's SSP. An SSP has three unique characteristics:

  1. Instead of requiring agents on the host or SSP, the management appliance communicates with the SSP over FC via the SSP's APIs and uploads the data path code over that FC connection. Exceptions to the data path code loaded onto the SSP, such as the introduction of new worldwide names (WWNs) or LUNs, are routed to the management appliance (see the sketch following this list).


  2. The data path code resides on ASICs on the SSP in the data path between the server and storage. This configuration allows switch vendors to develop and deploy SSP hardware that optimizes the breaking apart of FC packets and lets storage vendors focus on virtualization software.


  3. No cache resides on the SSP. Eliminating the cache in the SSP reduces the risk of an appliance outage causing data loss.
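
The first of those characteristics amounts to a fast-path/exception split, sketched below as a toy Python model. The names and the dictionary-based frame are hypothetical; real SSPs make this decision in ASIC hardware at wire speed.

```python
# Toy model of an SSP's fast-path/exception split. Frames for volumes the
# ASICs already know are remapped and forwarded in the data path; anything
# unexpected (a new WWN, an unknown LUN) is punted to the management
# appliance over the control path. Hypothetical names throughout.

KNOWN_VOLUMES = {"vol_01", "vol_02"}   # layouts already pushed to the ASICs

def handle_frame(frame):
    if frame["volume"] in KNOWN_VOLUMES:
        return forward_in_fast_path(frame)        # stays on the switch ASIC
    return send_to_management_appliance(frame)    # exception: control path

def forward_in_fast_path(frame):
    return ("fast_path", frame["volume"])

def send_to_management_appliance(frame):
    return ("exception", frame["volume"])

print(handle_frame({"volume": "vol_01"}))   # ('fast_path', 'vol_01')
print(handle_frame({"volume": "vol_99"}))   # ('exception', 'vol_99')
```
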
The new holy war
With all vendors now implementing some of their software in the data path between hosts and storage arrays, the debate over where data path virtualization will reside is over. Yet a new point of contention has emerged: Is it best to implement virtualization in a combined- or a split-path configuration? The decision will hinge primarily on how comfortable a user is with deploying cache in the data path and how that configuration will affect performance and reliability.

Vendors with combined-path products concede that cache introduces certain risks that split-path configurations avoid. About "99.99% of the time, deploying cache in the network works fine; but what about the other 0.01% of the time?" asks Rob Sadowski, EMC's Invista product manager. Dealing with cache consistency and coherency across multiple nodes isn't trivial, although some products do address these issues. Sadowski adds that "removing the cache out of the data path eliminates the need for EMC to solve that problem."

Combined-path providers like HDS and IBM downplay the risk of losing cached data. "Arrays already have multiple layers of cache—main cache, drawer cache, cabinet cache and disk cache," says Claus Mikkelsen, HDS' chief scientist. "Since HDS is extending the functionality of its array to manage other arrays, the only impact of introducing another layer of cache is improved performance."

Alan Petersburg, worldwide brand manager for IBM virtualization products, says "an IBM SVC presents no more risk than active/active controllers embedded inside storage arrays—if one controller goes offline, the other picks up where it left off."

Each of the three combined-path architectures takes steps to minimize the possibility of data loss. Appliance and FC director blade approaches have a small disk cache on the appliance or reserve some disk on an attached array to de-stage any data in its cache in the event the appliance needs to shut down. Array-based approaches simply take advantage of their own disks to de-stage any data in their caches in the event of a crisis. For any of the combined-path approaches, the main concern is that the data may not reside on the array where the rest of the host's data resides. Until the virtualization appliance or array is brought back online, the disk cache data won't be moved to the array for which it was intended.
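
As a rough illustration of what de-staging involves (hypothetical Python, not a description of any product's firmware), dirty cache entries are dumped to reserved local disk along with enough metadata to replay each write to the array it was destined for once the appliance comes back online:

```python
# Hypothetical sketch of cache de-staging: before shutdown, dirty entries
# are persisted to reserved local disk with enough metadata to replay each
# write to the array it was originally destined for.
import json

DESTAGE_PATH = "/var/destage/dirty_cache.json"   # hypothetical reserved disk area

def destage(dirty_cache, path=DESTAGE_PATH):
    """dirty_cache maps (array, lun, lba) -> data not yet written to the array."""
    records = [{"array": a, "lun": l, "lba": b, "data": d}
               for (a, l, b), d in dirty_cache.items()]
    with open(path, "w") as f:
        json.dump(records, f)

def replay(write_to_array, path=DESTAGE_PATH):
    """After the appliance is back online, push each de-staged write to its
    intended array; until then the data isn't where the host's other data is."""
    with open(path) as f:
        for rec in json.load(f):
            write_to_array(rec["array"], rec["lun"], rec["lba"], rec["data"])
```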

Lock-in and implementation issues
Picking a virtualization product is a big step. It becomes almost impossible to avoid vendor lock-in the deeper you get into a virtualization implementation. With the control of an enterprise's storage at stake, vendors are very willing to help users get past their initial implementation concerns. Some vendors provide an easy virtualization exit strategy. Known variously as "encapsulated," "migration in place," "proxy" or "pass-through," this feature allows an existing LUN to be virtualized while remaining the same size and retaining all of its data. EMC's Invista, for example, encapsulates an array LUN and presents it as an Invista LUN to the host.

Both Sadowski and Mikkelsen say this proxy capability appeals to users who want to gradually implement virtualization, but also want a quick exit if things go south. Even though the LUN is encapsulated by Invista or TagmaStore, neither product writes a signature or any metadata on the LUN when it's virtualized. However, TagmaStore's USP platform goes one step further and can simply re-present LUNs in their native format. A user could remove the USP platform from the server's data path and allow the host to directly access the LUNs on the storage array without having to migrate out of the virtualized environment.

Yet even when LUNs are in this most basic virtualized state, users still gain some of the benefits of virtualization. For instance, with EMC's Invista, they can non-disruptively migrate data to another device, such as moving data from a Symmetrix to a CLARiiON LUN. The new virtual Invista LUN has all of the characteristics a LUN presented by a storage array would have, such as the ability to set a SCSI-3 reservation bit that allows for clustering and to interoperate with the host's multipathing software. However, users who have heterogeneous server environments with a number of different multipathing software packages installed will want to consider storage vendors such as FalconStor and HDS, which support a wider variety of multipathing software packages than EMC currently does.

Of course, back-out strategies work best when you're not too far down the virtualization path. When you begin implementing advanced virtualization features, such as storage volume management and asynchronous replication, lock-in becomes all but inevitable. For instance, implementing a one-to-one mapping of a virtual LUN to an array LUN allows for an easy way in and out of the virtualization solution. However, most users will want to take advantage of volume management features that enable LUN groups or meta-LUNs that concatenate or stripe data across volumes on the back end and allow the storage administrator to present five 20GB LUNs as one logical 100GB volume to a host. Once this type of feature is used, users abandon nearly any hope of finding an easy way out of a specific vendor's virtualization solution.
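
The following hypothetical Python layouts illustrate the difference: a pass-through volume fronts a single array LUN of the same size and can simply be re-presented natively, while a meta-LUN that concatenates five 20GB back-end LUNs into one 100GB volume has no single native LUN to fall back to, so the only way out is data migration.

```python
# Why back-out gets hard: a pass-through volume maps one-to-one onto an
# existing array LUN, while a meta-LUN scatters the host's 100GB across
# five 20GB back-end LUNs. Hypothetical layouts only.

# (array, lun, size_gb) triples describing what backs each virtual volume
pass_through = [("array_A", 7, 100)]

meta_lun = [("array_A", 1, 20), ("array_A", 2, 20),
            ("array_B", 1, 20), ("array_B", 2, 20),
            ("array_C", 1, 20)]

def can_back_out_in_place(volume):
    """True only for a one-to-one mapping: a single backing LUN that the
    host could access directly if the virtualization layer were removed."""
    return len(volume) == 1

for name, vol in [("pass-through", pass_through), ("meta-LUN", meta_lun)]:
    size = sum(size_gb for _, _, size_gb in vol)
    print(f"{name}: {size}GB across {len(vol)} LUN(s), "
          f"in-place back-out: {can_back_out_in_place(vol)}")
```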

Running backups
The ramifications of backing up in a virtualized environment are also often overlooked. With all backups now running through a single interface, this is a likely spot for bottlenecks to appear. If users decide to use snapshots and asynchronous replication in lieu of backups or to complement them, this will require a major restructuring of the way data is protected in the environment.

With the benefits of virtualization well known and the pain of managing SANs growing, this generation of network-based virtualization appliances delivers what the first generation of products didn't—interoperability, integration and the backing of major storage vendors. For enterprise shops, the decision about which virtualization appliance to deploy now depends less on the number of technical features it initially offers than on how stable it is, how easy it is to implement and how willing the vendor is to negotiate favorable terms. Yet with vendor lock-in likely, and the long-term control and management of enterprise data at stake, organizations should bring virtualization into their environments very cautiously.

This was first published in December 2005
