Two major virtualization options have emerged: split path and combined path. Deciding which one is best for your storage environment is just part of the decision-making process.
The benefits of virtualization are plentiful—centralized storage management, increased storage utilization and lower storage costs to name just a few—but behind the thin veil of standards and visions of storage utopia looms vendor lock-in. With so much riding on the implementation of this technology, you need to assess which, if any, virtualization options are mature enough to deploy. And you'll need to prepare your organization for the blessings and curses this technology will be sure to bring.
Storage vendors offer network-based block virtualization in two configurations: combined path and split path. Combined-path architectures handle the data and management functions in the same logical design; split-path architectures separate them. Both appear in the following implementations:
- Appliances. Cloverleaf Communications Inc., DataCore Software Corp., FalconStor Software and IBM Corp. provide their virtualization software on off-the-shelf, Intel-based server hardware. Configured to reside in the data path between the server and storage, they're deployed as either clustered pairs or an N+1 configuration. All traffic for virtualized storage is routed through the appliance.
- Fibre Channel (FC) director blade. FC directors from Cisco Systems Inc. and Maxxan Systems Inc. support blades that run virtualization software from IBM and FalconStor, respectively. This eliminates the need for separate appliances and centralizes switch and storage hardware.
- Array-based. Hewlett-Packard (HP) Co., Hitachi Data Systems (HDS) Inc. and Sun Microsystems Inc. extend their respective StorageWorks XP12000, TagmaStore and StorEdge 9900 arrays' existing virtualization functionality to virtualize other vendors' arrays.
- Host-based. StoreAge Networking Technologies' Storage Virtualization Manager (SVM) uses host-based agents in conjunction with a management appliance that splits the data and control paths at the host level. Management of features like replication and snapshots occur over IP, while write I/Os between the server and storage occur on FC unimpeded by the appliance.
- Storage Services Platform (SSP). Brocade Communications Systems Inc., Cisco and Troika Networks Inc. offer fabric platforms that allow the data path portion of virtualization software like EMC Corp.'s Invista to be loaded onto them. These platforms deliver the high reliability and performance typically associated with FC switches by removing cache from the switch. They use Application-Specific Integrated Circuits (ASICs) to process FC traffic and execute virtualization software delivered by storage vendors.
|Sidebar: Network-based virtualization options (PDF)|
The devil, of course, is in the details: There are differences in how vendors configure their hardware, manage cache and handle I/O. For example, vendors with split-path approaches leave cache out of the data path. Split-path providers EMC and StoreAge find it simpler and safer to let cache reside closer to the application server or storage array and to remove the hassles of managing the data in the network cache. Virtualization applications that integrate with switch vendors' SSPs turn control of the I/O processing over to the ASICs on the FC ports on these directors. Vendors contend these switches can process I/Os faster and more effectively than the hardware provided by appliances, arrays or blades.
Despite the differences between combined- and split-path architectures, storage administrators will choose a virtualization product based less on its architecture and more on how comfortable they are with moving into a virtual environment and the vendor lock-in that will likely ensue. Ease of implementation, software code maturity and how the virtualization software is licensed will ultimately drive the wide-scale adoption of this technology.
Vendors that provide combined-path architectures offer virtualization software that operates in appliances, FC director blades or array-based configurations. Combined-path configurations handle the processing of I/O and storage service functions such as LUN masking and LUN discovery in the same logical configuration.
|Sidebar: Pros and cons of virtualization options (PDF)|
A key differentiator among appliance configurations is whether they're implemented in a clustered or N+1 configuration. Clustered configurations such as IBM's SAN Volume Controller (SVC) operate as one logical entity that keeps the content of the cache of each node in the cluster in sync at all times, as opposed to an N+1 configuration that will be briefly out of sync when a management change occurs. The upside of the N+1 approach is that each node has no interdependencies with other nodes. If an error occurs in the core operating system or virtualization software of a node that causes a failure, that error won't replicate to other nodes in the N+1 configuration as would occur in a clustered configuration.
IBM's and FalconStor's virtualization software can operate on a blade that's inserted into an FC director. While delivering the same software management functions, virtualization running on an FC director differs from an appliance in a few ways:
- IBM SVC and FalconStor IPStor software can only be implemented on a Cisco MDS 9000 or Maxxan MXV500 director switch.
- FC director blades eliminate the need for another appliance and centralize the physical management of the devices.
- The software on the FC director blade can interact with software on its appliance counterpart on other SANs, regardless of the FC directors present in those SANs, to enable advanced storage management functions such as asynchronous replication.
In the array-based camp, HDS took its existing, proven line of microcode from its 9900 series of arrays and carried it over to its new TagmaStore and NSC55 platforms. HDS also mimicked the approach of other vendors by presenting a Windows image to other vendors' storage arrays to discover and virtualize LUNs on those arrays. By doing so, HDS circumvents possible interoperability issues between its array and those of other vendors because most storage vendors certify that their systems operate with Windows.
Using the same microcode on both platforms allows users to have the same storage management console for volume management across the enterprise. It also supports most major multipathing drivers such as EMC's PowerPath, IBM's MultiPath IO and Symantec Corp.'s/Veritas' Dynamic Multi-Pathing (DMP), in addition to HDS' own Hitachi Dynamic Link Manager. The biggest advantage the HDS approach offers over any other virtualization option is the minimal amount of change users will encounter when implementing TagmaStore or NSC55, assuming they're not using the advanced functions on other vendors' storage arrays.
Split-path virtualization software runs on hosts or SSPs, with the data path and control path handled by separate devices. For host-based configurations, a host agent communicates with the management appliance over an IP connection and serves as a volume manager on the host. With SSP configurations, no agent is required on the SSP, and the control appliance uses FC and the SSP's APIs to communicate with the SSP.
In host-based designs you must prepare the host, management appliance, FC directors and storage array to allow the host to access its storage. The agent on the host communicates with the management appliance over IP, and also inserts itself into the host's data path as a volume manager that accesses and manages the storage over the host's FC connections. A host reboot is usually required to complete the agent's configuration.
Completing access to the array LUNs requires the following steps: First, FC directors must be zoned to allow the FC HBAs on both the management appliance and host to access the array LUNs. Then the array LUNs must be masked to allow the management appliance and the host to access them. Next, the management appliance accesses the LUNs on the storage array and configures them so they can be accessed by the host. The management appliance then sends that volume configuration information over IP to the agent on the host. At that point, the host can access the LUNs assigned to it over the FC SAN.
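The four-step sequence above can be sketched in pseudocode. This is an illustrative model only; the function and object names are hypothetical, not any vendor's actual API, and the fabric and masking state are reduced to simple dictionaries:

```python
def zone_fabric(zones, zone_name, wwns):
    """Step 1: zone the FC directors so the appliance and host HBAs can reach the array."""
    zones[zone_name] = set(wwns)
    return zones

def mask_luns(masking, lun_ids, allowed_wwns):
    """Step 2: mask the array LUNs to both the management appliance and the host."""
    for lun in lun_ids:
        masking.setdefault(lun, set()).update(allowed_wwns)
    return masking

def build_volume_config(lun_ids):
    """Step 3: the management appliance claims the LUNs and builds a volume configuration."""
    return {"volume": "vvol0", "backing_luns": sorted(lun_ids)}

def push_config_to_agent(agent_state, volume_config):
    """Step 4: the volume config travels over IP to the host agent, after which the
    host can access the LUNs assigned to it over the FC SAN."""
    agent_state.setdefault("volumes", []).append(volume_config)
    return agent_state

# Walk the steps for one host and two array LUNs (WWN names are placeholders).
zones = zone_fabric({}, "vzone1", ["appliance_wwn", "host_wwn", "array_wwn"])
masking = mask_luns({}, ["lun7", "lun8"], ["appliance_wwn", "host_wwn"])
agent = push_config_to_agent({}, build_volume_config(["lun7", "lun8"]))
print(agent["volumes"][0]["backing_luns"])  # ['lun7', 'lun8']
```

The point of the sketch is the ordering: zoning and masking must be in place before the appliance can discover the LUNs, and the host sees nothing until the agent receives the configuration over IP.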
Successful implementation of this approach is predicated on the following assumptions: First, it's assumed that the same individual or group will be responsible for all aspects of the process—including the configuration on the hosts, FC directors and storage arrays. Second, the virtualization application's agents have to work on all of the different types and versions of operating systems that are—or will be—in the environment. Finally, it assumes that installing an agent on each host is OK and that the agent is permitted by network policies to communicate with the management appliance. Failure to satisfy any of these conditions precludes the virtualization technique from being used on a specific host.
EMC's Invista and Fujitsu Computer Systems Corp.'s Eternus VS900 each offer software that runs on a switch vendor's SSP. An SSP has three unique characteristics:
- Instead of requiring agents on the host or SSP, the management appliance uses FC to communicate with the SSP via APIs and uploads the data path code to the SSP via the FC connection. Exceptions to the data path code loaded onto the SSP, such as the introduction of new worldwide names (WWNs) or LUNs, are routed to the management appliance.
- The data path code resides on ASICs on the SSP in the data path between the server and storage. This configuration allows switch vendors to develop and deploy SSP hardware that optimizes the breaking apart of FC packets and lets storage vendors focus on virtualization software.
- No cache resides on the SSP. Eliminating the cache in the SSP reduces the risk of an appliance outage causing data loss.
With all vendors now implementing some of their software in the data path between hosts and storage arrays, the debate over where data path virtualization will reside is over. Yet a new point of contention has emerged: Is it best to implement virtualization in a combined- or a split-path configuration? The decision will hinge primarily on how comfortable a user is with deploying cache in the data path and how that configuration will affect performance and reliability.
|Sidebar: Virtualization configurations (PDF)|
Combined-path providers like HDS and IBM downplay the risk of losing cached data. "Arrays already have multiple layers of cache—main cache, drawer cache, cabinet cache and disk cache," says Claus Mikkelsen, HDS' chief scientist. "Since HDS is extending the functionality of its array to manage other arrays, the only impact of introducing another layer of cache is improved performance."
Alan Petersburg, worldwide brand manager for IBM virtualization products, says "an IBM SVC presents no more risk than active/active controllers embedded inside storage arrays—if one controller goes offline, the other picks up where it left off."
Each of the three combined-path architectures takes steps to minimize the possibility of data loss. Appliance and FC director blade approaches have a small disk cache on the appliance or reserve some disk on an attached array to de-stage any data in its cache in the event the appliance needs to shut down. Array-based approaches simply take advantage of their own disks to de-stage any data in their caches in the event of a crisis. For any of the combined-path approaches, the main concern is that the data may not reside on the array where the rest of the host's data resides. Until the virtualization appliance or array is brought back online, the disk cache data won't be moved to the array for which it was intended.
Lock-in and implementation issues
Picking a virtualization product is a big step. It becomes almost impossible to avoid vendor lock-in the deeper you get into a virtualization implementation. With the control of an enterprise's storage at stake, vendors are very willing to help users get past their initial implementation concerns. Some vendors provide an easy virtualization exit strategy. Known variously as "encapsulated," "migration in place," "proxy" or "pass-through," this feature allows an existing LUN to be virtualized while remaining the same size and retaining all of its data. EMC's Invista, for example, encapsulates an array LUN and presents it as an Invista LUN to the host.
Both Sadowski and Mikkelsen say this proxy capability appeals to users who want to gradually implement virtualization, but also want a quick exit if things go south. Even though the LUN is encapsulated by Invista or TagmaStore, neither product writes a signature or any metadata on the LUN once it's virtualized. However, TagmaStore's USP platform goes one step further and can simply re-present LUNs in their native format. A user could remove the USP platform from the server's data path and allow the host to directly access the LUNs on the storage array without having to migrate out of the virtualized environment.
Yet even when LUNs are in this most basic virtualized state, users still gain some of the benefits of virtualization. For instance, with EMC's Invista, they can non-disruptively migrate data to another device, such as moving data from a Symmetrix to a Clariion LUN. The new virtual Invista LUN has all of the characteristics a LUN presented by a storage array would have, such as the ability to set a SCSI-3 reservation bit that allows for clustering and to interoperate with their multipathing software. However, users who have heterogeneous server environments with a number of different multipathing software packages installed will want to consider storage vendors such as FalconStor and HDS, which support a wider variety of vendor multipathing software packages than EMC currently does.
Of course, back-out strategies work best when you're not too far down the virtualization path. Once you begin implementing advanced virtualization features, such as storage volume management and asynchronous replication, lock-in becomes all but inevitable. For instance, a one-to-one mapping of a virtual LUN to an array LUN allows an easy way in and out of a virtualization solution. However, most users will want to take advantage of volume management features that enable LUN groups or meta-LUNs, which concatenate or stripe data across volumes on the back end and allow the storage administrator to present five 20GB LUNs as one logical 100GB volume to a host. Once this type of feature is used, users abandon nearly any hope of an easy exit from a specific vendor's virtualization solution.
The ramifications of backing up in a virtualized environment are also often overlooked. With all backups now running through a single interface, that interface is a likely spot for bottlenecks to appear. And if users decide to use snapshots and asynchronous replication in lieu of, or to complement, backups, the way data is protected in the environment will require a major restructuring.
With the benefits of virtualization well known and the pain of managing SANs growing, this generation of network-based virtualization appliances delivers what the first generation of products didn't—interoperability, integration and the backing of major storage vendors. For enterprise shops, the decision about which virtualization appliance to deploy now depends less on the number of technical features it initially offers than on how stable it is, how easy it is to implement and how willing the vendor is to negotiate favorable terms. Yet with vendor lock-in likely, and the long-term control and management of enterprise data at stake, organizations should bring virtualization into their environments very cautiously.