This article can also be found in the Premium Editorial Download "Storage magazine: Five ready-for-prime-time storage technologies."
With all vendors now implementing some of their software in the data path between hosts and storage arrays, the debate over where data path virtualization will reside is over. Yet a new point of contention has emerged: Is it best to implement virtualization in a combined- or a split-path configuration? The decision will hinge primarily on how comfortable a user is with deploying cache in the data path and how that configuration will affect performance and reliability.
Click here for a comprehensive list of virtualization configurations (PDF).
Combined-path providers like HDS and IBM downplay the risk of losing cached data. "Arrays already have multiple layers of cache—main cache, drawer cache, cabinet cache and disk cache," says Claus Mikkelsen, HDS' chief scientist. "Since HDS is extending the functionality of its array to manage other arrays, the only impact of introducing another layer of cache is improved performance."
Alan Petersburg, worldwide brand manager for IBM virtualization products, says, "An IBM SVC presents no more risk than active/active controllers embedded inside storage arrays—if one controller goes offline, the other picks up where it left off."
Each of the three combined-path architectures takes steps to minimize the possibility of data loss. Appliance and FC director blade approaches keep a small disk cache on the appliance, or reserve some disk on an attached array, so cached data can be de-staged if the appliance must shut down. Array-based approaches simply use their own disks to de-stage cached data in a crisis. For any of the combined-path approaches, the main concern is that the de-staged data may not reside on the array where the rest of the host's data resides. Until the virtualization appliance or array is brought back online, the disk cache data won't be moved to the array for which it was intended.
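The de-staging behavior described above can be sketched in a few lines. This is an illustrative model only, not any vendor's implementation; the class and method names are invented. It shows why de-staged data remains stranded on the appliance's local reserve until the appliance comes back online and can replay it to the intended array.

```python
class BackendArray:
    """Hypothetical storage array: a dict of block -> data."""
    def __init__(self):
        self.blocks = {}
        self.online = True

    def write(self, block, data):
        if not self.online:
            raise IOError("array offline")
        self.blocks[block] = data


class VirtualizationAppliance:
    """Hypothetical combined-path appliance with a write-back cache."""
    def __init__(self, array):
        self.array = array
        self.cache = {}          # dirty write-back cache: block -> data
        self.destage_area = {}   # local disk reserve used in emergencies

    def write(self, block, data):
        self.cache[block] = data  # host write acknowledged from cache

    def flush(self):
        """Normal path: push dirty blocks out to the owning array."""
        for block, data in list(self.cache.items()):
            self.array.write(block, data)
            del self.cache[block]

    def shutdown(self):
        """If the array is unreachable, de-stage cache to local disk."""
        try:
            self.flush()
        except IOError:
            self.destage_area.update(self.cache)
            self.cache.clear()

    def restart(self):
        """Once back online, replay de-staged data to the array."""
        for block, data in list(self.destage_area.items()):
            self.array.write(block, data)
            del self.destage_area[block]
```

In this sketch, a shutdown while the array is offline leaves the host's blocks in `destage_area` rather than on the array, which is exactly the exposure the combined-path vendors must manage.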
Lock-in and implementation issues
Picking a virtualization product is a big step. It becomes almost impossible to avoid vendor lock-in the deeper you get into a virtualization implementation. With the control of an enterprise's storage at stake, vendors are very willing to help users get past their initial implementation concerns. Some vendors provide an easy virtualization exit strategy. Known variously as "encapsulated," "migration in place," "proxy" or "pass-through," this feature allows an existing LUN to be virtualized while remaining the same size and retaining all of its data. EMC's Invista, for example, encapsulates an array LUN and presents it as an Invista LUN to the host.
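The pass-through pattern amounts to a thin proxy: the virtual LUN keeps the back-end LUN's size and data and merely forwards I/O under a new identity. A minimal sketch, with invented class names (this is not EMC's Invista API), might look like this:

```python
class ArrayLUN:
    """Hypothetical existing array LUN with data already on it."""
    def __init__(self, size_blocks):
        self.size_blocks = size_blocks
        self.blocks = {}

    def read(self, block):
        return self.blocks.get(block, b"\x00")

    def write(self, block, data):
        if not 0 <= block < self.size_blocks:
            raise ValueError("block out of range")
        self.blocks[block] = data


class EncapsulatedLUN:
    """Virtual LUN presented to the host: same size, same data."""
    def __init__(self, backend, name):
        self.backend = backend
        self.name = name                        # new identity for the host
        self.size_blocks = backend.size_blocks  # size is preserved

    def read(self, block):
        return self.backend.read(block)         # pass straight through

    def write(self, block, data):
        self.backend.write(block, data)         # no copy, no migration
```

Because no data is copied or relocated, the proxy can later be removed and the original LUN presented directly again, which is what makes encapsulation a workable exit strategy.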
This was first published in December 2005