| There are numerous ways to embed virtualization into your storage infrastructure. Here are the pros and cons of the various approaches.
Where to embed virtualization and when to use it depends on the size of the infrastructure, the type of apps running in it, and the levels of control and visibility required by admins.
Today, storage managers need to balance the benefits of virtualization against the complexity it brings. We'll look at seven of the largest storage virtualization providers in North America. Each vendor offers storage virtualization at one or more layers in the storage infrastructure, as well as complementary storage resource management (SRM) software that provides visibility, reporting and management across the multiple layers of the virtual storage infrastructure (see "Storage virtualization and SRM software" (PDF) and "Virtualization guidelines," next page).
There's no right answer regarding the best place to implement virtualization, or at what layer and how much to virtualize. Basically, users have a choice of three approaches to address their combined storage virtualization and SRM software needs:
| Single-vendor approach
The single-vendor approach assumes that if you buy all of your storage gear from one vendor, virtualization will simply work. EMC Corp. and IBM Corp. actively encourage this dependency when customers buy EMC Symmetrix and IBM System Storage DS8000 storage systems and then install EMC PowerPath and IBM Subsystem Device Driver (SDD) path management software on their servers.
This path management software complements host-based virtualization by identifying the specific characteristics of the LUNs presented by EMC's and IBM's storage systems, such as which storage controller is the primary controller in a dual-active configuration, and sending IO traffic to that controller. The path management software then works with native operating system volume managers to combine storage system LUNs that are presented down two or more Fibre Channel (FC) paths so they appear to the host volume manager as the same LUN. It also provides load balancing and path failover down the different FC paths to the LUN.
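The mechanics described above can be sketched in a few lines: several FC paths lead to the same LUN, the multipathing layer presents them as one logical device, spreads IO across the healthy paths and fails over when one dies. This is a minimal illustrative model, not PowerPath or SDD code; the class and attribute names are hypothetical.

```python
# Illustrative model of multipath IO: two FC paths to one LUN are coalesced
# into a single logical device, with round-robin load balancing and failover.
# FcPath and MultipathLun are hypothetical names, not any vendor's API.

class FcPath:
    def __init__(self, name):
        self.name = name
        self.healthy = True

class MultipathLun:
    """One logical LUN, as the host volume manager sees it, backed by many paths."""
    def __init__(self, wwid, paths):
        self.wwid = wwid      # same LUN identity is seen down every path
        self.paths = paths
        self._next = 0

    def pick_path(self):
        # Round-robin across paths; failover simply skips unhealthy ones.
        for _ in range(len(self.paths)):
            path = self.paths[self._next % len(self.paths)]
            self._next += 1
            if path.healthy:
                return path
        raise IOError("all paths to LUN %s have failed" % self.wwid)

lun = MultipathLun("lun-wwid-0001", [FcPath("fc0"), FcPath("fc1")])
assert lun.pick_path().name == "fc0"
lun.paths[0].healthy = False          # simulate an FC path failure
assert lun.pick_path().name == "fc1"  # IO fails over to the surviving path
```

The key point the sketch captures is that the volume manager above this layer only ever sees one LUN; path selection and failover happen beneath it.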
But once a specific vendor's path management software is installed, it becomes more difficult to add another vendor's storage systems or network-based virtualization products. IBM's SDD software only works with IBM storage systems or IBM's network-based System Storage SAN Volume Controller (SVC) storage virtualization product. PowerPath supports EMC's Symmetrix and Clariion storage systems, as well as high-end storage from Hewlett-Packard (HP) Co., Hitachi Data Systems (HDS) and IBM.
The limited ability of path management software to work with other storage systems effectively pushes users to adopt EMC's Invista or IBM's SVC to virtualize their environments. Though Invista and SVC somewhat undermine EMC's and IBM's push for companies to buy only their storage systems, organizations may end up selecting their network-based storage virtualization products based on the path management software they already have in place.
The impetus behind EMC's and IBM's network-based storage virtualization and SRM software initiatives isn't entirely about virtualizing and managing other vendors' storage systems. Part of the push to virtualize their own storage systems is to make it easier for users to migrate data to, and place data on, other vendors' storage systems they already own, and then to use EMC's or IBM's SRM software for ongoing end-to-end monitoring and management of the virtualized application, data and storage infrastructure.
According to Jim Rymarczyk, IBM fellow and chief virtualization technologist, approximately 10 years ago most storage infrastructure costs were associated with the acquisition of storage devices; now as much as 70% of those costs are associated with management. While virtualization and SRM software can reduce these management costs, end-to-end management in a heterogeneous storage network is fraught with problems.
The most acute problem virtualization creates in heterogeneous storage networks is the inability to map exactly what application data resides on which specific disk. As each layer of virtualization further abstracts the data from the underlying disk to simplify management, information such as storage system RAID levels or controller configurations (needed to optimize performance and troubleshoot problems) is lost. Accessing this information and mapping it back to the application almost always requires access to the storage systems' APIs.
These problems have led EMC and IBM to develop their respective ControlCenter and TotalStorage Productivity Center (TPC) SRM software suites to integrate most tightly with their own hardware and software products. While the EMC and IBM SRM software supports SMI-S and APIs from other storage systems, Rymarczyk admits that "customers will lose some of their freedom to pick other vendors' storage" if they use IBM's SRM software.
EMC and IBM took significantly different approaches to the architectures of their network-based storage virtualization products. EMC's Invista is a split-path architecture that places the storage virtualization code on a management or control path workstation that resides outside of the FC SAN. Virtualization settings are configured on the control path workstation, which then uploads the code into a cache-less FC switch called the data path controller, such as Brocade's AP7420, or a director blade like the Cisco Systems Inc. MDS 9000 Family Storage Services Module (SSM).
Doc D'Errico, VP and general manager of EMC's infrastructure software group, says this "stateless" approach preserves the intelligence on storage systems, which perform other tasks such as replication and data optimization. If all of the intelligence were removed from the storage system and placed on an appliance, users would lose some of the inherent benefits storage systems provide. Keeping the intelligence on the storage systems also means users avoid a long-term commitment to network-based virtualization. "Users can transition in and out of network-based virtualization more easily using Invista," claims EMC's D'Errico.
The new 2.0 release of Invista addresses some of the deficiencies of the first release. In the first release, the lack of physical redundancy in its Control Path workstations was considered a potential liability. Invista 2.0 creates a Control Path Cluster (CPC) that's physically separated by FC distances; so if CPC workstation 1 fails, CPC workstation 2 can take over. Invista 2.0 also takes advantage of PowerPath's load-balancing features so it can dynamically load balance between the data path controller and back-end storage systems.
Both Invista and SVC require new devices to be inserted into the data path. To install the devices, a storage admin needs to halt application processing, physically rearrange the cables of the FC SAN, and then change FC SAN zoning and storage system LUN masking settings to introduce the network-based virtualization software into the data path.
The absence of cache in EMC's Invista presents other longer term challenges. At a minimum, it will delay, and may preclude, Invista's support of features like asynchronous replication or thin provisioning, because these technologies typically rely on cache. EMC plans to add thin provisioning to Invista in late 2008, but recommends users adopt its RecoverPoint product if they need asynchronous replication between different storage systems.
IBM's SVC more closely resembles a storage system controller because its architecture uses cache. The SVC storage virtualization code resides on Linux servers that are deployed in clustered pairs, which mirror write IOs over FC ports between the caches in each pair. SVC supports up to four clustered pairs of servers in a logical configuration, with each clustered pair operating independently of the others.
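The value of mirroring write IOs between the caches in a clustered pair is that either node can complete (destage) a cached write if its partner fails. A minimal sketch of that idea, under the assumption that a write is acknowledged only once both caches hold a copy (class names are illustrative, not SVC internals):

```python
# Hedged sketch of write-cache mirroring in a clustered pair: a write is
# acknowledged only after it lands in both nodes' caches, so a single node
# failure never loses acknowledged data. Illustrative only, not SVC code.

class CacheNode:
    def __init__(self, name):
        self.name = name
        self.cache = {}   # block address -> cached write data

class ClusteredPair:
    def __init__(self):
        self.node1 = CacheNode("node1")
        self.node2 = CacheNode("node2")

    def write(self, lba, data):
        self.node1.cache[lba] = data
        self.node2.cache[lba] = data   # mirrored copy, sent over FC in real life
        return "ack"                   # acknowledge only after both copies exist

pair = ClusteredPair()
assert pair.write(100, b"payload") == "ack"
# Either node now holds the data, so either can destage it to disk alone.
assert pair.node1.cache[100] == pair.node2.cache[100] == b"payload"
```

A cache-less design such as Invista's avoids this mirroring machinery entirely, which is precisely why it struggles with cache-dependent features like asynchronous replication.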
Chris Saul, IBM's SVC marketing manager, recommends users first go through a capacity-planning exercise before implementing SVC. IBM is aware of congestion problems that can arise if users insert SVC into an existing FC SAN fabric and fail to isolate congestion-causing devices like tape from the SVCs. Sometimes the SVC is inserted in a core-to-edge design that puts excessive strain on inter-switch links (ISLs) which, says Saul, "increases the chances of ISL congestion."
Host-based virtualization offers another route. Symantec Corp.'s Veritas Storage Foundation provides a common way to virtualize, at the host, a heterogeneous environment of storage system- or network-based storage virtualization products. By using Veritas Storage Foundation, companies can virtualize and manage their storage in the same way across all of their platforms without needing to learn operating system-specific volume managers. Users can then manage the applications and storage devices using Symantec's Veritas CommandCentral Storage SRM software while gaining additional management benefits on hosts running Veritas Storage Foundation.
In 2006, Symantec significantly upgraded the Dynamic Multi-pathing (DMP) path management feature in Veritas Storage Foundation 5.0. Prior to Version 5.0, DMP provided only a round-robin path management algorithm; administrators can now choose from seven different path management algorithms. The new default examines the queue length on each FC path, identifies the least-busy path and then sends IO traffic down that path.
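The minimum-queue-length policy described above is simple to state precisely: route the next IO down whichever path currently has the fewest outstanding IOs. A minimal sketch, assuming per-path counts of outstanding IOs are available (the function name and data shape are illustrative, not DMP's internals):

```python
# Sketch of a minimum-queue-length path selection policy like DMP 5.0's
# default: send the next IO down the FC path with the shortest pending
# queue. Illustrative structure only, not Veritas DMP code.

def pick_least_busy(path_queues):
    """path_queues maps path name -> number of outstanding IOs on that path."""
    return min(path_queues, key=path_queues.get)

queues = {"fc0": 12, "fc1": 3, "fc2": 7}
assert pick_least_busy(queues) == "fc1"   # shortest queue wins
```

Unlike plain round-robin, this policy adapts automatically when one path slows down: its queue grows, so new IOs drift to the faster paths.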
DMP 5.0 also improves its error detection and volume discovery by circumventing the OS when managing specific FC paths and communicating directly with the FC HBAs using their APIs. Working with the host bus adapters (HBAs), DMP can identify specific SCSI timeouts or commands issued by storage systems. These are normally received by the FC HBA, but not passed on to the volume managers or OS; however, DMP can spot specific FC path problems or storage system trespass errors, and use alternative paths to access LUNs on back-end storage systems.
DMP can also detect how different storage virtualization products present LUNs to the host. LUNs may be presented by storage controllers in active-active (A/A) or active-passive (A/P) states, which affects how Veritas Storage Foundation's Volume Manager treats them. A/A LUNs are simpler to manage because if the LUN is unavailable on one path, Volume Manager can simply try accessing the LUN on an alternative path.
Conversely, A/P LUNs are assigned to and managed by a specific storage system controller; if that controller becomes unavailable, however, it's not as simple as switching to another path as trespass errors on the storage system can occur. By monitoring FC commands issued by storage system-based or network-based virtualization and received by the FC HBA, DMP can ascertain which alternate path to use to access a LUN without causing a storage system trespass error.
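The A/A versus A/P distinction drawn in the last two paragraphs can be made concrete with a small sketch: after a path failure, an A/A LUN can use any surviving path, while an A/P LUN should prefer surviving paths to its owning controller so the storage system isn't forced into a trespass. This is an illustrative model only; the function and field names are hypothetical, not Veritas DMP internals.

```python
# Hedged sketch of failover logic for active-active (A/A) vs.
# active-passive (A/P) LUNs. For A/P, IO must stay on paths to the owning
# controller where possible; moving it elsewhere triggers a trespass.

def failover_paths(lun_mode, paths, owner_controller):
    """Return the paths still usable after failures.
    paths: list of (path_name, controller, healthy) tuples."""
    healthy = [p for p in paths if p[2]]
    if lun_mode == "A/A":
        return healthy                       # any surviving path will do
    # A/P: prefer healthy paths to the current owner to avoid a trespass;
    # only if none remain must LUN ownership move to the other controller.
    same_owner = [p for p in healthy if p[1] == owner_controller]
    return same_owner or healthy

paths = [("fc0", "ctrlA", False),   # failed path to the owning controller
         ("fc1", "ctrlA", True),
         ("fc2", "ctrlB", True)]
assert failover_paths("A/P", paths, "ctrlA") == [("fc1", "ctrlA", True)]
assert failover_paths("A/A", paths, "ctrlA") == [("fc1", "ctrlA", True),
                                                 ("fc2", "ctrlB", True)]
```

The sketch shows why DMP's monitoring of controller ownership matters: without it, a naive failover to fc2 would trespass the LUN even though fc1 was still available.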
The growth of server virtualization also drove Symantec to port the Veritas Storage Foundation's Veritas Volume Manager (VxVM) directly to the hypervisor level starting with the resource manager on Sun Microsystems Inc. Solaris Logical Domains (LDoms) (see "Improving virtual server backups," next page). By default, the native Sun Solaris LDom resource manager virtualizes the storage volumes and FC HBAs presented to it, which it then re-presents as virtual volumes and HBAs to its virtual hosts.
Symantec's Veritas Storage Foundation Manager, used in conjunction with VxVM, lets administrators manage up to 3,000 hosts from a single Web console, including hosts like the Sun LDom. Also, when using Veritas Storage Foundation with Veritas CommandCentral, storage administrators can map what virtual resources are assigned to what physical resources.
At this point Symantec still has no official timeline as to when integration with VMware's ESX Server will be complete; Sean Derrington, Symantec's director of storage management, says Symantec is still waiting for access to VMware ESX Server's APIs before it can port its VxVM to the ESX hypervisor. Once Symantec has the ESX APIs, "we will be there," says Derrington.
HP has an engineering and OEM relationship with HDS in which the two firms jointly develop the XP and USP V families. Inputs, suggestions, fixes and contributions received from HP are incorporated into a single version of HDS firmware that's then released into all versions of the XP/USP V products. The only differences between the software on the XP and USP V are ID strings embedded into the firmware. "HP uses these IDs to create disaster recovery solutions that work exclusively with the XP," says James Wilson, HP's XP product manager.
Hu Yoshida, CTO at HDS, says his firm has specifically chosen to stay out of the network fabric because it's trying to satisfy specific customer high-availability and performance requirements, and to deliver functions that network-based appliances can't. Unlike network-based appliances, which have a fixed amount of cache (or none at all) and lose 50% of their performance should an appliance fail, the USP V can create a cache that spans multiple controllers and processors.
"A fully configured USP V can lose up to four processors and still access the common cache without data loss," says Yoshida.
HDS, HP, NetApp and Sun offer one set of SRM products to manage their own storage systems and another set of SRM tools to manage heterogeneous SAN environments. HP breaks its SRM software into three categories: unified storage and server management, element management and performance management. If you only need to manage storage products from HP, then HP's element management and performance management categories will likely meet your requirements. Companies that intend to manage a heterogeneous storage environment, however, will need to introduce HP's Storage Essentials product to provide this level of server and storage management.
Vendors are moving closer toward tying their storage virtualization and SRM software together. But because of the time and effort required to implement virtualization and complementary SRM software, you should deploy virtualization products gradually while keeping expectations at a modest level.