Today, few consolidation initiatives are complete without virtualization -- the use of software to create a layer of abstraction between hardware and applications. Virtualization allows organizations to handle more work with less equipment. With virtualization, a physical server can be partitioned into multiple logical servers to make better use of available CPU, memory and I/O resources.
Virtualization can also pool storage from across the data center, allowing disks from diverse storage platforms to appear as a single storage resource. This storage resource can then be allocated, provisioned, migrated, replicated and backed up without regard to where it is physically located. Not only does virtualization make storage easier to track and manage, it also makes better use of available storage, forestalling the expense of unnecessary disk or storage platform purchases.
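The pooling idea can be sketched with a toy model (illustrative only; the class and method names here are hypothetical, not any vendor's API): disks from different platforms are registered into one pool, and logical volumes are carved from the pool's total capacity without the caller knowing which physical device backs them.

```python
# Toy model of a virtualized storage pool (illustrative only).
class StoragePool:
    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB
        self.volumes = {}   # volume name -> (backing device, size in GB)

    def add_device(self, name, capacity_gb):
        """Register a physical array or disk shelf into the pool."""
        self.devices[name] = capacity_gb

    def free_capacity(self):
        """Capacity is reported for the pool as a whole."""
        return sum(self.devices.values())

    def provision(self, volume, size_gb):
        """Allocate a logical volume; the caller never picks the device."""
        for name, free in self.devices.items():
            if free >= size_gb:
                self.devices[name] = free - size_gb
                self.volumes[volume] = (name, size_gb)
                return volume
        raise RuntimeError("pool exhausted")

# Disks from diverse platforms appear as a single storage resource.
pool = StoragePool()
pool.add_device("array-A", 500)   # one vendor's array
pool.add_device("array-B", 300)   # a different vendor's array
pool.provision("db-vol", 450)
print(pool.free_capacity())       # 350 GB free across the whole pool
```

A real product also handles migration, replication and backup against the same abstraction, which is why physical location stops mattering to the administrator.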
Still, virtualization has its tradeoffs.
- Organizations need to contend with another layer of software.
- The software needs to be compatible across the infrastructure and maintained as patches and updates are released.
- Virtualization must scale without impairing performance.
- Storage practices such as backups and replication will need to be modified to accommodate a virtualized environment.
SearchStorage.com has already covered the issues involved in purchasing strategies for a storage consolidation project. Here is a list of the criteria for purchasing virtualization software from vendors such as DataCore Software Corp., EMC Corp., IBM, Symantec Corp., VMware and others.

Where will the software be running? Storage virtualization can be implemented with host-based, array-based or fabric-based products. Host-based virtualization runs software, such as Veritas Storage Foundation from Symantec Corp., installed on host servers. Host-based products are the least costly and easiest to deploy, but they are the least scalable. Dedicated appliances, such as the Model 765 Intelligent Services Platform from Emulex Corp., offer hardware acceleration for network-based virtualization software.
Fabric-based virtualization runs software on intelligent switch devices such as Cisco directors. Fabric-based storage virtualization typically promises the greatest level of heterogeneity and scalability, but may require a new switch in the infrastructure.
Array-based virtualization integrates the technology into the storage array itself, such as a TagmaStore array from Hitachi Data Systems. However, array-based virtualization typically uses software from the storage vendor, and is generally not heterogeneous across different storage systems.

Interoperability with your current infrastructure. Interoperability is a critical consideration for virtualization technology. A virtualization product should accommodate all of your existing storage hardware, as well as meet the demands of future storage systems. Storage systems that the virtualization product does not support will often remain in service, relegated to secondary storage tasks. Unfortunately, non-virtualized storage "islands" tend to fall into disuse, wasting the very capacity that virtualization is meant to organize. Vendor support matrices are a good place to start evaluating interoperability, but in-house testing is also worthwhile.
What supports the virtualization software? To support the virtualization software itself, you'll need host device drivers, path managers, agents and shims. IT staffers can become bogged down patching and updating a proliferation of storage virtualization servers when hardware is replaced or new versions become available. Not paying enough attention to maintenance can result in version disparity, leading to instability and performance problems. Evaluate any storage virtualization product from a management and maintenance perspective, and determine if the problems that it solves outweigh the new issues that it introduces.
How scalable is the virtualization layer? A virtualization product can only manage a finite amount of storage, and performance may suffer as the managed capacity grows. You should understand the tradeoff between scale and performance, especially because many virtualization initiatives begin as test or pilot deployments before being rolled out across the enterprise. Scaling issues may not appear until later in the deployment cycle. Consider scaling requirements from the start to help rule out products that cannot keep up.
How will your storage processes change? The goal of storage virtualization is to consolidate a variety of storage resources into a single unified pool. This will invariably change the way that storage is organized, provisioned, migrated and protected. For example, a virtualization product may provide automatic provisioning, which may be a significant change for the IT organization. Storage administrators will also need to change backup or replication targets once virtualization is in place. This is another area where lab testing and vendor support can prevent problems before they start.
Deploy storage virtualization in phases. Deploying virtualization across the entire enterprise at once is risky. Perform a thorough lab evaluation of any storage virtualization product up-front. This should include a review of decommissioning drills. Once you decide what to purchase, you can start implementation on a small scale before building out the virtualization systematically. This conservative approach gives administrators ample time to get accustomed to virtualization management, and prevents unforeseen problems from crippling an entire data center.
Examine resource management features. Storage virtualization products incorporate a growing range of resource monitoring and management features. For example, a storage virtualization tool can see every storage I/O, allowing the tool to track disk use, to view performance and to monitor path configuration. Few virtualization products offer the range and sophistication of features found in full SRM packages, but users can gain insights into resource management and automation without committing to yet another piece of sophisticated software.
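Because the virtualization layer sits in the I/O path, per-volume accounting falls out almost for free. A minimal sketch of that kind of tracking (class and method names are hypothetical, not any SRM product's API; real tools expose far richer metrics):

```python
from collections import defaultdict

# Sketch of per-volume I/O accounting inside a virtualization layer
# (hypothetical example; names are not from any real product).
class IOMonitor:
    def __init__(self):
        self.reads = defaultdict(int)
        self.writes = defaultdict(int)
        self.bytes_moved = defaultdict(int)

    def record(self, volume, op, nbytes):
        """Called for every I/O the virtualization layer forwards."""
        if op == "read":
            self.reads[volume] += 1
        else:
            self.writes[volume] += 1
        self.bytes_moved[volume] += nbytes

    def busiest(self):
        """Volume with the most traffic -- useful for spotting hot spots."""
        return max(self.bytes_moved, key=self.bytes_moved.get)

mon = IOMonitor()
mon.record("db-vol", "read", 4096)
mon.record("db-vol", "write", 8192)
mon.record("log-vol", "write", 512)
print(mon.busiest())   # db-vol
```

Seeing every I/O is exactly what lets such a tool report disk use, performance and path configuration without separate agents on each array.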
Can one undo virtualization? Performance issues, scalability limitations and interoperability problems are all reasons for decommissioning a storage virtualization product. An organization may also decide to discontinue one product in favor of a better one. Unfortunately, there is no simple way to undo virtualization once it's been deployed. Eliminating a problematic virtualization deployment is disruptive and time-consuming. Discuss any back-out options with the vendor before committing to a virtualization product.
The following cross-section of storage virtualization offerings was selected based on input from industry analysts and SearchStorage.com editors. The specifications, which were provided by the vendors, are current as of January 2008 and are updated periodically. Vendors are welcome to submit their updates and new product specifications to SearchStorage.com editors. The virtualization software product specifications page in this chapter covers the following products: