One of the most confusing topics in IT today is storage virtualization. Even though the concept has been around for years in forms such as virtual memory in operating systems, virtual reality, tape virtualization and virtualization using logical volume managers, users continue to be perplexed by the many vendor announcements of "new" virtualization products. At the root of the confusion is the fact that vendors have defined the concept of virtualization in different ways to sell their own products, and then hyped the concept with religious fervor.
"In the last year or so, virtualization has been the most abused and misunderstood word in the computer field," says Thomas Brosnan, director of data center planning at Convergys, a Lake Mary, FL-based billing and customer care company. Brosnan has been using virtualization for several years to manage Convergys' 278TB of storage.
The confusion is compounded by vendors who proclaim that their virtualization solutions are "true virtualization," in the same way that religious zealots refer to their god as the only "true god," implying that other people's gods are "false gods." In addition to these religious wars, some vendors represent their virtualization solution as a new, free-standing technology instead of what it really is: an enabling technology for the management of heterogeneous storage environments. No wonder there's a cynical attitude toward virtualization (see "Virtualization has a credibility problem").
One reason virtualization has become such a confusing term is that the software can be deployed in so many places. Virtualization software can reside in the host with direct-attached storage, or in several places in a storage area network (SAN): in the host server, in the storage array, in intelligent switches, or on appliances (PC-like devices) that sit either in the data path - in which case they are called in-band or symmetric - or outside the data path - in which case they are called out-of-band or asymmetric.
Obviously, each vendor says its approach is best. Nick Allen, vice president of storage research at Gartner, a Stamford, CT-based consultancy, disagrees. "There is not a best way to do virtualization," he says.
The host-based approach
Host-based virtualization, as implemented through logical volume managers, has been around for more than ten years. Veritas Software Corp., Mountain View, CA, introduced its Volume Manager 13 years ago. The Volume Manager software can be implemented on a single host or in a cluster, in a Unix or a Windows environment. In either case, Volume Manager gives users the ability to virtualize multiple storage arrays - that is, to logically aggregate several physical disks into a single pool.
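The core job of any logical volume manager can be reduced to one idea: present several physical disks as one large virtual volume, and translate each virtual block address into a (physical disk, offset) pair at I/O time. The following minimal Python sketch illustrates that translation under assumed class names - it is not Veritas' implementation, just the concept:

```python
# Illustrative sketch of logical-volume pooling (hypothetical names,
# not Veritas Volume Manager's actual design): several physical disks
# are concatenated into one virtual volume, and virtual block numbers
# are translated to (disk, offset) pairs.

class PhysicalDisk:
    def __init__(self, name, blocks):
        self.name = name
        self.blocks = blocks          # capacity in blocks

class LogicalVolume:
    """A virtual volume concatenated from extents on several disks."""
    def __init__(self, disks):
        self.extents = []             # (starting virtual block, disk)
        start = 0
        for d in disks:
            self.extents.append((start, d))
            start += d.blocks
        self.size = start             # total pooled capacity

    def map_block(self, vblock):
        """Translate a virtual block number to (disk name, physical offset)."""
        if not 0 <= vblock < self.size:
            raise ValueError("block out of range")
        for start, disk in reversed(self.extents):
            if vblock >= start:
                return disk.name, vblock - start

# Two physical disks appear to the host as one 3000-block volume.
pool = LogicalVolume([PhysicalDisk("disk0", 1000), PhysicalDisk("disk1", 2000)])
print(pool.size)             # 3000
print(pool.map_block(1500))  # ('disk1', 500)
```

Because the host only ever sees the virtual addresses, the mapping underneath can be changed - disks added, data migrated - without applications noticing, which is what makes the online reconfiguration described below possible.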
Veritas' Volume Manager can be used in direct-attached configurations and in SANs. It permits online reconfiguration. For example, a storage manager can move from direct-attached storage to a SAN during production, without disrupting end users.
In addition to Volume Manager for virtualization, Veritas has a broad line of storage management applications and a file system (VxFS), and has implemented many of these solutions so that Volume Manager becomes an enabler, permitting storage management in a heterogeneous environment. And because Veritas' virtualization solution is software-only, it can be deployed on a variety of storage devices including switches, storage routers, storage arrays and appliances - both in-band and out-of-band - providing block-level and file-level virtualization.
Michael Wojtowicz, manager of systems engineering at Entertainment Partners, a global service company located in Los Angeles, CA, which provides employment compensation and production services to the entertainment industry, has been using Veritas' virtualization software since 1996. When Wojtowicz moved from direct-attached storage to a SAN in November 2001, the virtualization software saved the day.
"Without Veritas' virtualization software, we would never have been able to install the SAN while production was in progress without our users being aware that the change was being made," he says.
The storage array approach
EMC Corp., Hopkinton, MA, is a leading supplier of virtualization solutions implemented in the storage array. The company maintains that this approach is still one of the best means of implementing virtualization, according to Chuck Hollis, vice president of product marketing. "For many years, with our Symmetrix and Clariion products, we've always thought that the basic ability to take all of the physical storage, carve it up into logical units [LUNs] and freely assign it to attached hosts, should reside in the storage array," Hollis says.
WideSky, a new middleware layer, is a key part of EMC's AutoIS management initiative. Although virtualization concepts play a role in that effort, virtualization is an enabler - not a major component of the initiative.
WideSky, which EMC will make available to partners and competitors that agree to participate in the program, will translate commands issued by applications written to it. Specifically, the WideSky program requires a mutual exchange of APIs between EMC and participating vendors. WideSky has been favorably received by several industry analysts and consultants because they see it as a serious commitment by EMC to open storage management. Michael Hogan, general manager of Imation's professional services organization, an independent storage consultancy, sees this move as a radical departure from what EMC has done in the past. "EMC always talks about being a software company, but this is as close as we've seen the company come to acting like one."
Other vendors that are proponents of placing virtualization software in the storage array include Hitachi, Compaq, IBM, Sun Microsystems, Xiotech and Hewlett-Packard.
The in-band (symmetric) approach
Ft. Lauderdale, FL-based DataCore Software offers virtualization software that can be used on an in-band appliance. It differs from Veritas in its architecture - DataCore uses a storage domain server, which the company defines as "a commercial server platform [Windows NT running on an Intel platform] dedicated to the virtualization and allocation of storage to the hosts."
DataCore's SANsymphony 5.0 supplies virtualization to heterogeneous hosts and simultaneously permits users to virtualize any storage array within its SAN. DataCore claims that its GUI provides ease-of-use that differentiates SANsymphony 5.0 from competitive products. For example, the company points out that an administrator can create virtual disks simply with a drag-and-drop motion. DataCore partners with IBM and Fujitsu Softek in the virtualization market.
HP's SANlink was the first virtualization solution to be implemented on an in-band appliance. The SANlink appliance - an Intel-based PC - resides in the data path between the server and storage devices. SANlink runs SAN OS - an appliance OS - and several applications, including data mirroring and point-in-time copy. Additionally, SANlink provides security through LUN masking/mapping among different types of arrays. SANlink has the largest market share among virtualization appliances: it was the first such device in the marketplace and gained acceptance almost immediately after its introduction more than two years ago.
FalconStor Software Inc., Melville, NY, offers a software version of an in-band appliance that works on both IP SANs and Fibre Channel SANs. It does an excellent job of pooling, according to Imation.
The out-of-band (asymmetric) approach
Irvine, CA-based StoreAge offers the only out-of-band (asymmetric) appliance that is shipping today. The heart of the StoreAge solution is the Storage Virtualization Manager (SVM), which runs on an appliance connected to the SAN fabric. The SVM provides the mapping function for virtualization, working with an intelligent agent residing in the host server. Host servers can run Windows NT, Windows 2000, Sun Solaris, Linux, HP-UX and AIX. The agent retrieves volume metadata from the SVM and permits the server to communicate directly with the storage hardware for read/write operations.
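The essence of the out-of-band pattern just described is that the appliance handles only metadata - the control path - while data flows straight from host to storage. The hypothetical sketch below (class and method names are assumptions, not StoreAge's API) shows the split: the agent fetches the volume map once, then resolves every I/O locally without touching the appliance.

```python
# Hedged sketch of out-of-band (asymmetric) virtualization: the
# appliance owns the volume map; a host-side agent caches it and
# translates I/O locally, so the appliance never sits in the data path.
# All names here are illustrative assumptions.

class VirtualizationManager:
    """Out-of-band appliance: serves volume metadata, never moves data."""
    def __init__(self):
        # volume -> list of (virtual_start, lun, physical_start, length)
        self.maps = {"vol1": [(0, "lun_a", 0, 100), (100, "lun_b", 0, 100)]}

    def get_map(self, volume):
        return self.maps[volume]      # control-path request only

class HostAgent:
    """Host-resident agent: caches the map and resolves I/O itself."""
    def __init__(self, manager, volume):
        self.extents = manager.get_map(volume)   # one metadata fetch

    def resolve(self, vblock):
        # Data path: translate the block and go directly to the LUN;
        # the appliance is not consulted, so it adds no I/O latency.
        for vstart, lun, pstart, length in self.extents:
            if vstart <= vblock < vstart + length:
                return lun, pstart + (vblock - vstart)
        raise ValueError("unmapped block")

agent = HostAgent(VirtualizationManager(), "vol1")
print(agent.resolve(150))   # ('lun_b', 50)
```

The trade-off is visible in the sketch: the data path stays clean, but every host needs an agent, and the agent's cached map must be refreshed whenever the appliance changes the layout.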
Compaq also has an innovative implementation of storage virtualization, called the VersaStor Executor, although it isn't shipping yet. VersaStor uses intelligent agent technology to virtualize storage. The agent, called a vector, resides in the host, while the mapping information permanently resides in the VersaStor Executor software on the appliance. When the virtualization process starts, mapping information is uploaded into the vector, where it's cached; the Executor then sends virtualization commands to the vector, which executes them. VersaStor uses asymmetric virtualization, meaning the appliance isn't in the data path of the SAN. Compaq claims this approach eliminates any single point of failure and doesn't introduce latency into the data path.
In contrast to some other implementations, the VersaStor Executor uses policy management to simplify administration and guarantee Quality of Service (QoS). Compaq implements policy management by allowing users to specify attributes such as capacity, server availability, data availability, RAID protection level, performance characteristics and storage pool hierarchy location. Then during the virtualization process, the VersaStor Executor software automatically distributes the physical blocks based on the specified attribute information. The VersaStor Executor is presently in the alpha test stage of development and will be introduced later in 2002.
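Policy-based placement of this kind boils down to matching a request's stated attributes against the pools that can satisfy them. The sketch below illustrates the idea with invented pool names and a deliberately small attribute set (RAID level, capacity, tier) - it is not Compaq's VersaStor interface:

```python
# Illustrative sketch of policy-based storage placement (pool names and
# attributes are assumptions, not VersaStor's actual policy model): the
# administrator states requirements; the software picks a pool that
# satisfies them, starting from the highest tier.

pools = [
    {"name": "fast_mirrored", "raid": "RAID-1", "free_gb": 200, "tier": 1},
    {"name": "bulk_parity",   "raid": "RAID-5", "free_gb": 900, "tier": 2},
]

def place_volume(capacity_gb, raid_level):
    """Return the first pool that satisfies the requested policy."""
    for pool in sorted(pools, key=lambda p: p["tier"]):
        if pool["raid"] == raid_level and pool["free_gb"] >= capacity_gb:
            pool["free_gb"] -= capacity_gb   # reserve the capacity
            return pool["name"]
    raise RuntimeError("no pool satisfies the policy")

# A 500GB RAID-5 request lands on the parity pool automatically.
print(place_volume(500, "RAID-5"))   # 'bulk_parity'
```

The point of the technique is that the administrator never names a pool: the policy engine translates attributes into a placement decision, which is what lets it guarantee the QoS characteristics the user asked for.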
The storage router
Vicom Systems, Fremont, CA, offers a different implementation - it uses a router as the hardware platform rather than a PC or a server. Vicom claims that its Storage Virtualization Engine (SVE) gives users greater scalability and better performance than engines implemented on PCs or server-based appliances.
Vicom's SV Router is a processing platform that is distributed throughout a SAN between an individual server and multiple storage devices. The intelligence of the virtualization system, in the form of firmware, resides in the router. Sun Microsystems is using Vicom's SV router in its StorEdge 6900 to provide virtualization.
Waiting in the wings
IBM is currently working on solutions for block-level and file-level virtualization, according to Mike Zisman, director of the software business unit of the software systems group. The file-level solution, called Storage Tank, is a few months away from release - the block-level version is a little further down the line. IBM says it will handle the virtualization solutions and will look to its Tivoli subsidiary and other partners for storage applications that will be integrated with the virtualization tools. The block-level solution will be an in-band appliance packaged as part of a SAN operating system, according to Zisman.
Another interesting offering comes from TrueSAN: Cloudbreak, a suite of storage management software products that implements virtualization as a symmetric solution and features policy-based management. Cloudbreak is in the beta phase of development.
The future of virtualization
The consensus among storage networking mavens is that the next step in the evolution of virtualization will be to move the storage virtualization software onto an intelligent switch. The early players in the area will be San Jose, CA-based Brocade Communications, with its Fibre Channel switches; Pirus Networks, Acton, MA, with its multiprotocol PSX-1000 switch; and Veritas, which has been working closely with Brocade in this area. The advantages claimed for this approach are better security, performance that's far superior to other approaches - symmetric and asymmetric - and a better ability to collect management data to monitor the health of the SAN (see "SAN switches get smarter").
Another player is McData, Broomfield, CO, with its fabric virtualization approach. McData defines this technique as "virtualizing the fabric," as opposed to the physical storage, according to Brendan Hoff, senior manager of strategic marketing. More specifically, "it is the ability to perform route management, performance-based auto-provisioning and QoS services on-the-fly in a networked storage environment."

Still confused? It's OK to admit it. Virtualization is a very confusing concept. But help is on the way: vendors are starting to get the message and tone down their virtualization hype. They are beginning to talk about how virtualization can help solve a user's storage management problems rather than about the technique itself. Stay tuned.