

Servers meet storage, virtually

Data centers are being reshaped around virtualization technology. Here's how different virtual server technologies work and how they'll integrate with and affect SANs.

Forward-thinking IT professionals are deploying virtual server technologies in droves. On Intel platforms, virtual machine software from VMware Inc., an EMC Corp. company in Palo Alto, CA, is spreading like wildfire. With sales of approximately $100 million last year, VMware recently announced that it now has 2.5 million users of its software and 5,500 enterprise server customers worldwide. Microsoft Corp.'s Virtual Server is expected to see wide adoption next year when it's released, and the virtual machine market should reach approximately $800 million in short order, predicts Thomas Bittman, vice president and distinguished analyst at Gartner Inc.

Closer to home, in a survey of Storage magazine readers last month, a whopping 80.3% indicated that they were using, evaluating or planning to evaluate some form of virtual server technology. (See "Snapshot: Are you investigating virtual server technology?")

What do virtual machines have to do with storage? Quite a bit, as many early adopters have discovered. The kind of storage you use can impact the overall success of your virtual server environment. Conversely, virtual servers can help ease your storage management burden, and help you get more out of existing storage resources. They also can simplify disaster recovery greatly. But until storage management software catches up to the virtual server world, you may face some obstacles with routine storage management tasks.

Virtual servers--or virtual machines--are not new. Originally developed for the mainframe, the technology is essentially a way to abstract a server's physical compute and I/O resources, and run several operating systems at the same time. Thus, a single physical server, instead of running a single instance of an operating system, runs a control function on which you load multiple "guest" operating systems. With virtual machines, you can consolidate several small physical servers into a single large server, reducing the number of machines--and operating system licenses--you need to buy, install and maintain. With virtual machine technology, it's also quicker to deploy a new server, a boon for test and development environments or data centers that grapple with dynamic, fluctuating workloads.
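The capacity math behind that consolidation argument can be pictured with a toy model. This is an illustrative sketch only; the utilization figures and headroom parameter are hypothetical, not numbers from the article:

```python
# Toy consolidation model: how many physical hosts are needed to run
# a set of lightly loaded servers as virtual machines?
# All numbers here are hypothetical, for illustration only.

def hosts_needed(vm_cpu_loads, host_capacity=1.0, headroom=0.2):
    """Greedy first-fit packing of VM CPU loads onto hosts.

    vm_cpu_loads  -- fraction of one host's CPU each VM needs
    host_capacity -- usable capacity per host (1.0 = 100%)
    headroom      -- capacity reserved per host for load spikes
    """
    usable = host_capacity - headroom
    hosts = []  # remaining free capacity per host
    for load in sorted(vm_cpu_loads, reverse=True):
        for i, free in enumerate(hosts):
            if load <= free:
                hosts[i] = free - load
                break
        else:
            hosts.append(usable - load)  # open a new host
    return len(hosts)

# Ten servers averaging 15% CPU each fit on two hosts instead of ten.
print(hosts_needed([0.15] * 10))  # → 2
```

The same greedy packing is, informally, what administrators do when they decide which guests share a physical host.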

Virtual machine capabilities are available in some form or other for all the major server platforms: Logical Partitioning (LPAR) for IBM Corp.'s iSeries and pSeries servers; Sun Microsystems Inc.'s Dynamic System Domains; and Hewlett-Packard Co.'s (HP) vPars for HP-UX. On Intel Corp.'s x86 platform, VMware leads the pack. The technology's appeal lies in improving management, reducing server footprint and maximizing resources. "It's a consolidation play, and it's wildly popular," says John Webster, founder and senior analyst at Data Mobility Group, an analyst firm in Nashua, NH.

It's no surprise, therefore, to find that many of the vendors promoting virtual server technology have certified configurations that include storage area network (SAN) technology. For example, among its virtual server offerings, Dell Computer Corp. sells a four-processor PowerEdge 6650 server with VMware's ESX Server and a Dell/EMC Clariion SAN. "It's not a requirement, but the marriage of the two makes a lot of sense," says Subo Guha, Dell's director of software product marketing. "It provides for the growth and allows them to consolidate their storage data."

One joint Dell/VMware customer is Oak Associates, a financial services firm in Akron, OH. Last year, the firm embarked on a server migration project, moving away from its aging HP/Compaq DL380s to two-processor Dell 2650s. Of the firm's 110 servers, 80 are virtual machines, with an average of eight virtual machines per physical server.

Oak Associates is running a combination of VMware's GSX Server, which runs on a Windows kernel, and the higher-performing ESX Server, which provides its own Linux-based kernel, across 16 physical servers. On them, the firm runs Microsoft Windows and a variety of applications, ranging from its accounting software to DHCP and domain controllers. In fact, the only applications not hosted by VMware are those that need all the CPU power they can get, such as Microsoft Exchange, says Scott Hill, senior technology officer with the firm.

Better disaster recovery
Of Oak Associates' 16 physical VMware hosts, all but three are connected to the SAN, a decision that has greatly improved the firm's disaster recovery capabilities, Hill says. Its SAN array, an EMC Symmetrix 8830, houses the VM images and is mirrored to a remote disaster recovery site using Symmetrix Remote Data Facility (SRDF). Because the servers boot over the SAN, "when we go into DR mode," Hill explains, "all we do is attach to those [virtual] disks and away we go."

The VMware/SAN combo has also allowed Oak Associates to recoup some of the hardware costs of implementing DR. "Before VMware, we used to have identical hardware at the remote site," Hill says. But now that a server is "just a file" that can run on any Intel hardware, Oak Associates was able to recycle its old Compaq servers for use at the disaster recovery site.

In a similar vein, running applications within virtual machines allows Esmond Kane, a systems administrator at an Ivy League university, to run Apache Tomcat, a free, open-source Java servlet container "that falls down a lot." When the application fails, Kane simply starts up another virtual machine instance preconfigured for Tomcat, without having to reboot the entire system. He is running VMware GSX Server on local disk, and is evaluating EMC Clariion disk arrays.

Easier server provisioning
The combination of virtual machine software and a SAN can also dramatically reduce the amount of time it takes to do day-to-day server provisioning. Michael Thomas, a lead infrastructure architect for a federal agency, runs VMware ESX Server on a mix of approximately 80 one- and two-way IBM and HP blades. Thomas serves up many different applications: infrastructure applications such as domain controllers and DHCP servers, as well as SQL Server and application servers.

Thomas' blades are connected to arrays from a variety of SAN vendors--EMC, HP, Hitachi Data Systems Inc. (HDS) and Snap Appliance's new Snap Server 15000. Cristie Data Products' Cristie Bare Machine Recovery is used to provision a new server and assign virtual machines. Adding a new virtual machine "is all scripted, and with Cristie, it's pretty easy at this point," Thomas reports. The amount of time it takes to provision a new server "depends on the amount of data loaded," but Thomas puts the worst case at well under an hour.

Simpler SAN management
Virtual machines break the "one application, one server" mentality and let you collapse multiple physical servers into one, says Rob Peglar, CTO at SAN array vendor Xiotech Corp. As a result, the number of server-to-storage connections an administrator needs to manage goes down. When 10 physical servers are collapsed into a single large one, "instead of having ten different zones to configure and manage, you have one," says Peglar.

Having fewer physical connections to manage has economic ramifications too. With fewer servers, you have fewer host bus adapters (HBAs) and switch ports to purchase and manage. And those savings can be applied to putting servers on the SAN that previously could not be cost-justified.
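A back-of-the-envelope calculation makes the economics concrete. The per-unit prices below are hypothetical placeholders, not figures from the article:

```python
# Back-of-the-envelope SAN connectivity savings from server
# consolidation. Prices are hypothetical, for illustration only.

def connection_savings(physical_before, physical_after,
                       hbas_per_server=2, hba_cost=1000, port_cost=1500):
    """Cost of HBAs and switch ports saved by reducing server count.

    Assumes each server uses hbas_per_server HBAs for redundancy and
    that each HBA consumes one switch port.
    """
    servers_removed = physical_before - physical_after
    hbas_saved = servers_removed * hbas_per_server
    return hbas_saved * (hba_cost + port_cost)

# Collapsing 10 dual-pathed servers into 1 frees 18 HBAs and 18 ports:
print(connection_savings(10, 1))  # → 45000
```

That freed-up budget is what lets previously unconnected servers join the SAN.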

Oak Associates, for one, has seen utilization of the firm's expensive Symmetrix skyrocket since VMware was put in place. "EMC told us that we would get server consolidation benefits out of our SAN, but we never really saw that," says Hill. "Now, with VMware, we are."

Mike Karp, senior analyst with Enterprise Management Associates, an analyst firm in Boulder, CO, thinks you can take virtualization one step further, and use virtualized storage for your virtual servers. "It makes sense. If you're doing one, why not try the other?" With virtualized storage, like virtual servers, "you can squeeze every last nickel of value out of the hardware," Karp says.

Sans SANs
In most VMware environments, SANs are not required, says Raghu Raghuram, VMware's director of product management, although, "we do recommend a SAN for VMotion," which is software that lets you move a virtual machine to a different physical host while it is running. In that case, a SAN allows you "to take individual servers and turn them back into the compute pool."

VMware's indifference to whether or not a SAN is present has a lot to do with VMware ESX Server File System (VMFS), the clustered file system that's part of ESX Server. What VMFS does, Raghuram explains, is "provide a level of indirection between the virtual machines and the actual storage." From a virtual machine's perspective, "all you see is a SCSI disk," while VMFS "takes care of communicating with the SAN" or the local storage.
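The indirection Raghuram describes can be modeled with a minimal sketch: the guest reads and writes what it believes is a plain SCSI disk, while a mapping layer resolves each block to whatever backing store actually holds it. This is an illustrative model only, not VMware's actual VMFS design or API; all names are hypothetical:

```python
# Illustrative model of a file-system indirection layer between a
# virtual machine's "SCSI disk" and its backing storage. This is a
# simplified sketch, not VMware's real VMFS implementation.

class BackingStore:
    """Stand-in for a SAN LUN or a local disk partition."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def read(self, block):
        return self.blocks.get(block, b"\x00")

    def write(self, block, data):
        self.blocks[block] = data

class VirtualDisk:
    """What the guest OS sees: an ordinary block device.

    The guest addresses blocks 0..n; the indirection layer maps each
    guest block to a (backing store, physical block) pair, so the VM
    image can live on a SAN or local storage without the guest noticing.
    """
    def __init__(self, mapping):
        self.mapping = mapping  # guest block -> (store, phys block)

    def read(self, block):
        store, phys = self.mapping[block]
        return store.read(phys)

    def write(self, block, data):
        store, phys = self.mapping[block]
        store.write(phys, data)

san = BackingStore("symmetrix-lun-7")   # hypothetical device names
local = BackingStore("local-scsi-0")
disk = VirtualDisk({0: (san, 42), 1: (local, 7)})

disk.write(0, b"boot")
print(disk.read(0))  # the guest never knows block 0 lives on the SAN
```

Because the mapping, not the guest, decides where blocks land, the same virtual disk works whether the backing store is a SAN array or a local drive.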

IBM's view of virtual machines
That's a very different approach from IBM's, where virtual machine technology is architected as a core function of the CPU rather than a kernel-level software layer, and where the file system is a separate component. Virtual machines require the CPU's support for what IBM calls micro-partitioning, or dynamic partitioning: A processor can be divvied up in equal increments and assigned to work for a virtual machine.

That's in contrast to logical partitions (LPARs), as they exist on iSeries and pSeries servers, as well as Sun Solaris and HP-UX. With LPARs, it's possible to partition a symmetric multiprocessor (SMP) server's processors "logically" along processor lines. For example, with LPARs, a four-way server can have four logical partitions, explains Jeff Barnett, IBM manager of market strategy for storage software.

This spring, IBM announced its Virtualization Engine (VE) technology, which borrows the mainframe's micro-partitioning concepts and carries them over to the POWER5 chip, which will ship this year in new iSeries and pSeries servers. Virtual machine technology built on top of VE can run on 10% increments of a processor, and virtual machines can span processor resources. In other words, a four-way server could conceivably host 40 virtual machines, and a single virtual machine can consume resources from more than one processor at a time.
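The arithmetic behind that claim is simple to verify. A sketch using the 10%-of-a-processor granularity cited above (the slice size is the only figure taken from the article):

```python
# Capacity arithmetic for micro-partitioning: if a virtual machine can
# be as small as one-tenth of a processor, a server's VM ceiling is
# just the processor count times the slices per processor.

def max_virtual_machines(processors, slices_per_cpu=10):
    """Upper bound on VMs when each needs at least 1/slices_per_cpu CPU."""
    return processors * slices_per_cpu

print(max_virtual_machines(4))  # four-way server → 40 VMs
print(max_virtual_machines(8))  # eight-way server → 80 VMs
```

In practice the ceiling is lower, since memory, I/O and management overhead bind long before the CPU slice count does.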

VMware builds a file system into its virtual machine server directly, but IBM's approach to virtual machines keeps its clustered file system, SAN File System (SAN FS), separate. While not a requirement of a virtual server environment, "SAN FS would greatly enhance the flexibility of the system," says Barnett, in that it allows you to move data around without affecting the application. In a SAN FS environment, Barnett explains, space isn't physically allocated to individual servers, but rather "dynamically allocated on demand." As a result, virtual machines can move around an environment and continue to access their data without disruption.

This May, IBM announced SAN FS 2.1, which added support for Solaris and Linux hosts, on top of AIX and Windows. Furthermore, it now supports "any disk environment as long as it adheres to good SCSI protocols," Barnett says. Previously, SAN FS had been limited to IBM Shark, FAStT arrays and those virtualized behind its SAN Volume Controller (SVC).

Other software components of the VE family that IBM has previewed include IBM VE Console, IBM Director Multi-Platform, Tivoli Provisioning Manager for managing workloads and IBM Grid Toolbox, Barnett says.

Management is key
Indeed, it's the availability of comprehensive management tools that makes or breaks a virtual machine environment, and not your storage environment. "What you need to worry about is the level of complexity you start to build, and how you are going to manage it," says Data Mobility Group's Webster. On the mainframe, the availability of comprehensive management functionality for virtual machines has ensured virtual machines' popularity. Virtual machines have been in use for 30 or so years, and users routinely see 80% to 90% utilization rates, says IBM's Barnett.

To achieve that kind of success outside the mainframe environment, management tools must interact with virtual machines the same way they do with traditional servers. Barnett claims that is the case with VE-powered virtual machines. "Virtual machines look like separate physical machines to the outside world," he says, ensuring smooth integration with storage resource management (SRM) tools and backup applications.

On that front, VMware may have more work to do. At least in the context of SRM, what you see is not what you get in a VMware environment, admits VMware's Raghuram. SRM tools still only see the physical component, when ideally, for predictive purposes, SRM tools also need to see the logical virtual machines. That's on VMware's list of things to do, says Raghuram. "Over time, we'd like to integrate into management standards that want to provide visibility into the entire path, from the virtual machine down to the block," which in turn would give administrators "a more granular view of how storage is being allocated" to the various virtual machines.

Performance issues
The fact that VMware has yet to resolve these issues--that it's still immature--gives some organizations pause. Take Savvis Communications, for example. A service provider located in St. Louis, MO, Savvis has a state-of-the-art infrastructure: server blades from Egenera Inc., Marlboro, MA; virtualized network switches from Inkra Networks Corp., Fremont, CA; and the InServ utility storage platform from 3PAR, also in Fremont, CA.

Savvis' issue with VMware is price/performance, says Rob McCormick, chairman and CEO. He admits that most of his customers don't need the performance of even the minimum Egenera server blade configuration (two 3GHz Xeon processors), but at $3,000 for an ESX Server license, the savings don't justify the performance hit he would take by running operating systems in a virtual machine environment. Savvis uses VMware internally, McCormick says, for testing and development purposes.

Indeed, managing the performance of your virtual machines seems to be more of a sticking point with customers than anything else. Even with 80 blades running VMware, infrastructure architect Thomas still considers the virtual server environment to be "an emerging technology," where deciding which applications to run on a particular blade is still "more of an art than a science," adding that "not all workloads work well together."

Similarly, Oak Associates' Hill reports that he's had to fiddle to eke out decent performance from his SAN environment. One solution, he says, is to install multiple HBAs in the server: one dedicated to the VMware environment, another for the virtual machines' data traffic. That's not a solution VMware recommended, but a technique he developed over time for a better level of performance. And as users continue to deploy virtual machines and test their limits in production environments, best practices will continue to evolve.

The next frontier
Let's not forget that this is all a work in progress. EMC had many people scratching their heads last year when it first announced that it was acquiring VMware, and since then, the company hasn't publicly divulged a definite roadmap for VMware.

One thing EMC has talked about is its plans to develop what it is calling "Storage Router," VMware-inspired technology that applies to the storage network--an area which has yet to get much attention from virtualization technologies. But that's about to change, according to Howard Elias, executive vice president at EMC. "In our view, you're going to have virtualization at all layers of the stack." Virtualization will be used at the client level for server computing, at the network transport layer and for storage within the array.

Today, Elias continues, the storage network is in many respects still hard-coded. "If you want to add or move a server," he says, "there is configuration and change management that has to occur, and in the majority of cases it is disruptive."

EMC hopes Storage Router will change that. Ultimately, it will result in "more flexibility to move servers in and out of the fabric, for optimal asset utilization and performance, data mobility and migration," Elias says.

EMC and its subsidiaries also have big plans for VMotion to further automate the disaster recovery process. With its ability to move a live server, Legato CTO George Symons says VMotion is a natural for further integration with its Automated Availability Manager (AAM).

It's hard to say exactly what the future holds for virtual machine technology, but one thing is certain: It most definitely has a future. "For me, virtual machines are the way to go," says Oak Associates' Hill. Judging from conversations with his peers, vendors and analysts, Hill is not alone in his thinking.
