While the Xen hypervisor remains open source, the company that became its owner in August 2007, Citrix Systems Inc., also markets its own product based on the Xen hypervisor, called Citrix XenServer. (Linux vendors such as Red Hat also offer versions of Xen.)
In general, every server virtualization software product competes most against VMware. But when it comes to proprietary management frameworks for the open-source hypervisor, Citrix's biggest competitor is Virtual Iron Software Inc.
Virtual Iron offers two ways of connecting to storage: natively, or through Linux-based logical volume management (LVM), which Virtual Iron licenses from Novell Inc. In either case, the virtual machine sees storage as its local C: drive; which storage it's actually attached to is abstracted at the hypervisor level. Connections to LUNs are configured in Virtual Iron's management console. Each virtual server requires two CPUs, 512 MB of RAM and two disks of either 10 GB or 50 GB.
According to Chris Barclay, director of product management for Virtual Iron, about half of Virtual Iron users deploy with native storage, and half use LVM. "It can be time-consuming for the virtual machine to be asking for storage each time, and many users either partner with an existing storage company or have their own volume manager already in use," he said. "Using their existing volume management, some users just create a large LUN [for virtual machines] and partition it up themselves."
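The do-it-yourself approach Barclay describes, carving one large LUN into per-machine volumes, can be sketched as a simple allocator. All names and sizes here are illustrative, not Virtual Iron's or Novell's actual interfaces:

```python
# Illustrative sketch of carving one large LUN into per-VM volumes,
# as some Virtual Iron users do with an existing volume manager.
# Class and volume names are hypothetical, not any vendor's API.

class BigLun:
    """One large LUN that gets partitioned into per-VM logical volumes."""

    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.volumes = {}          # vm name -> volume size in GB

    @property
    def free_gb(self):
        return self.size_gb - sum(self.volumes.values())

    def carve(self, vm_name, size_gb):
        """Allocate a logical volume for one virtual machine."""
        if size_gb > self.free_gb:
            raise ValueError(f"only {self.free_gb} GB free on this LUN")
        self.volumes[vm_name] = size_gb

lun = BigLun(500)
lun.carve("vm-mail", 50)   # a 50 GB virtual disk, per the sizes above
lun.carve("vm-web", 10)    # a 10 GB virtual disk
print(lun.free_gb)         # prints 440
```

The point of the pattern is that the storage team hands over one LUN once, and the server team subdivides it without going back to the array for every new virtual machine.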
Others, he said, prefer to get help from Virtual Iron for volume management, especially if they create or delete virtual machines often or if the storage and server teams don't work closely together. Even then, however, the company is careful to emphasize that this function is not part of the hypervisor.
In either case, Virtual Iron points out that users' existing data protection and disaster recovery plans can be the same as they've always been. Virtual Iron's CEO Ed Walsh has promoted this value proposition at industry events, claiming that VMware's proprietary file system "breaks" users' existing storage management and data protection infrastructures.
Live migration of running virtual machines is part of the hypervisor, Barclay said, but is not based on a virtual file system. (Many VMware users are still under the impression that a virtual file system is necessary for live migration, and until recently, it was.) Shared storage is needed, but otherwise the work is mostly in transferring memory changes between servers and quiescing internal applications, neither of which requires a file system.
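Barclay's point, that live migration needs shared storage plus memory transfer rather than a shared file system, can be illustrated with the classic iterative pre-copy loop. This is a generic sketch of that technique, with made-up numbers, not Virtual Iron's implementation:

```python
# Generic sketch of iterative pre-copy live migration: copy all memory
# pages while the VM runs, then repeatedly re-copy pages dirtied during
# the previous pass, and finally pause the VM briefly to send the rest.
# With shared storage the disk never moves; only memory state does.

def live_migrate(pages, dirtied_per_round, max_rounds=30, stop_copy_threshold=8):
    """Return (pages sent, rounds used) for a simulated migration.

    pages: total memory pages of the VM.
    dirtied_per_round: function(round) -> pages dirtied during that round.
    """
    sent = pages                      # round 0: copy everything, VM still running
    dirty = dirtied_per_round(0)
    rounds = 1
    while dirty > stop_copy_threshold and rounds < max_rounds:
        sent += dirty                 # re-send pages dirtied last round
        dirty = dirtied_per_round(rounds)
        rounds += 1
    sent += dirty                     # stop-and-copy: brief pause, final pages
    return sent, rounds

# A workload whose writable set shrinks each round converges quickly.
total_sent, rounds = live_migrate(
    pages=1000,
    dirtied_per_round=lambda r: 100 // (2 ** r),  # 100, 50, 25, 12, 6 ...
)
```

Nothing in the loop touches a file system; the only storage requirement is that both hosts can already see the same disks.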
Is storage intelligence moving into the server virtualization platform?
However, using VMFS is still a very popular approach. "People kind of get file systems—that's why companies also like NFS [for server virtualization]," Barclay said. But some observers fear that making virtual-machine storage file-based could move storage intelligence up into the server virtualization platform.
That might be preferable for users, Barclay acknowledged, but he warned of the trade-off: it can be "a great thing to have one company to go to [for multiple different features], but we've been through it with Microsoft." One drawback of a less competitive market is potentially less competitive pricing. "We know that when you work with one company, you kiss that company's ring more often than you'd like."
David Roden, director of technology for law firm Goodell, DeVries, Leech, and Dann LLP, is two months into a project to roll out Virtual Iron servers on three physical hosts. So far, eight physical servers have been virtualized, with another 12 planned in the company's 6 TB environment.
Roden heard about Virtual Iron from a VAR, but also evaluated Citrix XenServer and VMware. He said his firm chose Virtual Iron on price, but he was also impressed by Virtual Iron's compatibility with storage products during evaluation testing.
"Integration with storage seemed to be the cause of most problems with server virtualization," he said. "Along with memory, storage I/O is one of the two areas that most affect virtual machine performance. We found Virtual Iron to be as close to plug-and-play as any."
The firm's decision to go with a NetApp StoreVault disk array was also based on better pricing and flexibility. "LeftHand [Networks'] product did what we needed, but it was more expensive," Roden said. "NetApp did what we needed to do as inexpensively as we could get, and also did things we knew we'd need later, like replication and snapshots."
The firm passed over XenServer when it had problems getting it to run with storage systems during testing, and also rejected VMware because its installation seemed more complex. There are a few items on its wish list when it comes to Virtual Iron, such as scheduled snapshot export within the VI management infrastructure.
"I readily admit that VMware probably does more [than Virtual Iron] and has a lot more third-party support," Roden said. "But none of it was essential, and it seemed like there were more things that we'd have to manage, like [VMware] Consolidated Backup, to support it."
Virtual Iron and Citrix: one hypervisor, two approaches to storage
Although Virtual Iron and Citrix use the same hypervisor, the similarities end there, according to Simon Crosby, CTO of virtualization management for Citrix. Citrix, he said, emphasizes channel partners and integration between XenServer and storage partners.
Another difference, according to Crosby, is that Virtual Iron's raw disk access method is block-based; XenServer can offer either "file-backed" or "block-backed" access. "Because we own the hypervisor, we can do much more integration and development around it—Virtual Iron is just a consumer of it," he said.
While data on virtual disks can be represented to the host in files or blocks, under the covers XenServer assigns one LUN in the storage system per virtual disk image, and uses the storage system's existing features to back up, snapshot and clone those volumes.
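The one-LUN-per-virtual-disk model the article attributes to XenServer can be sketched as a simple catalog that delegates snapshots to the array. The `Array` class below stands in for a storage system's management API; every name here is illustrative, not Citrix's or any array vendor's actual interface:

```python
# Sketch of mapping each virtual disk image (VDI) to its own LUN, so a
# snapshot of a virtual disk becomes a snapshot of one LUN on the array.
# All class and method names are hypothetical, for illustration only.

class Array:
    """Stand-in for a disk array that can snapshot individual LUNs."""

    def __init__(self):
        self.next_lun = 0
        self.snapshots = {}           # lun -> list of snapshot labels

    def create_lun(self):
        lun = f"lun-{self.next_lun}"
        self.next_lun += 1
        self.snapshots[lun] = []
        return lun

    def snapshot_lun(self, lun, label):
        self.snapshots[lun].append(label)

class VdiCatalog:
    """One LUN per virtual disk, so array features apply per disk."""

    def __init__(self, array):
        self.array = array
        self.vdi_to_lun = {}

    def create_vdi(self, vdi):
        self.vdi_to_lun[vdi] = self.array.create_lun()

    def snapshot_vdi(self, vdi, label):
        # Delegates to the array: no clustered file system involved.
        self.array.snapshot_lun(self.vdi_to_lun[vdi], label)

array = Array()
catalog = VdiCatalog(array)
catalog.create_vdi("vm1-disk0")
catalog.create_vdi("vm2-disk0")
catalog.snapshot_vdi("vm1-disk0", "nightly")
```

Because each virtual disk is its own LUN, the array can back up, snapshot or clone one machine's disk without touching its neighbors; the flip side, as a user notes later in the article, is that many LUNs can be a management burden.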
This is also Citrix's principal differentiation from VMware. "VMware assumes storage systems are dumb and will remain dumb providers of blocks," Crosby said. "Why should users go through VCB when they already have backup tools? Why should they go through VMware's DR [console] when they already have DR, and it's not managed in the [server] group?"
Furthermore, according to Crosby, a clustered file system "is absolutely the wrong level of granularity and locking" for giving multiple hosts access to the same data: it relies on creating a "big fat LUN" to accommodate the file system and then "hides" the virtual machine from the storage infrastructure. "Clustered file system failure semantics are also horrendous," he added.
The reliance on storage systems for such functions, however, presents obvious challenges, such as the requirement that XenServer do API integration with each of its storage partners. So far, XenServer has announced partnerships with just two storage vendors, Symantec and NetApp, and the Symantec integration has yet to hit the market (though Symantec's Veritas NetBackup 6.5 does support Citrix XenServer backup).
"We didn't include the Symantec software in Xen version 4.1 even though we said we would last year," acknowledged Crosby. However, he said delaying the integration would make the eventual offering more powerful. "Symantec is certified with just about every disk array on the planet," he said. Integrations with more storage vendors, including LeftHand, Dell/EqualLogic and Reldata, have since followed, while the Symantec integration is still waiting in the wings.
The XenServer Adapter for NetApp Data OnTap, announced March 31, uses NetApp's Manage OnTap API to interface with any NetApp storage device from the XenServer management console, allowing virtual server administrators to provision storage, schedule snapshots and manage replication. The adapter supports block and file storage on NetApp devices and is also compatible with NetApp's recently announced Provisioning Manager and Protection Manager storage management automation software.
One NetApp customer working with VMware has concerns about the way XenServer integrates with storage on the back end. VMware pools virtual machine files in its virtual file system, while XenServer assigns one LUN to each virtual machine. "A lot of LUNs is a pain to manage [and] boot times are dramatically increased with a large number of LUNs," said Tom Becchetti, storage engineer for a large medical manufacturing company. Becchetti said he was also looking for server virtualization vendors to offer I/O prioritization for virtual machine files or LUNs. However, Becchetti also said he will begin testing NetApp NFS with VMware soon.
Another NetApp user, Jim Corrigan, president and CEO of ManageOperations, a remote monitoring service provider for large companies, said his company never considered VMware. "We were one of their first customers in 1998 and the experience for some of our admins was pretty negative at that time," he said. "Some of our technical guys remembered that." The last time the company used VMware was in 2000.
The company tried Red Hat's version of the Xen hypervisor but had difficulty getting it up and running even in tests. Corrigan said he appreciated that Citrix had laid the groundwork for integrating with his storage vendor so he wouldn't have to. XenServer's LiveMotion was also a plus. "We're not experts in virtual storage, and we're not command-line experts," he said. "This was a no-brainer for us."