This article can also be found in the Premium Editorial Download "IT in Europe: Tips for cost-effective disaster recovery."
NFS in server virtualization
NFS has become increasingly popular as shared storage for multiple virtual hosts. All major server virtualization platforms support NFS-based NAS devices for virtual machine storage. Because NFS is a widely supported protocol, there are many options for using NFS storage with your virtual hosts, ranging from converting a standard physical server into an NFS server, to using virtual SAN software, to deploying a dedicated storage appliance. Cost and performance vary greatly among these options; dedicated appliances generally offer the best performance, but at a higher cost. An inexpensive NFS server can be built by loading a standard physical server with disks and an operating system that includes an NFS server, such as Linux or Windows, or by installing a dedicated storage appliance application such as the open-source Openfiler shared storage software.
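As a concrete illustration, the Linux route described above can be as simple as exporting a directory over NFS. The following is a minimal sketch for a Debian/Ubuntu-style system; the package name, directory path and subnet are assumptions you'd adapt to your own environment.

```shell
# Assumed setup: Debian/Ubuntu host, VM hosts on 192.168.10.0/24
sudo apt-get install nfs-kernel-server

# Create the directory that will back the VM datastore
sudo mkdir -p /export/vmstore

# Export it to the virtualization hosts' subnet. no_root_squash is
# commonly needed so the hypervisor can write VM files as root; sync
# trades some performance for data safety.
echo '/export/vmstore 192.168.10.0/24(rw,sync,no_root_squash)' | \
    sudo tee -a /etc/exports

# Activate the new export and verify it's visible
sudo exportfs -ra
showmount -e localhost
```

The export options shown are a conservative starting point; tuning (async, larger rsize/wsize on the client side) is where much of the performance difference between a homegrown server and a dedicated appliance gets made up.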
Almost every data storage vendor offers a storage device that supports NFS, including low-end devices from vendors like NetGear Inc. and Synology Inc. Many storage devices support both iSCSI and NFS, but note the provisioning difference: space allocated to an iSCSI datastore is typically consumed in full up front, while an NFS datastore grows only as data is written to it. With so many devices to choose from, you can easily find a good NFS storage system that meets your requirements regardless of your budget.
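The thick-versus-thin behavior described above is easy to visualize with a sparse file on any Linux system: like an NFS datastore, a sparse file reports its full size but consumes disk blocks only as data is written (the file name here is arbitrary).

```shell
# Create a "thin" 1 GB file: the apparent size is 1 GB, but no blocks
# are allocated until data is actually written -- analogous to how an
# NFS datastore grows with use, while an iSCSI LUN is carved out in full.
truncate -s 1G thin.img

ls -lh thin.img   # reports the full 1G apparent size
du -h thin.img    # actual disk usage is (nearly) zero
```

Writing data into the file would make `du` climb toward `ls`'s figure, which is exactly the growth pattern an NFS datastore exhibits on the array.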
The pros and cons of using NAS
For the most part, NAS storage devices in a virtualized server environment function similarly to block storage devices, but there may be some limitations due to their architecture.
• If you don't use local storage on your virtual host and want to boot directly from a shared storage device, you'll need a storage resource other than a NAS system. With Fibre Channel and iSCSI adapters, you can boot the hypervisor directly from a shared storage device without using any local storage.
• NFS uses a software client built into the hypervisor rather than a hardware I/O adapter, so communicating with the NFS server consumes host CPU cycles. On a very busy host this can degrade performance, since the CPUs are also being shared by the virtual machines.
• In vSphere environments, while you can create VM datastores on NFS devices, they don't use the high-performance VMFS file system. While this doesn't affect the use of most of vSphere's features, you can't use raw device mappings (RDMs) to attach a physical disk directly to a virtual machine.
• Some vendors don't recommend NFS storage for certain latency-sensitive transactional apps (e.g., Exchange and Domino). But many factors figure into this, such as host resources and configuration and the performance of the NFS device you're using; a properly sized NFS system shouldn't have a problem.
• NFS doesn't support multipathing from a host to an NFS server. Only a single TCP session is opened to each NFS datastore, which can limit performance. This can be alleviated by using multiple smaller datastores instead of a few larger ones, or by using 10 Gigabit Ethernet (10 GbE), where the throughput available to a single session is much greater. The multipathing constraint doesn't affect high availability, which can still be achieved using multiple NICs in a virtual switch.
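On vSphere, for example, the multiple-datastore workaround can be combined with separate target IP addresses so that each datastore's single TCP session can be routed over a different physical NIC. A sketch using `esxcli` (available in ESXi 5.x; the hostnames, addresses, export paths and datastore labels are all assumptions):

```shell
# Mount two smaller NFS datastores instead of one large one. Pointing
# each at a different IP on the NFS server (on different subnets, each
# reachable through its own VMkernel port) lets their single TCP
# sessions travel over different physical NICs.
esxcli storage nfs add --host=192.168.10.11 --share=/export/vm1 --volume-name=nfs-ds1
esxcli storage nfs add --host=192.168.20.11 --share=/export/vm2 --volume-name=nfs-ds2

# Confirm both datastores are mounted
esxcli storage nfs list
```

The same split also spreads VM I/O across array-side resources, which is often as valuable as the extra network path.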
This was first published in February 2011