Using NAS for virtual machines

Common wisdom says you need block storage for virtual servers, but with most hypervisors supporting the NFS protocol, NAS may work just as well.

By Eric Siebert

Shared storage is a prerequisite for virtualized servers if you want to use any of the advanced features server virtualization offers, such as high availability or the ability to move a running virtual machine (VM) from one host to another. Traditionally, that meant investing in an expensive Fibre Channel SAN (FC SAN). But all server virtualization products also support network-attached storage (NAS) devices, which can provide a worthy, cost-effective alternative to FC SANs for shared storage.

Another alternative is iSCSI storage, which, like NAS, uses TCP/IP over a standard Ethernet network; but iSCSI is block storage like Fibre Channel and tends to be costlier than NAS. NAS systems generally support both the NFS and CIFS file-sharing protocols, but server virtualization products prefer -- or are limited to -- NFS.

Inside NFS

NFS was developed in the mid-1980s and has been revised several times over the years; NFS Version 4 (NFSv4) is the most recent version. The NFS architecture consists of three main components:

• Remote procedure calls (RPCs)
• External data representation (XDR)
• NFS procedures

The NFS protocol uses an RPC system that allows one computer (the NFS client) to make a call that's executed on another computer (the NFS server). XDR is the data-encoding standard for NFS and serves as the universal language used between clients and servers. NFS procedures are software instructions used to perform storage-related tasks.
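To make the XDR piece concrete, here's a minimal Python sketch that hand-encodes a toy read-style request the way XDR does: big-endian integers and opaque byte strings padded to a 4-byte boundary. It's purely illustrative and doesn't implement the actual NFS wire format.

```python
import struct

def xdr_uint(value: int) -> bytes:
    """Encode an unsigned integer as XDR does: 4 bytes, big-endian."""
    return struct.pack(">I", value)

def xdr_opaque(data: bytes) -> bytes:
    """Encode variable-length opaque data: a length prefix, then the
    bytes, zero-padded out to a 4-byte boundary."""
    padding = (4 - len(data) % 4) % 4
    return xdr_uint(len(data)) + data + b"\x00" * padding

# A toy read-style call: a file handle, an offset and a byte count,
# encoded so the server needs no prior state to interpret the request.
payload = xdr_opaque(b"file-handle-01") + xdr_uint(4096) + xdr_uint(8192)
print(payload.hex())
```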

An NFS server may be a dedicated NAS appliance such as those sold by NetApp and all the major storage vendors, or it can be an ordinary server running a general-purpose operating system. NFS is most commonly used on Unix and Linux systems, but it's also available for other operating systems such as Windows. NFS is a stateless protocol, which means the server doesn't store any client information and each RPC event contains all the information necessary to complete the call. In this manner, no open connections between the client and server exist, and crash recovery is as simple as having the client resend requests until the server responds.
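That recovery model is easy to picture in code. The sketch below isn't an NFS client; it's just a generic illustration, over UDP and using only Python's standard library, of how a client can keep resending a self-contained request until the server comes back.

```python
import socket
import time

def send_with_retry(request: bytes, server: tuple[str, int],
                    timeout: float = 2.0, attempts: int = 5) -> bytes:
    """Resend a self-contained request until the server answers.
    Because the request carries everything needed to complete the call,
    retransmitting it after a server crash is harmless."""
    for attempt in range(attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(request, server)
            try:
                reply, _ = s.recvfrom(65535)
                return reply
            except socket.timeout:
                time.sleep(min(2 ** attempt, 30))  # back off, then try again
    raise TimeoutError("server did not respond")
```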

NFS in server virtualization

NFS has become an increasingly popular choice for storage that's shared by multiple virtualization hosts. All major server virtualization platforms support NAS devices accessed via NFS for virtual machines. Because NFS is a widely supported protocol, there are many different options for using NFS storage with your virtual hosts, ranging from converting a standard physical server into an NFS server, to running virtual SAN software, to deploying a dedicated storage appliance. The cost and performance characteristics of each option can vary greatly; dedicated appliances offer the best performance, but at a higher cost. An inexpensive NFS server can be built by putting a bunch of disks in a standard physical server and loading an operating system such as Linux or Windows that includes an NFS server, or by installing a dedicated storage appliance application such as the open-source Openfiler shared storage software.
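As a rough sketch of that do-it-yourself route, the steps on a Debian-style Linux server look something like the following. The package name, export path, client subnet and export options here are assumptions you'd adapt to your own environment, and the script must run as root.

```python
import subprocess
from pathlib import Path

# Illustrative export: a local directory published to one subnet.
EXPORT_LINE = "/srv/vmstore  192.168.10.0/24(rw,sync,no_root_squash,no_subtree_check)\n"

def set_up_nfs_export() -> None:
    """Install the NFS server, publish a directory and reload the exports."""
    subprocess.run(["apt-get", "install", "-y", "nfs-kernel-server"], check=True)
    Path("/srv/vmstore").mkdir(parents=True, exist_ok=True)
    with open("/etc/exports", "a") as exports:
        exports.write(EXPORT_LINE)
    subprocess.run(["exportfs", "-ra"], check=True)   # re-read /etc/exports
    subprocess.run(["systemctl", "restart", "nfs-kernel-server"], check=True)

if __name__ == "__main__":
    set_up_nfs_export()
```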

Almost every data storage vendor offers a storage device that supports NFS, including "low-end" NFS-capable devices from vendors like NetGear Inc. and Synology Inc. Many storage devices support both iSCSI and NFS, but allocating storage for iSCSI datastores consumes the full space right away, while an NFS datastore consumes space only as data is written to it. With so many devices to choose from, you can easily find a good NFS storage system that meets your requirements regardless of your budget.
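That thin-provisioning effect is easy to demonstrate. The short sketch below creates a sparse file on a Linux host: its logical size is 100 GB, but it occupies almost no space on disk until data is actually written, which is essentially how a thinly provisioned virtual disk on an NFS datastore behaves. The file name is just a placeholder.

```python
import os

path = "demo-thin-disk.img"
with open(path, "wb") as f:
    f.truncate(100 * 1024**3)        # logical size: 100 GB, no blocks allocated yet

st = os.stat(path)
print(f"logical size : {st.st_size / 1024**3:.1f} GB")
print(f"space on disk: {st.st_blocks * 512 / 1024**2:.1f} MB")  # close to zero
```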

Because NFS is a file-level protocol, it's configured in a different manner than block storage devices. With block storage devices you have a storage I/O adapter in the host that communicates with the storage device either locally or remotely. This would typically be a SCSI or Fibre Channel adapter, or with iSCSI, a network adapter that serves as either a hardware or software initiator. With NFS you use an NFS client built into the hypervisor that uses a network adapter in the host to communicate with the NFS server. Instead of scanning for storage devices on your I/O adapters as you would with block devices, you simply enter an NFS server name and folder location when adding an NFS storage device to a virtual host. Once you have your NFS datastores configured, you create virtual machines on them just like you would with block storage devices.
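On a vSphere host, for example, that mount can also be scripted. The minimal sketch below assumes the esxcli command-line tool is available on the host and uses placeholder values for the NFS server, the exported folder and the datastore name.

```python
import subprocess

def add_nfs_datastore(nfs_server: str, remote_path: str, datastore: str) -> None:
    """Mount an NFS export as a datastore on the local ESXi host
    using the standard 'esxcli storage nfs add' command."""
    subprocess.run(
        ["esxcli", "storage", "nfs", "add",
         "--host", nfs_server,         # NFS server name or IP address
         "--share", remote_path,       # exported folder on the server
         "--volume-name", datastore],  # datastore name shown to the host
        check=True,
    )

# Example with placeholder values:
# add_nfs_datastore("nas01.example.com", "/srv/vmstore", "nfs-datastore-01")
```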

The pros and cons of using NAS

For the most part, NAS storage devices in a virtualized server environment function similarly to block storage devices, but there may be some limitations due to their architecture.

• If you don't use local storage on your virtual host and want to boot directly from a shared storage device, you'll need a storage resource other than a NAS system. With Fibre Channel and iSCSI adapters you can boot the hypervisor directly from a shared storage device without using any local storage.
• NFS uses a software client built into the hypervisor instead of a hardware I/O adapter. Because of that, there's CPU overhead as the hypervisor must use a software client to communicate with the NFS server. On a very busy host this can cause degradation in performance as the CPUs are also being shared by the virtual machines.
• In vSphere environments, datastores created on NFS devices don't use VMware's high-performance VMFS file system. This doesn't affect the use of most of vSphere's features, but it does mean you can't use raw device mappings (RDMs) to attach a physical disk directly to a virtual machine.
• Some vendors don't recommend NFS storage for certain sensitive transactional apps (e.g., Exchange and Domino) due to latency that can occur. But there are many factors that figure into this, such as host resources/configuration and the performance of the NFS device you're using. This shouldn't be a problem for a properly sized NFS system.
• NFS doesn't support using multipathing from a host to an NFS server. Only a single TCP session will be opened to an NFS datastore, which can limit its performance. This can be alleviated by using multiple smaller datastores instead of a few larger datastores, or by using 10 Gb Ethernet (10 GbE) where the available throughput from a single session will be much greater. The multipathing constraint doesn't affect high availability, which can still be achieved using multiple NICs in a virtual switch.

Despite the limitations, there are some good reasons why you might prefer a NAS system over block storage devices.

• Many NFS storage devices use thin provisioning by default, which can help conserve disk space because virtual disks don't consume the full amount of space they've been allocated.
• File locking and queuing are handled by the NFS device, which can result in better performance than with iSCSI or FC, where locking and queuing are handled by the host server.
• NFS doesn't have a single disk I/O queue like a block storage device has, so you may get better performance. The performance of NFS is based on the size of the network connection and the capabilities of the disk array.
• Implementing NAS costs a lot less than traditional FC storage. NAS devices require only common NICs instead of expensive HBAs, and use traditional network components rather than expensive FC switches and cables.
• Because NAS takes away a lot of the complexity of managing shared storage, specialized storage administrators aren't necessary in most cases. Managing files on an NFS server is much easier than managing LUNs on a SAN.
• NFS datastores can be expanded easily by simply adding disk capacity on the NFS server; there's no need to resize the datastores on the hosts because they grow automatically.
• Operations like snapshots and cloning are done at the file system level instead of at the LUN level, which can offer greater flexibility and more granular support.

The advantages of using NAS are many, and you shouldn't be discouraged by the disadvantages, which mainly apply in specific circumstances or with lower-quality NAS products. With a properly sized and designed system that can handle the VM workloads on your hosts, NAS can be as good a choice as any block storage device.

Is NAS performance enough?

Many IT shops considering NAS as an alternative to block storage for their virtual servers are concerned about performance, and with good reason. In most cases, NAS performance won't equal that of an FC SAN, but a properly architected NFS solution can easily meet the performance needs of most workloads.

Some users end up comparing iSCSI to NAS because both are low-cost alternatives to FC storage and both can use existing Ethernet infrastructure. VMware Inc. has published test results comparing the performance of virtual machines on NAS, iSCSI and FC storage devices. The results show that the performance of NAS is nearly identical to that of both hardware and software iSCSI. As long as the CPU doesn't become a bottleneck, the maximum throughput of both iSCSI and NFS is limited by the available network bandwidth. Software iSCSI and NFS are both more efficient than Fibre Channel and hardware iSCSI at writing smaller block sizes (less than 16 KB), but with larger blocks more CPU cycles are used, which makes software iSCSI and NFS less efficient than hardware iSCSI and Fibre Channel. The CPU cost per I/O is highest with NFS; it's only slightly higher than with software iSCSI, but much higher than with hardware iSCSI and FC. On a host with enough spare CPU capacity, though, this shouldn't be an issue.

Achieving the best performance with NAS comes down to several factors; the first is having enough CPU resources available so the CPU never becomes a bottleneck to NFS protocol processing. It's easy enough to achieve by simply making sure you don't completely overload your virtual host's CPU with too many virtual machines. Unfortunately, there's no way to prioritize or reserve CPU resources for NFS protocol processing, so you need to make sure you adjust your workloads on your hosts accordingly and monitor CPU usage. Using a technology like VMware's Distributed Resource Scheduler will help balance CPU workloads evenly across hosts.

The second factor is network architecture; the performance of NAS storage is highly dependent on network health and utilization. You should isolate your NAS traffic on dedicated physical NICs that aren't shared with virtual machines. You should also use a physically isolated storage network that's dedicated to your hosts and NFS servers and isn't shared with any other network traffic. Your NICs are your speed limit: 1 Gbps NICs are adequate for most purposes, but to take NFS to the next level and get the best possible performance, 10 Gbps is the ticket. There are also a number of network configuration tweaks you can use to boost performance, as well as technologies like jumbo frames.
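As one example of such a tweak, the sketch below enables jumbo frames on a vSphere host with standard esxcli commands. The vSwitch and VMkernel interface names are placeholders, and the physical switches and the NFS server have to be configured for the larger MTU as well or frames will be fragmented or dropped.

```python
import subprocess

def enable_jumbo_frames(vswitch: str = "vSwitch1", vmknic: str = "vmk1") -> None:
    """Raise the MTU to 9000 on the vSwitch and the VMkernel interface
    that carry NFS traffic on an ESXi host."""
    subprocess.run(
        ["esxcli", "network", "vswitch", "standard", "set",
         "--vswitch-name", vswitch, "--mtu", "9000"],
        check=True,
    )
    subprocess.run(
        ["esxcli", "network", "ip", "interface", "set",
         "--interface-name", vmknic, "--mtu", "9000"],
        check=True,
    )

# Example with placeholder names:
# enable_jumbo_frames("vSwitch1", "vmk1")
```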

The final factor in NFS performance is the type of NAS storage device you're connected to. Just like any storage device, you must size your NAS systems to meet the storage I/O demands of your virtual machines. Don't use an old physical server running a Windows NFS server and expect to meet the workload demands of many busy virtual machines. Generally, the more money you put into a NAS product the better performance you'll get. There are many high-end NAS systems available that will meet the demands of most workloads.

NAS has its niche

NAS might not be appropriate for every virtualized server environment -- for certain workloads only an FC SAN will do -- but it's certainly attractive and effective for most use cases. In past years, NAS wasn't a viable alternative because of limited support from virtualization vendors, but that has changed and NFS is now fully supported. NFS has also matured and improved in all areas, including in the hypervisor, on the network and in the storage device, to become a solid storage platform for virtualization.

BIO: Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He's the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009) and Maximum vSphere (Prentice Hall, 2010).
