Best storage for virtual servers: Pros and cons of FC, iSCSI and NAS


This article can also be found in the Premium Editorial Download "Storage magazine: FC, iSCSI, NAS: How to choose storage for virtual servers."



The big difference between iSCSI and NAS (specifically, NFS) is the type of protocol used to write data to the storage device. iSCSI uses a block protocol and data is written in blocks by the virtual host to the storage device. The host server is responsible for maintaining the disk file system on the storage device just as it would with local disk. NAS, on the other hand, uses a file-sharing protocol and the host server simply communicates with the storage device that maintains the disk file system.
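The distinction can be sketched in a few lines of Python (a simplified illustration, not actual hypervisor code): with a block protocol the host writes raw bytes at offsets it chooses itself, while with a file protocol the host simply names a file and lets the server manage the on-disk layout.

```python
import os
import tempfile

# --- Block-style access (iSCSI/FC): the host addresses raw offsets ---
# A temp file stands in for a LUN; the host itself decides where each
# block of data lands, just as it would with a local disk.
dev = tempfile.NamedTemporaryFile(delete=False)
dev.close()
fd = os.open(dev.name, os.O_RDWR)
block = b"\x00" * 512
os.pwrite(fd, block, 512 * 8)  # write one 512-byte block at block offset 8
os.close(fd)

# --- File-style access (NAS/NFS): the host addresses a path ---
# A temp directory stands in for a mounted NFS export; the client only
# names a file, and the server maintains the file system underneath.
share = tempfile.mkdtemp()
with open(os.path.join(share, "vm-disk.vmdk"), "wb") as f:
    f.write(b"virtual disk contents")
```

In the block case the host tracks which offsets are in use (its file system metadata); in the file case that bookkeeping lives entirely on the NAS server.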


With NAS, the functions responsible for writing data to the drives are offloaded from the host server to the storage device. The hypervisor includes a special software NFS client that uses a network adapter in the host to communicate with the NFS server.

All the major virtualization platforms support using NAS devices for their virtual machines. Because NFS is a widely supported protocol, there are many options for using NAS storage with your virtual hosts, ranging from converting a standard physical server into a NAS server, to using virtual NAS software, to deploying a dedicated storage appliance. The costs and performance characteristics of each option vary greatly; dedicated appliances offer the best performance at the greatest cost.

Almost every storage vendor offers a NAS device that supports NFS. With block storage devices, allocating storage typically consumes the full provisioned space right away, but with NAS, consumed capacity grows only as data is written. Regardless of your budget, you can easily find a good NAS device that meets your requirements.
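A toy model makes that difference concrete (illustrative only, not any vendor's implementation): a thick-provisioned LUN consumes its full size at creation, while a thin-provisioned NAS volume consumes backing capacity only as data is written.

```python
class Volume:
    """Toy model contrasting thick (block LUN) and thin (NAS) provisioning."""

    def __init__(self, size_gb, thin):
        self.size_gb = size_gb      # capacity promised to the host
        self.thin = thin
        self.written_gb = 0

    @property
    def consumed_gb(self):
        # Thick provisioning reserves everything up front;
        # thin provisioning consumes only what has been written.
        return self.written_gb if self.thin else self.size_gb

    def write(self, gb):
        self.written_gb = min(self.size_gb, self.written_gb + gb)

lun = Volume(500, thin=False)   # block LUN, thick-provisioned
nas = Volume(500, thin=True)    # NAS volume, thin-provisioned

for v in (lun, nas):
    v.write(50)                 # each host writes 50 GB of VM data

print(lun.consumed_gb, nas.consumed_gb)  # 500 50
```

Both volumes promise 500 GB to the host, but after 50 GB of writes the thick LUN still occupies its full 500 GB on the array while the thin NAS volume occupies only 50 GB.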

In most cases, NAS won’t equal the performance of a Fibre Channel SAN, but a properly architected NAS system can meet the performance needs of most workloads. Like iSCSI, NAS uses standard NICs to communicate with storage devices, which may mean a 1 Gbps speed limit; newer 10 Gbps NICs offer a huge speed increase if you can bear the cost. NAS performance is nearly identical to iSCSI: as long as the CPU doesn’t become a bottleneck, the maximum throughput of both is limited by the available network bandwidth.
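That bandwidth ceiling is easy to quantify. The figures below are raw line-rate arithmetic; real-world throughput will be somewhat lower once protocol overhead is accounted for.

```python
def max_throughput_mbps(link_gbps):
    """Theoretical maximum storage throughput for a given Ethernet link,
    in megabytes per second (1 Gbps = 1000 Mb/s, 8 bits per byte)."""
    return link_gbps * 1000 / 8

print(max_throughput_mbps(1))    # 125.0  -> ~125 MB/s ceiling on 1 GbE
print(max_throughput_mbps(10))   # 1250.0 -> ~1.25 GB/s ceiling on 10 GbE
```

Whether the host speaks iSCSI or NFS over that link, it cannot push VM disk traffic faster than this ceiling, which is why 10 GbE makes such a dramatic difference for Ethernet-based storage.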

Advantages of NAS

  • Many NAS storage devices use thin provisioning by default, which can help conserve disk space
  • File locking and queuing are handled by the NAS device, which can result in better performance vs. iSCSI/FC where locking and queuing are handled by the host server
  • NAS doesn’t have a single disk I/O queue like block storage devices, which can result in greater performance; NAS performance is based on the bandwidth of the network connection and the capabilities of the disk array
  • Can be less costly to implement than FC storage as it uses standard Ethernet components; NAS arrays tend to cost less than FC arrays
  • No special training/skills are needed to implement and manage the technology
  • Expanding virtual datastores is easy: grow the volume on the NFS server and the datastores automatically grow accordingly
  • Snapshots, cloning and so on are done at the file system level instead of the LUN level, which can offer greater flexibility and more granular support

Disadvantages of NAS

  • Booting directly from a shared storage device isn’t supported with NAS devices
  • There is CPU overhead as the hypervisor must use a software client to communicate with the NAS server
  • Some vendors don’t recommend NAS storage for certain sensitive transactional applications due to latency that can occur
  • Support for new virtualization features sometimes lags vs. block storage devices
  • NAS doesn’t support multipathing from a host to the NAS server; only a single TCP session will be opened to a NAS datastore, which may limit performance

You shouldn’t be discouraged by these disadvantages, as many apply only in specific circumstances or result from poorly architected NAS solutions. With a properly sized solution that can handle the VM workloads on your hosts, NAS is usually as good a choice as a block storage device. In the past, NAS had limited support from virtualization platforms, but it’s now fully supported.

And the winner is . . .

There are many factors to consider when choosing a storage device for your virtual environment, but the decision ultimately comes down to simple factors such as budget, performance and capacity. Many storage devices now offer direct integration with virtualization platforms, which can also weigh heavily in the choice. The VMware vStorage APIs, for example, allow tighter integration between the storage device and the hypervisor and can offload many storage-related tasks from the hypervisor to the storage array.

Another area of concern is support. While Microsoft Hyper-V has pretty broad support for just about any storage array supported by Windows, VMware has a strict hardware compatibility guide that lists all supported storage devices. One reason for this is that VMware has very deep API integration and the guide ensures that storage devices have been tested with vSphere. It also lists the various integration features supported for each array.

While Fibre Channel is a well-established storage platform, don’t be afraid to try iSCSI or NAS devices as more affordable alternatives. With a wide variety of iSCSI and NAS products to choose from, you’ll have to research their capabilities and scalability to ensure that they’ll meet your requirements. Storage is the most critical design decision you’ll make for your virtual environment, so spend the time researching the alternatives to understand the different technologies and features that are available.

BIO: Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He’s the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009) and Maximum vSphere (Prentice Hall, 2010).

This was first published in September 2011
