Virtual servers need a good shared data storage system. All major networked storage protocols work with virtual machines, but some are better than others in certain environments.
Choosing a data storage system to use with virtualized servers is one of the most critical architecture choices you’ll have to make, and one of the most challenging. There are many options available, but there’s no single type of networked storage that’s hands down the best for virtual servers. Each environment is different and what works well for one may not work well for another.
Fibre Channel (FC) has been the traditional choice for virtualization, but iSCSI and network-attached storage (NAS) have become increasingly popular alternatives that can provide good performance for more limited budgets. Let’s look at the characteristics of each networked storage type and review its pros and cons.
Fibre Channel storage
For performance and reliability it’s hard to beat FC storage, but the performance comes at a price in terms of both dollars and complexity. Because of its deep roots in the data center, FC is generally the most popular storage choice for larger virtual environments, based mainly on its speed (currently 8 Gbps with 16 Gbps becoming available) and reliability. FC storage networks tend to be isolated and thus more secure than Ethernet-based storage devices. But Fibre Channel requires special host bus adapters (HBAs) and switches that are more expensive than comparable Ethernet components.
Emerging techs: FCoE, 10 GbE and CNAs
Newer technologies are now available, such as Fibre Channel over Ethernet (FCoE) and 10 Gbps Ethernet (10 GbE), that offer alternative architecture choices while providing big boosts in performance and throughput. FCoE encapsulates a native FC frame in an Ethernet frame, bringing the benefits of the FC architecture to an Ethernet infrastructure, and it can eliminate the need for costly Fibre Channel hardware. 10 GbE provides a huge speed boost over conventional 1 Gbps Ethernet, but it requires network interface cards (NICs) and switches specifically designed for 10 Gbps.
FCoE and 10 GbE are directly related to each other as you can only run FCoE on 10 Gbps networks. Converged network adapters (CNAs) combine the two technologies onto a single network adapter, eliminating the need for separate FC and Ethernet adapters. CNAs reduce the number of server adapters, cables and switch ports required, which can help reduce expenses. FCoE, 10 GbE and CNAs are great technologies, but like any new tech they can be expensive to implement for early adopters.
Implementing a Fibre Channel network from scratch can be costly. FC environments are also more complex to implement and manage, as their configuration is very different from a traditional network infrastructure. While most companies have staff with network administration skills, many don't have the same resources for FC storage-area network (SAN) administration. Designing and managing a SAN architecture usually requires specialized training, which can further add to the expense of implementation.
Advantages of using FC storage
- Commonly deployed enterprise storage architecture; many environments may have existing SANs
- Typically the best performing storage due to higher available bandwidth
- Isolated FC fabrics are more secure; logical unit number (LUN) zoning and masking can be used to control access
- Able to boot from FC storage (boot from SAN) so local host storage isn’t needed
- Block-level storage that can be used with VMware vSphere VMFS volumes
Disadvantages of using FC storage
- Typically the most expensive storage option to implement from scratch
- Requires specialized and expensive components such as switches, cables and HBAs
- May be complex to implement and manage; typically requires dedicated storage administrators
- Fewer security controls available; authentication and encryption are complicated to implement
If you plan on having many high disk I/O virtual machines (VMs) running on your hosts, you should seriously consider FC storage for maximum performance. FCoE is also an option that lets you run FC storage over traditional Ethernet components, but it can be just as expensive to implement because it requires 10 GbE networking and special switching gear.
If you already have an FC SAN in your environment, then using it with virtualization just makes sense. And expanding an existing SAN is much easier and cheaper than implementing a new one. You really can’t go wrong with FC storage if your budget can afford it and you can handle the management complexity.
iSCSI storage
iSCSI is a popular and solid alternative to Fibre Channel. It's block-based storage like FC, but it uses traditional Ethernet network components for connectivity between hosts and storage devices. Because you can use existing Ethernet components, iSCSI is often much cheaper to implement. iSCSI works by using a client called an initiator to send SCSI commands over a local-area network (LAN) to SCSI devices (called targets) located on a storage device.
iSCSI initiators can be software or hardware based. Software initiators use device drivers built into the hypervisor to write to a remote iSCSI target over standard Ethernet network adapters and protocols. Hardware initiators use a dedicated iSCSI HBA that combines a network adapter, a TCP/IP offload engine (TOE) and a SCSI adapter in one device to improve the performance of the host server. While software initiators work just fine in most cases, hardware initiators offer slightly better I/O performance and consume fewer host resources. You can also boot from hardware initiators; in addition, a newer technology called iSCSI Boot Firmware Table (iBFT) lets you boot using a software initiator if the installed NIC and hypervisor support it.
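On a Linux host, the software-initiator flow described above can be sketched with the open-iscsi tools; the portal address and target IQN below are placeholders for illustration:

```shell
# Discover targets advertised by the storage array's portal (address is a placeholder)
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260

# Log in to a discovered target (IQN is a placeholder)
iscsiadm -m node -T iqn.2001-05.com.example:storage.lun1 \
    -p 192.168.1.50:3260 --login

# The LUN now appears as a local block device (e.g., /dev/sdb) that the
# host partitions and formats itself -- block storage, just like FC
lsblk
```

Hardware initiators make the same LUN visible through the HBA's firmware instead, with no iscsiadm session managed by the OS.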
iSCSI performs very well on 1 Gbps Ethernet networks, but switching to 10 Gbps can give it a huge boost and put it on par with (or better than) FC. Most hypervisors support 10 Gbps iSCSI, but the cost may be so high that it will be just as expensive as FC to implement. The biggest risks to using iSCSI are the additional CPU overhead when using software initiators (which can be mitigated with hardware initiators), and the more fragile and volatile network infrastructure it relies on. The latter issue can be addressed by completely isolating iSCSI traffic from other network traffic.
Advantages of iSCSI storage
- Lower cost alternative to FC storage that uses standard Ethernet components; iSCSI storage arrays also tend to cost less than FC arrays
- Software initiators can be used for ease of use and lower cost; hardware initiators offer maximum performance
- Block-level storage (like FC) that can be used with vSphere VMFS volumes
- Speed and performance is greatly increased with 10 Gbps Ethernet
- No special training/skills needed to implement and manage the technology
- Supports authentication (CHAP) and encryption for security, as well as multipathing for increased throughput and reliability
- Can be deployed more quickly than FC
Disadvantages of iSCSI storage
- Because iSCSI is most commonly deployed as a software protocol, it adds to CPU overhead vs. using hardware-based initiators
- Performance is typically less than that of FC SANs
- Typically doesn’t scale as high as FC storage systems
- Network latency and non-iSCSI network traffic can diminish performance
iSCSI also offers more variety and greater flexibility when it comes to choosing data storage devices. You can purchase a range of iSCSI storage products, from small dedicated iSCSI devices costing less than $2,000 to large enterprise-class arrays. Keep in mind that, when it comes to performance, you generally get what you pay for: if you have a large number of VMs and heavy workloads, you'll need to spend more on a storage system. iSCSI is a great choice for companies that want affordability and simplicity. While iSCSI is often criticized for its performance, a dedicated, properly configured iSCSI system can perform nearly as well as a Fibre Channel setup and will be adequate for many environments.
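As a sketch of the CHAP authentication mentioned above, with the open-iscsi software initiator one-way CHAP is enabled in /etc/iscsi/iscsid.conf; the username and secret shown here are placeholder values:

```shell
# /etc/iscsi/iscsid.conf (excerpt) -- credentials are placeholders and must
# match the CHAP settings configured on the iSCSI target
node.session.auth.authmethod = CHAP
node.session.auth.username = initiator-user
node.session.auth.password = example-secret-1234
```

The target rejects any initiator that can't present matching credentials, which is a security control FC fabrics typically approximate with zoning and LUN masking instead.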
NAS storage
The big difference between iSCSI and NAS (specifically, NFS) is the protocol used to write data to the storage device. iSCSI uses a block protocol: data is written in blocks by the virtual host to the storage device, and the host server is responsible for maintaining the disk file system on the storage device just as it would with a local disk. NAS, on the other hand, uses a file-sharing protocol; the host server simply communicates with the storage device, which maintains the disk file system itself.
I/O virtualization
I/O virtualization is increasingly being used with server virtualization. It enables a single physical I/O adapter to appear as multiple virtual NICs or HBAs. One of the challenges with server virtualization is that hosts require a large number of I/O adapters to connect to both data and storage networks: a typical host may have six to eight NICs for general network connectivity, plus at least two NICs or HBAs to connect to storage networks. I/O virtualization lets you consolidate the many I/O adapters on a host into one or two adapters that can handle all I/O requirements. It's implemented in several different ways. Companies like Xsigo Systems Inc. emulate HBAs and NICs over either standard Ethernet or InfiniBand fabrics. Virtensys takes a different approach, using a PCIe extension adapter to connect to an appliance that contains shared I/O adapters. Both approaches can greatly simplify host I/O connectivity and reduce hardware costs and host power consumption.
With NAS, you're essentially offloading from the host server to the storage device the functions responsible for writing data to the drives. NAS uses a software NFS client built into the hypervisor, which uses a network adapter in the host to communicate with the NFS server.
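On a VMware ESXi host, for example, mounting an NFS export as a datastore is a single command; the server address, export path and datastore name below are placeholders:

```shell
# Mount an NFS export as a vSphere datastore (names and addresses are placeholders)
esxcli storage nfs add --host=192.168.1.60 --share=/exports/vmstore \
    --volume-name=nfs-datastore01

# Confirm the datastore is mounted
esxcli storage nfs list
```

Note there's no LUN to format: the hypervisor's NFS client simply writes VM files to the export, and the NAS device owns the underlying file system.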
All the major virtualization platforms support using NAS devices for their virtual machines. Because NFS is a widely supported protocol, there are many options for using NAS storage with your virtual hosts, from converting a standard physical server into a NAS server, to using virtual NAS software, to using a dedicated storage appliance. The cost and performance characteristics of each option vary greatly; dedicated appliances offer the best performance at the greatest cost.
Almost every storage vendor offers a NAS device that supports NFS. Unlike block storage, where an allocated LUN typically consumes its full space right away, NAS devices commonly thin provision, so capacity is consumed only as data is written. Regardless of your budget, you can easily find a good NAS device that will meet your requirements.
In most cases, NAS won't equal the performance of a Fibre Channel SAN, but a properly architected NAS system can meet the performance needs of most workloads. Like iSCSI, NAS uses NICs to communicate with storage devices, which may mean a 1 Gbps speed limit, though newer 10 Gbps NICs offer a huge speed increase if you can bear the cost. The performance of NAS is nearly the same as that of iSCSI: as long as the CPU doesn't become a bottleneck, the maximum throughput of both is limited by the available network bandwidth.
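As a rough back-of-the-envelope check of that bandwidth ceiling (ignoring TCP/IP and protocol overhead, which lowers real-world numbers):

```shell
#!/bin/sh
# Theoretical maximum throughput: link speed in Gbps -> MB/s (divide Mbit/s by 8).
# Real iSCSI/NFS throughput lands below these figures due to protocol overhead.
for gbps in 1 8 10 16; do
    echo "${gbps} Gbps link: $((gbps * 1000 / 8)) MB/s theoretical max"
done
```

A 1 Gbps link tops out around 125 MB/s, which is why moving to 10 GbE (roughly 1,250 MB/s) puts iSCSI and NAS in the same league as 8 Gbps FC.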
Advantages of NAS
- Many NAS storage devices use thin provisioning by default, which can help conserve disk space
- File locking and queuing are handled by the NAS device, which can result in better performance vs. iSCSI/FC where locking and queuing are handled by the host server
- NAS doesn’t have a single disk I/O queue like block storage devices, which can result in greater performance; NAS performance is based on the bandwidth of the network connection and the capabilities of the disk array
- Can be less costly to implement than FC storage as it uses standard Ethernet components; NAS arrays tend to cost less than FC arrays
- No special training/skills are needed to implement and manage the technology
- Expanding virtual datastores is easy; increase the available disk space on the NFS server and datastores grow automatically
- Snapshots, cloning and so on are done at the file system level instead of the LUN level, which can offer greater flexibility and more granular support
Disadvantages of NAS
- Booting directly from a shared storage device isn’t supported with NAS devices
- There is CPU overhead as the hypervisor must use a software client to communicate with the NAS server
- Some vendors don’t recommend NAS storage for certain sensitive transactional applications due to latency that can occur
- Support for new virtualization features sometimes lags vs. block storage devices
- NAS doesn’t support multipathing from a host to the NAS server; only a single TCP session will be opened to a NAS datastore, which may limit performance
You shouldn't be discouraged by the disadvantages of NAS, as many apply only in specific circumstances or result from poorly architected NAS solutions. With a properly sized solution that can handle the VM workloads on your hosts, NAS is usually as good a choice as a block storage device. In the past, NAS had limited support from virtualization platforms, but it's now fully supported.
And the winner is . . .
There are a lot of factors to consider when choosing a storage device for your virtual environment, but the decision ultimately comes down to simple factors such as budget, performance and capacity. Many storage devices now come with direct integration support for virtualization, which can also be a big factor: VMware's vStorage APIs, for example, allow tighter integration between the storage device and the hypervisor and offload many storage-related tasks from the hypervisor to the storage array.
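On an ESXi host you can check whether a given device supports these array offloads (VAAI); the device identifier below is a placeholder:

```shell
# Show VAAI (vStorage APIs for Array Integration) hardware-offload status
# for one storage device; the -d device ID is a placeholder
esxcli storage core device vaai status get \
    -d naa.600508b4000971fa0000a00000770000
```

Arrays that report the offload primitives as supported let the hypervisor hand off tasks such as cloning and zeroing instead of doing them over the wire.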
Another area of concern is support. While Microsoft Hyper-V has pretty broad support for just about any storage array supported by Windows, VMware has a strict hardware compatibility guide that lists all supported storage devices. One reason for this is that VMware has very deep API integration and the guide ensures that storage devices have been tested with vSphere. It also lists the various integration features supported for each array.
While Fibre Channel is a well-established storage platform, don’t be afraid to try iSCSI or NAS devices as more affordable alternatives. With a wide variety of iSCSI and NAS products to choose from, you’ll have to research their capabilities and scalability to ensure that they’ll meet your requirements. Storage is the most critical design decision you’ll make for your virtual environment, so spend the time researching the alternatives to understand the different technologies and features that are available.
BIO: Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He’s the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009) and Maximum vSphere (Prentice Hall, 2010).