Using iSCSI storage with vSphere

To realize the greatest benefits of a vSphere installation, you need networked storage. iSCSI is a good fit for vSphere; here's how to make it work.

By Eric Siebert

To tap into some of VMware vSphere's advanced features such as VMotion, fault tolerance, high availability and the VMware Distributed Resource Scheduler, you need to have shared storage for all of your hosts. vSphere's proprietary VMFS file system uses a special locking mechanism to allow multiple hosts to connect to the same shared storage volumes and the virtual machines (VMs) on them. Traditionally, this meant you had to implement an expensive Fibre Channel SAN infrastructure, but iSCSI and NFS network storage are now more affordable alternatives.

Focusing on iSCSI, we'll describe how to set it up and configure it properly for vSphere hosts, as well as provide some tips and best practices for using iSCSI storage with vSphere. In addition, we've included the results of a performance benchmarking test for the iSCSI/vSphere pairing, with performance comparisons of the various configurations.

VMware warms up to iSCSI

iSCSI networked storage was first supported by VMware with ESX 3.0. It works by using a client called an initiator to send SCSI commands over a LAN to SCSI devices (targets) located on a remote storage device. Because iSCSI uses traditional networking components and the TCP/IP protocol, it doesn't require special cables and switches as Fibre Channel does.

iSCSI initiators can be software based or hardware based. Software initiators use device drivers that are built into the VMkernel to use Ethernet network adapters and protocols to write to a remote iSCSI target. Some characteristics of software initiators are:

  • Use Ethernet network interface cards (NICs) and native VMkernel iSCSI stack
  • Good choice for blade servers and servers with limited expansion slots
  • Cheaper than using hardware initiators
  • Can be CPU-intensive due to the additional overhead of protocol processing
  • ESX server can't boot from a software-based initiator; ESXi can by using iSCSI Boot Firmware Table (iBFT)

Hardware initiators use a dedicated iSCSI host bus adapter (HBA) that includes a network adapter, a TCP/IP offload engine (TOE) and a SCSI adapter to help improve the performance of the host server. Characteristics of hardware initiators include:

  • Moderately better I/O performance than software initiators
  • Use fewer ESX server host resources, especially CPU
  • ESX server is able to boot from a hardware initiator

To find out the advantages and disadvantages of iSCSI storage compared to other storage protocols, see the sidebar "iSCSI pros and cons" below.

iSCSI pros and cons

Here is a summary of the advantages and disadvantages in using iSCSI storage for virtual servers.

iSCSI advantages

  • Usually lower cost to implement than Fibre Channel storage
  • Software initiators can be used for ease of use and lower cost; hardware initiators can be used for maximum performance
  • Block-level storage that can be used with VMFS volumes
  • Speed and performance are greatly increased with 10 Gbps Ethernet
  • Uses standard network components (NICs, switches, cables)

iSCSI disadvantages

  • As iSCSI is most commonly deployed as a software protocol, it has additional CPU overhead compared to hardware-based storage initiators
  • Can't store Microsoft Cluster Server shared LUNs (unless you use an iSCSI initiator inside the guest operating system)
  • Performance is typically not as good as that of Fibre Channel SANs
  • Network latency and non-iSCSI network traffic can reduce performance

iSCSI is a good alternative to Fibre Channel storage: it's likely to be cheaper to implement while still providing very good performance. vSphere now supports 10 Gbps Ethernet, which provides a big performance boost over 1 Gbps Ethernet. The biggest risks in using iSCSI are the CPU overhead of software initiators, which can be offset by using hardware initiators, and a network infrastructure that's more fragile and volatile than a dedicated Fibre Channel fabric, which can be mitigated by completely isolating iSCSI traffic from other network traffic.

For vSphere, VMware rewrote the entire iSCSI software initiator stack to make more efficient use of CPU cycles, which resulted in significant efficiency and throughput improvements compared to VMware Infrastructure 3. vSphere also added support for bidirectional Challenge-Handshake Authentication Protocol (CHAP), which provides better security by requiring both the initiator and target to authenticate with each other.

Planning an iSCSI/vSphere implementation

You'll have to make a number of decisions when planning to use iSCSI storage with vSphere. Let's first consider iSCSI storage devices.

You can use pretty much any type of iSCSI storage device with vSphere because the hosts connect to it using standard network adapters, initiators and protocols. But be aware of two things. First, vSphere officially supports only specific models of vendor iSCSI storage devices (listed on the vSphere Hardware Compatibility Guide), so if you call VMware about a problem and it turns out to be related to the storage device, you may be referred to the storage vendor for support. Second, not all iSCSI devices are equal in performance; generally, the more performance you need, the more it'll cost you. So choose your iSCSI device carefully so that it matches the disk I/O requirements of the applications running on the VMs that will use it.

There are also some network considerations. For optimum iSCSI performance, it's best to create an isolated network. This ensures that no other traffic will interfere with the iSCSI traffic, and it also helps protect and secure it. Don't even think of using 100 Mbps NICs with iSCSI; it'll be so painfully slow as to be unusable for virtual machines. At a minimum, use 1 Gbps NICs, and go for 10 Gbps NICs if that's within your budget. If you're concerned about host server resource overhead, consider using hardware initiators (TOE adapters). If you opt for TOE adapters, make sure they're on VMware's Hardware Compatibility Guide; if you use one that's not supported, there's a good chance vSphere will see it as a standard NIC and you'll lose the TOE benefits. Finally, use multi-pathing for maximum reliability: at least two separate NICs (not two ports on the same multi-port card) connected to two different physical network switches, just as you would when configuring Fibre Channel storage.
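
If you want to sanity-check that network from the ESX service console before going any further, a couple of standard commands will do it. This is only a quick sketch; the NIC names are whatever your host reports, and the target address 192.168.50.20 is a placeholder for your array's iSCSI portal.

    # List the physical NICs and confirm link speed -- anything negotiating
    # below 1000 Mbps is a red flag for iSCSI traffic
    esxcfg-nics -l

    # Confirm the host can reach the iSCSI target over the storage network
    vmkping 192.168.50.20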

Configuring iSCSI in vSphere

Once your iSCSI environment is set up, you can configure it in vSphere. The method for doing this will differ depending on whether you're using software or hardware initiators. We'll cover the software initiator method first.

Configuring with software initiators: Software initiators for iSCSI are built into vSphere as a storage adapter; however, to use them you must first configure a VMkernel port group on one of your virtual switches (vSwitches). The software iSCSI networking for vSphere leverages the VMkernel interface to connect to iSCSI targets, and all network traffic between the host and target occurs over the NICs assigned to the vSwitch the VMkernel interface is located on. You can have more than one VMkernel interface on a single vSwitch or multiple vSwitches. The VMkernel interface is also used for VMotion, fault-tolerance logging traffic and connections to NFS storage devices. While you can use one VMkernel interface for multiple things, it's highly recommended to create a separate vSwitch and VMkernel interface exclusively for iSCSI connections. You should also have two NICs attached to the vSwitch for failover and multi-pathing. If you have multiple NICs and VMkernel interfaces, you should make sure you bind the iSCSI VMkernel interfaces to the correct NICs. (See VMware's iSCSI SAN Configuration Guide for more information.)
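
As a rough sketch, here's what that networking setup looks like from the ESX 4.x service console. The vSwitch, port group, NIC and VMkernel names, along with the IP addresses, are placeholders, and vmhba33 is assumed to be your software iSCSI adapter.

    # Create a dedicated vSwitch for iSCSI and attach two physical NICs
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # Add two VMkernel port groups, each with its own VMkernel interface
    esxcfg-vswitch -A iSCSI1 vSwitch2
    esxcfg-vswitch -A iSCSI2 vSwitch2
    esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 iSCSI1
    esxcfg-vmknic -a -i 192.168.50.12 -n 255.255.255.0 iSCSI2

    # Bind each VMkernel interface to the software iSCSI adapter; each port group
    # should have only one active uplink before binding, and the adapter itself
    # must already be enabled (covered in the next step)
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33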

Once the vSwitch and VMkernel interface are configured, you can configure the software iSCSI adapter. Select Configuration/Storage Adapters in the vSphere Client to see the software iSCSI adapter listed; select it and click Properties to configure it. On the General tab, you can enable the adapter and configure CHAP authentication (highly recommended). On the Dynamic Discovery tab, you can add IP addresses to have iSCSI targets automatically discovered; optionally, you can use the Static Discovery tab to enter target names manually. After entering this information, go back to the Storage Adapters screen and click the Rescan button to discover any iSCSI targets.
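
The same steps can be scripted from the service console if you prefer. Again, the adapter name vmhba33 and the discovery address are placeholders, and the vmkiscsi-tool syntax shown should be verified against the iSCSI SAN Configuration Guide for your release.

    # Enable the software iSCSI initiator
    esxcfg-swiscsi -e

    # Add a dynamic discovery (send targets) address for the array
    vmkiscsi-tool -D -a 192.168.50.20 vmhba33

    # Rescan the adapter so newly discovered targets appear
    esxcfg-rescan vmhba33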

Configuring with hardware initiators: The process is similar for hardware initiators, but they don't use the VMkernel networking, so that step can be skipped. TOE adapters are technically network adapters, but they'll show up on the Storage Adapters screen instead. Select them, click Properties and configure them in a manner similar to software initiators by entering the appropriate information on the General, Dynamic Discovery and Static Discovery tabs. You'll need to assign IP addresses to the TOEs on the General screen as they don't rely on the VMkernel networking.

Once the initiators are set up and your iSCSI disk targets have been discovered, you can add them to your hosts as VMFS volumes. Select a host, click on the Configuration tab and choose Storage. Click Add Storage and a wizard will launch; for the disk type select Disk/LUN, which is for block-based storage devices. (The Network File System type is used for adding file-based NFS disk storage devices.) Select your iSCSI target from the list of available disks, give it a name and then choose a block size. When you finish, the new VMFS data store will be created and ready to use.
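
If you'd rather create the VMFS volume from the command line, vmkfstools can do it once a partition exists on the LUN; the naa identifier below is a placeholder for your own iSCSI LUN's device name.

    # Identify the iSCSI LUN and note its naa identifier
    esxcfg-scsidevs -l

    # Create a VMFS3 file system with an 8 MB block size on partition 1 of the LUN
    # (assumes the partition has already been created)
    vmkfstools -C vmfs3 -b 8m -S iSCSI_DS01 /vmfs/devices/disks/naa.60014055c1a2b3c4:1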

Performance testing: iSCSI plus vSphere

It's a good idea to do some benchmarking of your iSCSI storage device to see the throughput you'll get under different workload conditions and to test the effects of different vSphere configuration settings.

Iometer is a good testing tool that lets you configure many different workload types. You can install and run Iometer inside a virtual machine; for best results, create two virtual disks on the VM: one on a local data store for the operating system and another on the iSCSI data store to be used exclusively for testing. Try to limit the activity of other VMs on the host, and access to the data store, while the tests are running. You can find four prebuilt tests that you can load into Iometer to test both maximum throughput and real-world workloads at www.mez.co.uk/OpenPerformanceTest.icf.

We ran Iometer tests using a modest configuration consisting of a Hewlett-Packard Co. ProLiant ML110 G6 server, a Cisco Systems Inc. SLM2008 Gigabit Smart Switch and an Iomega ix4-200d iSCSI array.

The test results, shown below, compare the use of the standard LSI Logic SCSI controller in a virtual machine and use of the higher performance Paravirtual SCSI controller. The tests were performed on a Windows Server 2008 VM with 2 GB RAM and one vCPU on a vSphere 4.0 Update 1 host; tests were run for three minutes. The results show the Paravirtual controller performing better than the LSI Logic controller; the difference may be more pronounced when using higher-end hardware.

[Chart: iSCSI Performance Test Results]

Best practices for using iSCSI storage with vSphere

Once iSCSI disks have been configured, they're ready to be used by virtual machines. The best practices listed here should help you get the maximum performance and reliability out of your iSCSI data stores.

  • The performance of iSCSI storage is highly dependent on network health and utilization. For best results, always isolate your iSCSI traffic onto its own dedicated network.
  • You can configure only one software initiator on an ESX Server host. When configuring a vSwitch that will provide iSCSI connectivity, use multiple physical NICs to provide redundancy. Make sure you bind the VMkernel interfaces to the NICs in the vSwitch so multi-pathing is configured properly.
  • Ensure the NICs used in your iSCSI vSwitch connect to separate network switches to eliminate single points of failure.
  • vSphere supports the use of jumbo frames with storage protocols, but they're only beneficial for very specific workloads with very large I/O sizes. Your back-end storage must also be able to handle the increased throughput by having a large number (15+) of spindles in your RAID group, or you'll see no benefit. If your I/O sizes are smaller and your storage is spindle-bound, you'll see little or no performance increase from jumbo frames, and in some cases they can actually decrease performance, so benchmark before and after enabling them to see their effect. Every component in the path must support and be configured for jumbo frames, including physical NICs and network switches, vSwitches, VMkernel ports and iSCSI targets; if any one component isn't configured for jumbo frames, they won't work. (A command-line sketch follows this list.)
  • Use the new Paravirtual SCSI (PVSCSI) adapter for your virtual machine disk controllers; in most cases it offers better throughput and performance than the standard LSI Logic and BusLogic adapters. For very low I/O workloads, the LSI Logic adapter works best.
  • To set up advanced multi-pathing for best performance, select Properties for the iSCSI storage volume and click Manage Paths. You can configure the Path Selection Policy using the native VMware multi-pathing or, if available, third-party multi-pathing plug-ins. When using software initiators, create two VMkernel interfaces on the vSwitch; for each one, set one physical NIC to Active and the other to Unused, then use the esxcli command to bind one VMkernel port to the first NIC and the second VMkernel port to the second NIC (see the sketch after this list). Using Round Robin instead of Fixed or Most Recently Used (MRU) will usually provide better performance, but avoid Round Robin if you're running Microsoft Cluster Server on your virtual machines.
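
The jumbo frame and multi-pathing items above translate to the command line roughly as follows. This is a sketch only; vSwitch2, the port group, the addresses and the naa identifier are placeholders carried over from the earlier examples.

    # Raise the MTU on the iSCSI vSwitch; existing VMkernel ports must be removed
    # and re-added with the larger MTU
    esxcfg-vswitch -m 9000 vSwitch2
    esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 -m 9000 iSCSI1

    # Verify jumbo frames work end to end with a large payload to the iSCSI target
    vmkping -s 8000 192.168.50.20

    # Set the path selection policy for an iSCSI LUN to Round Robin, then confirm
    esxcli nmp device setpolicy --device naa.60014055c1a2b3c4 --psp VMW_PSP_RR
    esxcli nmp device list
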
VMFS volume block sizes

By default, VMFS volumes are created with a 1 MB block size, which allows a single virtual disk (vmdk) to be created up to a maximum of 256 GB. Once you set a block size on a VMFS volume, it can't be changed; to change it, you have to move all the virtual machines (VMs) off the volume, delete it and recreate it with the new block size. Therefore, make sure you choose a block size that works for your configuration based on current and future needs.

The block size choices and the maximum virtual disk size each allows are:

  • 1 MB block size: 256 GB maximum virtual disk
  • 2 MB block size: 512 GB maximum virtual disk
  • 4 MB block size: 1 TB maximum virtual disk
  • 8 MB block size: 2 TB maximum virtual disk

Choosing a larger block size won't impact disk performance and will only affect the minimum amount of disk space that files will take up on your VMFS volumes. Block size is the amount of space a single block of data takes up on the disk; the amount of disk space a file takes up will be based on a multiple of the block size. However, VMFS does employ sub-block allocation so small files don't take up an entire block. Sub-blocks are always 64 KB regardless of the block size chosen. There is some wasted disk space, but it's negligible as VMFS volumes don't have a large number of files on them, and most of the files are very large and not affected that much by having a bigger block size. In most cases, it's probably best to use an 8 MB block size when creating a VMFS volume, even if you're using smaller volume sizes, as you may decide to grow the volume later on.

iSCSI guides available

VMware provides detailed guides for implementing iSCSI storage for vSphere. Two useful guides available from VMware include the iSCSI SAN Configuration Guide and the iSCSI Design Considerations and Deployment Guide.

BIO: Eric Siebert is an IT industry veteran with more than 25 years of experience who now focuses on server administration and virtualization. He's the author of VMware VI3 Implementation and Administration (Prentice Hall, 2009).

This was first published in August 2010
