Using iSCSI storage with vSphere


Configuring iSCSI in vSphere

Once your iSCSI environment is set up, you can configure it in vSphere. The method for doing this will differ depending on whether you're using software or hardware initiators. We'll cover the software initiator method first.

Configuring with software initiators: Software initiators for iSCSI are built into vSphere as a storage adapter; however, to use them you must first configure a VMkernel port group on one of your virtual switches (vSwitches). The software iSCSI initiator in vSphere leverages the VMkernel interface to connect to iSCSI targets, and all network traffic between the host and target travels over the NICs assigned to the vSwitch on which the VMkernel interface is located. You can have more than one VMkernel interface on a single vSwitch or spread across multiple vSwitches. The VMkernel interface is also used for VMotion, fault-tolerance logging traffic and connections to NFS storage devices. While a single VMkernel interface can serve several traffic types, it's highly recommended to create a separate vSwitch and VMkernel interface exclusively for iSCSI connections. You should also attach two NICs to the vSwitch for failover and multipathing. If you have multiple NICs and VMkernel interfaces, make sure you bind the iSCSI VMkernel interfaces to the correct NICs. (See VMware's iSCSI SAN Configuration Guide for more information.)
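
If you prefer to script this step, the vSphere API exposes the same operations. The following is a minimal sketch using VMware's pyVmomi Python bindings (our illustration, not a step from the configuration above); the vCenter name, host name, NIC names and IP addresses are placeholder assumptions for your environment.

    # Sketch: dedicated vSwitch + VMkernel port for iSCSI via pyVmomi.
    # All names and addresses below are placeholders for your environment.
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret")
    host = si.content.searchIndex.FindByDnsName(None, "esx01.example.com",
                                                False)
    net_sys = host.configManager.networkSystem

    # vSwitch dedicated to iSCSI, backed by two NICs for failover and
    # multipathing
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(
            nicDevice=["vmnic2", "vmnic3"]))
    net_sys.AddVirtualSwitch(vswitchName="vSwitchISCSI", spec=vss_spec)

    # Port group plus a VMkernel interface with a static IP on the
    # iSCSI network
    pg_spec = vim.host.PortGroup.Specification(
        name="iSCSI-VMkernel", vlanId=0, vswitchName="vSwitchISCSI",
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    vnic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress="10.0.10.11",
                             subnetMask="255.255.255.0"))
    net_sys.AddVirtualNic(portgroup="iSCSI-VMkernel", nic=vnic_spec)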

Once the vSwitch and VMkernel interface are configured, you can configure the software iSCSI adapter. Select Configuration/Storage Adapters in the vSphere Client to see the software iSCSI adapter listed; select it and click Properties to configure it. On the General tab, you can enable the adapter and configure CHAP authentication (highly recommended). On the Dynamic Discovery tab, you can add IP addresses to have iSCSI targets automatically discovered; optionally, you can use the Static Discovery tab to manually enter target names. After entering this information, go back to the Storage Adapters screen and click the Rescan button to scan the adapter and discover any iSCSI targets.
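
These adapter steps can be scripted as well. Here's a hedged pyVmomi sketch that enables the software iSCSI adapter, adds a Dynamic Discovery address and rescans; it reuses the host object from the networking sketch above, and the portal address is a placeholder. (CHAP settings can likewise be applied through the API via the adapter's authentication properties.)

    # Sketch: enable the software iSCSI adapter, add a Dynamic Discovery
    # (send targets) address and rescan. The portal IP is a placeholder.
    storage_sys = host.configManager.storageSystem
    storage_sys.UpdateSoftwareInternetScsiEnabled(enabled=True)

    # Find the software iSCSI HBA now listed among the storage adapters
    iscsi_hba = next(a for a in storage_sys.storageDeviceInfo.hostBusAdapter
                     if isinstance(a, vim.host.InternetScsiHba))

    # Dynamic Discovery: point the initiator at the array's portal address
    target = vim.host.InternetScsiHba.SendTarget(address="10.0.10.50",
                                                 port=3260)
    storage_sys.AddInternetScsiSendTargets(iScsiHbaDevice=iscsi_hba.device,
                                           targets=[target])

    # Equivalent of the Rescan button on the Storage Adapters screen
    storage_sys.RescanAllHba()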

Configuring with hardware initiators: The process is similar for hardware initiators, but they don't use the VMkernel networking, so that step can be skipped. TOE adapters are technically network adapters, but they'll show up on the Storage Adapters screen instead. Select them, click Properties and configure them in a manner similar to software initiators by entering the appropriate information on the General, Dynamic Discovery and Static Discovery tabs. You'll need to assign IP addresses to the TOEs on the General screen as they don't rely on the VMkernel networking.
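
At the API level, Static Discovery entries look much the same for hardware and software initiators. A short sketch, again using pyVmomi, with a made-up target IQN:

    # Sketch: add a Static Discovery target; the IQN is a made-up
    # example, not a real device name.
    static = vim.host.InternetScsiHba.StaticTarget(
        address="10.0.10.50", port=3260,
        iScsiName="iqn.2001-04.com.example:storage.lun1")
    storage_sys.AddInternetScsiStaticTargets(
        iScsiHbaDevice=iscsi_hba.device, targets=[static])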

Once the initiators are set up and your iSCSI disk targets have been discovered, you can add them to your hosts as VMFS volumes. Select a host, click on the Configuration tab and choose Storage. Click Add Storage and a wizard will launch; for the disk type select Disk/LUN, which is for block-based storage devices. (The Network File System type is used for adding file-based NFS disk storage devices.) Select your iSCSI target from the list of available disks, give it a name and then choose a block size. When you finish, the new VMFS data store will be created and ready to use.
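
This step, too, can be automated. The sketch below, using pyVmomi with a placeholder volume name, mirrors the Add Storage wizard: it lists disks eligible for VMFS, asks the host for a valid create spec for one of them and creates the data store.

    # Sketch: create a VMFS data store on a discovered iSCSI LUN,
    # mirroring the Add Storage wizard. The volume name is a placeholder.
    ds_sys = host.configManager.datastoreSystem

    # Disks eligible for a new VMFS volume (the wizard's Disk/LUN list)
    disk = ds_sys.QueryAvailableDisksForVmfs(datastore=None)[0]

    # Ask the host for a valid create spec, name the volume, create it
    options = ds_sys.QueryVmfsDatastoreCreateOptions(
        devicePath=disk.devicePath)
    spec = options[0].spec
    spec.vmfs.volumeName = "iscsi-datastore01"
    # (A block size could also be set on spec.vmfs before creating.)
    datastore = ds_sys.CreateVmfsDatastore(spec=spec)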

Performance testing: iSCSI plus vSphere

It's a good idea to do some benchmarking of your iSCSI storage device to see the throughput you'll get under different workload conditions and to test the effects of different vSphere configuration settings.

Iometer is a good testing tool that lets you configure many different workload types. You can install and run Iometer inside a virtual machine (VM); for best results, create two virtual disks on the VM: one on a local data store for the operating system and another on the iSCSI data store to be used exclusively for testing. Try to limit the activity of other VMs on the host and access to the data store while the tests are running. You can find four prebuilt tests that you can load into Iometer to test both max throughput and real-world workloads at www.mez.co.uk/OpenPerformanceTest.icf.

We ran Iometer tests using a modest configuration consisting of a Hewlett-Packard Co. ProLiant ML110 G6 server, a Cisco Systems Inc. SLM2008 Gigabit Smart Switch and an Iomega ix4-200d iSCSI array.

The test results compare the standard LSI Logic SCSI controller in a virtual machine with the higher-performance Paravirtual SCSI controller. The tests were performed on a Windows Server 2008 VM with 2 GB of RAM and one vCPU on a vSphere 4.0 Update 1 host; each test ran for three minutes. The results show the Paravirtual controller performing better than the LSI Logic controller; the difference may be more pronounced with higher-end hardware.


This was first published in August 2010
