For the purposes of this discussion, we'll assume iSCSI is the protocol chosen for VMware and that there's a baseline understanding of how to implement and operate it in a VMware environment. Let's examine some best practices for leveraging iSCSI in VMware environments.
Best practice #1: Remember that iSCSI is a storage network
Even though iSCSI runs on IP, it's probably running data traffic for a business-critical application in the environment and should be treated that way. In a smaller data center, mixing the protocol with normal traffic is probably acceptable. But for a data center that has any real size at all, iSCSI storage should be on its own network.
An iSCSI network shouldn't be viewed as just something else running on the local-area network (LAN). That means dedicating separate physical ports in the host to iSCSI traffic rather than sharing them with general IP traffic. While virtual LANs (VLANs) can logically divide traffic, they can only do so much: if one protocol suddenly demands the majority of the available I/O bandwidth, the others suffer. An exception can be made for high-bandwidth 10 Gb Ethernet (10 GbE) connections, but users should still look for cards with some sort of quality-of-service (QoS) capability built in.
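As a rough sketch, on a classic ESX service console this separation could be built by giving iSCSI its own vSwitch with a dedicated uplink. The names (vSwitch1, vmnic2, the iSCSI port group) and the addresses here are placeholder assumptions, not values from the article:

```shell
# Create a vSwitch reserved for storage traffic
esxcfg-vswitch -a vSwitch1
# Uplink a physical NIC that carries only iSCSI, no general IP traffic
esxcfg-vswitch -L vmnic2 vSwitch1
# Add a port group for the iSCSI VMkernel interface
esxcfg-vswitch -A iSCSI vSwitch1
# Create the VMkernel NIC on that port group (addresses are examples)
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 iSCSI
```

Keeping the storage uplink out of the general-purpose vSwitch is what enforces the physical separation described above.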
Current technology allows physical network interface cards (NICs) and host bus adapters (HBAs) to be divided into virtual cards with independent channels. Some of these channels are hardware-based circuits that physically segregate traffic, so there's no potential of an I/O storm affecting the performance of other virtual machines (VMs) or protocols that share the card. These cards also allow for maximum utilization of the 10 GbE bandwidth.
Best practice #2: Consider using only software initiators in iSCSI
There are two iSCSI initiator choices. The first is a software initiator, typically a device driver built into the OS that performs the SCSI-to-IP translation. The initial concern with software initiators was that since IP processing already places a load on the host CPU, adding the SCSI-to-IP translation could be too much. However, as CPU processing power has increased, this concern has waned. Software initiators have also matured and are now proving to be quite efficient. In VMware, the iSCSI load places no more than a 5% overhead on the CPU.
The other type of iSCSI initiator is a hardware-based iSCSI HBA. Hardware initiators allow users to boot directly from the card, thereby eliminating the need for local storage, and can add advanced encryption capabilities. Hardware initiators can also boost performance because the HBA offloads the SCSI-to-IP processing from the host CPU.
Best practice #3: Consider using jumbo frames
Using jumbo frames means setting the Ethernet frame to a larger size than the default. Fewer frames are then needed to move the same amount of iSCSI data, which reduces the CPU cycles spent on the SCSI-to-IP translation. Jumbo frames also lessen the load on physical hosts, which is especially important when using software initiators.
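The frame-count reduction is easy to see with back-of-the-envelope arithmetic. The 1 MiB transfer size and the common 9,000-byte jumbo MTU below are illustrative assumptions, ignoring protocol headers:

```shell
payload=$((1024 * 1024))   # 1 MiB of data to move
std_mtu=1500               # default Ethernet payload size
jumbo_mtu=9000             # common jumbo frame size
# Ceiling division: frames needed at each MTU
echo "Standard frames: $(( (payload + std_mtu - 1) / std_mtu ))"     # 700
echo "Jumbo frames:    $(( (payload + jumbo_mtu - 1) / jumbo_mtu ))" # 117
```

Roughly six times fewer frames means proportionally fewer per-frame interrupts and translation passes for the host CPU.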
In VMware ESX 3.5, jumbo frames were supported at the virtual machine (VM) level but not at the VMkernel level, which initially dampened their appeal. With VMware vSphere 4, jumbo frames are supported at both levels, so maximum benefit can be achieved.
When implementing jumbo frames, there are several important configurations to consider:
- All hardware in the chain must support jumbo frames.
- Jumbo frames must be enabled as the vSwitch and vNICs (virtual versions of the physical hardware created by the hypervisor) are created.
- The capability can't be turned on later. If there's a move to jumbo frames later, then those virtual components must be deleted and recreated.
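On a classic ESX host, a move to jumbo frames might be sketched as follows. The vSwitch name, port group name, and address are placeholder assumptions; verify the exact options against the documentation for your vSphere version:

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic
esxcfg-vswitch -m 9000 vSwitch1
# The VMkernel NIC cannot have its MTU changed in place: delete it
# and recreate it with the jumbo MTU, per the list above
esxcfg-vmknic -d iSCSI
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 -m 9000 iSCSI
```

Remember that every switch and NIC in the path must also be set to the larger MTU, or frames will be dropped or fragmented along the way.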
BIO: George Crump is the lead analyst at Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.