Enterprise data storage vendors such as Hewlett-Packard (HP) Co.'s LeftHand Networks and Dell EqualLogic have positioned their iSCSI SAN products for server virtualization by citing the added cost of networked storage as a barrier to adoption for some customers, and by touting the relatively low cost of iSCSI SANs compared with Fibre Channel (FC). But according to Jeff Boles, senior analyst and director, validation services at Hopkinton, Mass.-based Taneja Group, technical considerations also make iSCSI appealing for virtual servers.
"A lot of engineering went into Fibre Channel based on the assumption of one host per port," Boles said. "iSCSI has virtualized access anyway, over an IP connection, and has had more engineering around multiple-host contention and various queuing patterns."
While Ethernet networks and the basic best practices for iSCSI SANs are generally well understood by now, experts say that deploying an iSCSI SAN to support server virtualization raises different considerations than connecting physical servers via iSCSI. Here are five best practices for using iSCSI in a virtual server environment.
Best practice #1: Look beyond basic iSCSI
In the years since iSCSI first came on the scene, products have had time to mature and develop, adding specialized features along the way. In the meantime, iSCSI-related products have proliferated to the point where software-based iSCSI initiators and targets can be had completely free of charge. iSCSI SANs can be built using commodity server hardware and open source software as well.
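As an illustration of how little is needed to get started, a software initiator such as Linux's open-iscsi can attach a host to a target with a couple of commands. A minimal sketch, in which the portal address and target IQN are placeholders, not values from any specific product:

```shell
# Ask the storage portal which targets it advertises
# (192.168.10.50 is a placeholder portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.10.50:3260

# Log in to one of the discovered targets (the IQN is a placeholder)
iscsiadm -m node -T iqn.2010-02.com.example:storage.lun1 \
    -p 192.168.10.50:3260 --login

# The LUN then appears to the host as an ordinary block device
lsblk
```
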
But Boles said iSCSI specialists, like HP's LeftHand or Dell EqualLogic, are charging a premium for advanced features such as integrated VMware snapshots. Other iSCSI SAN vendors, such as EMC Corp. and NetApp Inc., offer unified storage arrays with various options for connecting servers, including iSCSI. Disk arrays from storage specialist vendors also often have features like quality of service and virtual machine-aware management consoles.
The iSCSI network these arrays are attached to can also make a difference, Boles said. "If you have the right infrastructural underpinnings, for example a well-built, fully managed Cisco environment, you can apply more sophisticated and granular policies to virtual servers."
On the other hand, some of the most advanced iSCSI deployment methods aren't really necessary for a virtual server environment where cost and consolidation are primary factors in purchasing decisions, countered Greg Schulz, founder and analyst at Stillwater, Minn.-based StorageIO Group. As data grows and 10 Gigabit Ethernet (10 GbE) looms on the horizon, some industry experts see technologies like TCP offload engines (TOE cards) coming into play.
But users should balance the availability of these performance enhancers with their original rationale for deployment, Schulz said. "If low cost is the reason I'm deploying iSCSI, I'm probably not going to invest in hardware adapters. Instead, I might want to enable jumbo frames and quality-of-service features through software."
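Jumbo frames, for instance, are typically a software-side change on a Linux host, though every NIC, switch port and array port in the iSCSI path must agree on the larger MTU. A sketch of the host-side steps, with the interface name and array address as placeholders:

```shell
# Raise the MTU on the iSCSI-facing interface to 9000-byte jumbo frames
# (eth1 is a placeholder interface name)
ip link set dev eth1 mtu 9000

# Confirm the change took effect
ip link show dev eth1

# Verify the path end to end by pinging the array with a payload that
# cannot be fragmented: 8972 bytes of data + 28 bytes of headers = 9000
ping -M do -s 8972 192.168.10.50
```

If the ping fails while a small ping succeeds, some device in the path is still running a standard 1500-byte MTU.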
Best practice #2: Consider where iSCSI targets should live in the virtual environment on an application-by-application basis
For VMware environments specifically, "It used to be users had to make a tough choice," Schulz said, between VMware's clustered file system (VMware vStorage VMFS) and raw device mapping (RDM). Before Version 3.5, VMFS offered features like VMotion, while RDM was sometimes the only way to continue using value-added array features such as snapshots and virtual provisioning.
While this is no longer the case today, Brian Garrett, vice president of ESG Labs at Milford, Mass.-based Enterprise Strategy Group (ESG), said users should still evaluate where to place the iSCSI target in the infrastructure for performance and manageability reasons. They have a choice of deploying the target as either a virtual disk at the hypervisor level, allowing the server virtualization software to handle calls to the back-end storage through a virtual hard disk layer; or at the disk array, providing somewhat speedier block-based access to the back-end storage.
"The decision will depend in part on what you're already used to," Garrett said. "But block-based apps like SQL databases, for example, work well with raw disks, and would probably be suited to the pass-through or raw mode."
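On an ESX host, the choice Garrett describes surfaces in how the virtual disk is created: an ordinary .vmdk file on a VMFS datastore, or a raw device mapping that points at the array LUN. A sketch using vmkfstools, in which the datastore path and device name are placeholders:

```shell
# Option A: a plain 20 GB virtual disk on a VMFS datastore,
# with the hypervisor mediating all I/O through the virtual disk layer
vmkfstools -c 20G /vmfs/volumes/datastore1/sqlvm/sqlvm.vmdk

# Option B: a raw device mapping in physical-compatibility (pass-through)
# mode, handing SCSI commands through to the array LUN
# (the naa.* device identifier is a placeholder)
vmkfstools -z /vmfs/devices/disks/naa.600508b4000971fa0000a00000770000 \
    /vmfs/volumes/datastore1/sqlvm/sqlvm-rdm.vmdk
```

Option B is the "raw mode" Garrett suggests for block-hungry applications such as SQL databases; Option A keeps everything inside the datastore the administrators already manage.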
Best practice #3: Rethink network and cabling designs
"One thing users often don't think about is the way iSCSI can give you freedom from past paradigms," Taneja Group's Boles said. Storage pros are used to the Fibre Channel world, where a monolithic disk array is attached via a complex series of switches and cables to servers in a separate aisle of the data center.
With an increase in scale-out and commodity-hardware-based iSCSI SAN architectures, Boles said a new networked storage deployment might also be a good opportunity to rethink the data center layout. "With some of these iSCSI systems, you can interleave the storage with the server farm, and get the storage closer to the server environment without as many long cables."
Rethinking the physical placement of resources in the data center can help resolve issues with overloading parts of the network. "You don't have to shove I/O down a big trunk and then fan-out to the entire infrastructure – interleaving can avoid these bottlenecks," he added.
Best practice #4: Be mindful of monitoring
Boles and Garrett both emphasized that the new virtual world requires new virtualization-aware monitoring tools throughout the data center infrastructure, particularly as highly portable virtual machines (VMs) move around the network. "When you get into a virtual environment, performance monitoring and tuning become a lot more important," ESG Labs' Garrett said. "In the physical world it was easier to make sure you had the right number of actuators to avoid overconsolidating and violating basic storage guidelines."
Added Taneja Group's Boles: "It's easier to implement monitoring from Day 1 than to go back and retrofit a network fabric with monitoring tools; make purchasing decisions with this in mind."
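Even before dedicated monitoring tools are purchased, a baseline can come from utilities bundled with the hypervisor. For example, esxtop's batch mode samples counters, including per-device storage latency, into a CSV file for later analysis; the interval and sample count below are arbitrary choices, not recommendations:

```shell
# Sample all esxtop counters every 10 seconds, 360 times (one hour),
# writing a CSV that perfmon or a spreadsheet can read
esxtop -b -d 10 -n 360 > esxtop-baseline.csv
```

Capturing such a baseline before consolidation makes it far easier to tell later whether a slow VM reflects storage contention or something else.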
Best practice #5: 10 Gigabit Ethernet remains a ways off
The next boost in Ethernet bandwidth will probably improve iSCSI performance and offer more network consolidation opportunities within data centers, and the transition to 10 Gigabit Ethernet will begin imminently, according to Rick Villars, vice president, storage systems and executive strategies at IDC in Framingham, Mass. "This will be the year server vendors tell people to go to 10 Gigabit Ethernet," he said.
But Villars urged caution when it comes to porting iSCSI SANs to 10 GbE networks too soon, particularly if you're dealing with implementing a virtual server environment already. "You have to decide whether iSCSI is the first or the last thing you want to bring on [to a new 10 GbE network]," Villars said. "Since it's in the early stages, I wouldn't want to go out and start with an iSCSI SAN on [10 GbE] yet."
This was first published in February 2010