Just came across a pretty interesting resource on EMC’er Chad Sakac’s Virtual Geek blog (first brought to my attention by Stephen Foskett). It’s a guide to ESX and iSCSI co-developed by, among others, Andy Banta of VMware, Vaughn Stewart of NetApp, Eric Schott of Dell/EqualLogic, Adam Carter of HP/Lefthand, and David Black of EMC.
The post gets into nitty-gritty details and even includes what look like scanned-in napkin drawings to illustrate some of the complexities of performance management using ESX Server 3.x with iSCSI. There are multiple links to further resources on everything from the fundamentals of link aggregation to the full iSCSI spec.
But the bottom line for storage users is that “the ESX 3.x software initiator only supports a single iSCSI session with a single TCP connection for each iSCSI target…So, no matter what MPIO setup you have in ESX, it doesn’t matter how many paths show up in the storage multipathing GUI for multipathing to a single iSCSI Target, because there’s only one iSCSI initiator port.”
There are ways around it. In short, the post advises: "Use the ESX iSCSI software initiator. Use multiple iSCSI targets. Use MPIO at the ESX layer. Add Ethernet links and iSCSI targets to increase overall throughput. Set your expectation for no more than ~160MBps for a single iSCSI target."
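To see why the recommended layout helps, here's a rough back-of-the-envelope sketch (purely illustrative numbers, not from the post): because the ESX 3.x software initiator opens only one TCP connection per iSCSI target, each target's throughput is capped at roughly one link's worth, no matter how many NICs or paths show up. Spreading LUNs across multiple targets, each reachable over its own Ethernet link, is what raises the aggregate ceiling.

```python
# Toy model of the ESX 3.x single-session-per-target behavior described
# above. Each target rides one TCP connection (so at most one link's worth
# of bandwidth), and the physical links bound the total. Numbers are
# illustrative assumptions, not measured values from the post.

GBE_LINK_MBPS = 125  # rough 1 GbE payload ceiling in MB/s (assumption)

def max_throughput(num_targets, num_links, link_cap=GBE_LINK_MBPS):
    """Aggregate throughput ceiling in MB/s for this simplified model:
    per-target traffic is capped at one link, and the links themselves
    cap the overall total."""
    return min(num_targets * link_cap, num_links * link_cap)

# One big LUN behind a single target: extra links don't help.
print(max_throughput(num_targets=1, num_links=4))  # -> 125

# Four LUNs behind four targets over four links: roughly 4x the ceiling.
print(max_throughput(num_targets=4, num_links=4))  # -> 500
```

The model also shows why simply teaming more NICs behind one target buys nothing in this setup: the single TCP connection remains the bottleneck.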
There’s also a workaround for single LUNs needing more than 160 MBps, using an iSCSI initiator in the guest along with MPIO, though the post acknowledges, “It has a big downside…you need to manually configure the storage inside each guest, which doesn’t scale particularly well from a configuration standpoint, so most customers say they stick with the ‘keep it simple’ method.”
The best news out of this post for VMware and iSCSI users, though, is probably the pre-announcement that this behavior will be changing in future ESX releases.