How big of an impact does the adoption of 10 Gigabit Ethernet have on iSCSI speed, and what will even faster Ethernet mean for iSCSI?
The adoption of 10 Gigabit Ethernet (10 GbE) has definitely impacted the speed of iSCSI. Obviously, it's 10 times faster than 1 GbE. The other important thing is that while you might previously have had to use multipathing to aggregate multiple 1 GbE connections, you can now consolidate on a single 10 GbE connection. Because you have a pipe that's 10 times the size, you see a significant performance improvement. You also get somewhat simpler management by going with 10 GbE.
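The consolidation argument above comes down to simple arithmetic: one 10 GbE port supplies more nominal bandwidth than a typical multipath group of 1 GbE ports, with fewer cables. A minimal sketch of that comparison (nominal line rates only; real iSCSI throughput is lower because of TCP/IP and iSCSI protocol overhead):

```python
# Compare nominal aggregate bandwidth of a multipath group of 1 GbE links
# against a single 10 GbE link. Figures are line rates in Gbit/s, not
# measured iSCSI throughput.

def aggregate_gbps(num_links: int, link_speed_gbps: float) -> float:
    """Nominal aggregate bandwidth of a group of identical links."""
    return num_links * link_speed_gbps

quad_1gbe = aggregate_gbps(4, 1.0)      # four multipathed 1 GbE ports
single_10gbe = aggregate_gbps(1, 10.0)  # one 10 GbE port

print(f"4 x 1 GbE : {quad_1gbe} Gbit/s over 4 cables")
print(f"1 x 10 GbE: {single_10gbe} Gbit/s over 1 cable")
```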
With 1 GbE, it's not uncommon to see four, six or eight network interface card (NIC) ports in a server to handle data traffic and management functions. This also means the same number of cables are connected to that server. In many cases, these multiple NIC ports are used for multipathing to get more overall bandwidth. If a dual-port 10 GbE NIC is used instead, the number of cables is drastically reduced and additional bandwidth becomes available, either in aggregate or for failover scenarios. The need for multipathing would then be limited to failover. But 10 GbE NICs have a requirement that 1 GbE NICs don't, and that's the type of slot in the server. A dual-port 10 GbE NIC requires a PCI Express (PCIe) 2.0 x8 slot in the server to get full bandwidth on both ports. A single-port 10 GbE NIC requires a PCIe 2.0 x4 or PCIe 1.0 x8 slot.
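The slot requirements above follow from per-lane PCIe bandwidth: after 8b/10b encoding, a PCIe 1.0 lane carries roughly 2 Gbit/s and a PCIe 2.0 lane roughly 4 Gbit/s of usable throughput. A sketch of that math, using those assumed per-lane figures (real NICs also lose a little to protocol overhead):

```python
# Check whether a PCIe slot can feed every port of a 10 GbE NIC at line rate.
# Per-lane usable throughput in Gbit/s after 8b/10b encoding (assumed values
# from the PCIe 1.0 / 2.0 signaling rates of 2.5 and 5.0 GT/s).
LANE_GBPS = {"pcie1": 2.0, "pcie2": 4.0}

def slot_gbps(gen: str, lanes: int) -> float:
    """Usable bandwidth of a slot of the given generation and lane width."""
    return LANE_GBPS[gen] * lanes

def can_feed(gen: str, lanes: int, nic_ports: int, port_gbps: float = 10.0) -> bool:
    """True if the slot has enough bandwidth for all NIC ports at line rate."""
    return slot_gbps(gen, lanes) >= nic_ports * port_gbps

print(can_feed("pcie2", 8, 2))  # dual-port NIC in PCIe 2.0 x8: 32 >= 20, OK
print(can_feed("pcie2", 4, 1))  # single-port NIC in PCIe 2.0 x4: 16 >= 10, OK
print(can_feed("pcie1", 8, 1))  # single-port NIC in PCIe 1.0 x8: 16 >= 10, OK
print(can_feed("pcie1", 8, 2))  # dual-port NIC in PCIe 1.0 x8: 16 < 20, not OK
```

This is why the article pairs the dual-port NIC with a PCIe 2.0 x8 slot but allows a single-port NIC to use either a PCIe 2.0 x4 or a PCIe 1.0 x8 slot.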
The Institute of Electrical and Electronics Engineers (IEEE) has already ratified IEEE 802.3ba, so 40 GbE and 100 GbE are available but not commonly used. Today, 40 GbE is pretty pricey and is typically found only in the switches at the core of the network -- you don't really see it on application servers or desktops. Once it becomes a little more pervasive and the prices come down on that equipment, you can certainly run iSCSI on it. In the data center, 40 GbE and 100 GbE are used as inter-switch links to provide trunking between switches.
Related Q&A from Dennis Martin