Q

How will 10 GbE impact iSCSI speed?

The adoption of 10 gigabit Ethernet has ramped up iSCSI speed and resulted in a smaller performance gap between iSCSI and Fibre Channel.

How big of an impact does the adoption of 10 gigabit Ethernet have on iSCSI speed and what will even faster Ethernet mean for iSCSI?

A

The adoption of 10 Gigabit Ethernet (10 GbE) has definitely impacted the speed of iSCSI. Obviously, it's 10 times faster than 1 GbE. The other important thing is that where you previously had to use multipathing across several 1 GbE connections, you can now consolidate on a single 10 GbE connection. Because you have a pipe that's 10 times the size, you see a significant performance improvement. You also get somewhat simpler management by going with 10 GbE.
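As a rough back-of-the-envelope illustration (not a benchmark), the short Python sketch below compares the usable throughput of four multipathed 1 GbE ports against a single 10 GbE port. The 10% allowance for TCP/IP and iSCSI framing overhead is an assumption for illustration only; real overhead varies with workload and configuration.

    # Back-of-the-envelope comparison: four multipathed 1 GbE links
    # vs. one 10 GbE link. The ~10% protocol-overhead figure is an
    # illustrative assumption, not a measured value.
    GBIT = 1_000_000_000  # bits per second in one gigabit

    def usable_mb_per_s(line_rate_gbps: float, overhead: float = 0.10) -> float:
        """Approximate usable throughput in MB/s after protocol overhead."""
        return line_rate_gbps * GBIT * (1 - overhead) / 8 / 1_000_000

    four_x_1gbe = 4 * usable_mb_per_s(1)   # four multipathed 1 GbE ports
    one_x_10gbe = usable_mb_per_s(10)      # a single 10 GbE port
    print(f"4 x 1 GbE  : ~{four_x_1gbe:.0f} MB/s across four cables")
    print(f"1 x 10 GbE : ~{one_x_10gbe:.0f} MB/s on one cable")

Under these assumptions, four multipathed 1 GbE ports deliver roughly 450 MB/s across four cables, while one 10 GbE port delivers roughly 1,125 MB/s on a single cable.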

With 1 GbE, it's not uncommon to see four, six or eight network interface card (NIC) ports in a server to handle data traffic and management functions, which means the same number of cables connected to that server. In many cases, these multiple NIC ports are used for multipathing to get more overall bandwidth. If a dual-port 10 GbE NIC is used instead, the number of cables is drastically reduced and additional bandwidth becomes available, either in aggregate or for failover scenarios. The need for multipathing shrinks to failover alone. But 10 GbE NICs do have one requirement that 1 GbE NICs don't: the type of slot in the server. A dual-port 10 GbE NIC requires a PCI Express (PCIe) 2.0 x8 slot to get full bandwidth on both ports; a single-port 10 GbE NIC requires a PCIe 2.0 x4 or PCIe 1.0 x8 slot.
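To sanity-check those slot requirements, here is a small Python sketch using the commonly cited per-lane figures of roughly 250 MB/s for PCIe 1.0 and 500 MB/s for PCIe 2.0 (after 8b/10b encoding). Actual throughput varies by chipset and payload size, so treat these as approximations rather than guarantees.

    # Rough check of the slot requirements above. Per-lane figures are the
    # commonly cited ~250 MB/s (PCIe 1.0) and ~500 MB/s (PCIe 2.0) after
    # 8b/10b encoding; real-world numbers vary with chipset and payload.
    PER_LANE_MB_S = {"PCIe 1.0": 250, "PCIe 2.0": 500}

    def slot_bandwidth(gen: str, lanes: int) -> int:
        """Usable one-way slot bandwidth in MB/s."""
        return PER_LANE_MB_S[gen] * lanes

    NIC_PORT_MB_S = 1250  # 10 Gb/s line rate = 1,250 MB/s per port

    for gen, lanes, ports in [("PCIe 2.0", 8, 2),   # dual-port NIC
                              ("PCIe 2.0", 4, 1),   # single-port NIC
                              ("PCIe 1.0", 8, 1)]:  # single-port NIC
        slot = slot_bandwidth(gen, lanes)
        need = NIC_PORT_MB_S * ports
        verdict = "enough" if slot >= need else "NOT enough"
        print(f"{gen} x{lanes}: {slot} MB/s slot vs {need} MB/s NIC -> {verdict}")

Running this shows why the requirements line up: a PCIe 2.0 x8 slot (~4,000 MB/s) comfortably covers a dual-port NIC's ~2,500 MB/s, while a PCIe 2.0 x4 or PCIe 1.0 x8 slot (~2,000 MB/s) covers a single port's ~1,250 MB/s.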

The Institute of Electrical and Electronics Engineers (IEEE) has already ratified IEEE 802.3ba, so 40 GbE and 100 GbE are available but not commonly used. Today, 40 GbE is pretty pricey and is typically found only in switches at the core of the network -- you don't really see it on application servers or desktops. For now, 40 GbE and 100 GbE serve mainly as inter-switch links, providing trunking between switches in the data center. Once the technology becomes more pervasive and equipment prices come down, you can certainly run iSCSI on it.

This was first published in November 2012
