In this Storage Decisions 2012 presentation, Demartek president Dennis Martin discusses the future of Ethernet as it relates to 10 GbE, 40 GbE and 100 GbE connections.
The 10 Gigabit Ethernet (10 GbE) specification was ratified 10 years ago. Back then, early adopters were basically doing switch-to-switch connections because you needed it in the core. We've seen 10 GbE pick up the pace [based on user adoption numbers] in the last two to three years. If you have blades, a lot of blade chassis have 10 GbE in them.
There are two types of connectors. There's what I call the "SFP" [small form-factor pluggable] style and the "RJ45" style. The RJ45 is the typical Ethernet 1 GbE connector you're used to, while the SFP style takes transceivers and all that kind of stuff. 10GBASE-T is the RJ45 for 10 GbE -- it's more recent, and it adds a new wrinkle here. Some switches support both types of connectors, while some only support one. When 10 GbE first came out, it was not backward-compatible with 1 GbE, at least at the physical connection and cabling layer. Ethernet is still Ethernet, but the physical part just wasn't backward-compatible. Now we're starting to see some [opportunities] to mix 1 GbE and 10 GbE.
Let's talk about 1 GbE. How many of you have more than one network interface card (NIC) port in your server? [The usual number is] four, six, eight, maybe more, but you get a lot of cables coming out the back of that server. There are a lot of ways you can go with 1 GbE -- dual port, quad port and so on. When you look at the back of those servers, what does that cabling look like? It's a real rat's nest, right? And of course, these [NICs] consume slots in servers, so if you have these dual-port or quad-port cards, they take up a lot of slots.
10 GbE is 10 times the bandwidth of 1 GbE, so it's 10 times as fast. A dual-port 10 GbE NIC can give you everything you had with your 1 GbE card, but you have failover as well. Anybody moving to the small form-factor servers, the 1U or the twins where there are two servers in 1U? The downside of that is there are hardly any slots. If you need to get a lot of network connectivity and you only have one or two slots, what are you going to do? 10 GbE is a nice way to do that because you have a lot more bandwidth available.
If you're doing single-port or dual-port 10 GbE NICs, this is what you need in the slots: PCI Express (PCIe) 2.0 x8 for a dual port, or PCIe 1.0 x8 [eight lanes] for a single port.
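As a back-of-the-envelope check on those slot requirements (this sketch is not from the talk; the per-lane figures are approximations that account for 8b/10b encoding overhead), you can compare a slot's usable bandwidth against what the NIC's ports can push:

```python
# Approximate usable bandwidth per PCIe lane, in Gbps, after
# 8b/10b encoding overhead (assumed round numbers, not exact specs).
USABLE_GBPS_PER_LANE = {"PCIe 1.0": 2.0, "PCIe 2.0": 4.0}

def slot_bandwidth_gbps(generation: str, lanes: int) -> float:
    """Approximate usable bandwidth of a PCIe slot in Gbps."""
    return USABLE_GBPS_PER_LANE[generation] * lanes

def slot_can_feed(generation: str, lanes: int, nic_ports: int,
                  port_gbps: float = 10.0) -> bool:
    """Can this slot keep nic_ports line-rate Ethernet ports busy?"""
    return slot_bandwidth_gbps(generation, lanes) >= nic_ports * port_gbps

# A single-port 10 GbE NIC fits in PCIe 1.0 x8 (~16 Gbps usable)...
print(slot_can_feed("PCIe 1.0", 8, nic_ports=1))  # True
# ...but a dual-port NIC (20 Gbps aggregate) wants PCIe 2.0 x8 (~32 Gbps).
print(slot_can_feed("PCIe 1.0", 8, nic_ports=2))  # False
print(slot_can_feed("PCIe 2.0", 8, nic_ports=2))  # True
```

The point of the arithmetic: the slot, not just the NIC, has to carry the aggregate line rate, which is why dual-port 10 GbE cards push you toward PCIe 2.0.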
Let's talk about the future of Ethernet because 40 GbE has been ratified as a specification, as has 100 GbE. The fastest single-lane connection for Ethernet is 10 Gbps. Today, when you do 40 GbE, it's a special cable that does four pairs of fibers [that do 10 Gbps each]. Then you have QSFP, which is a quad SFP [connection]. So to get to 40 GbE today, you take four lanes of [10 Gbps] and run them together. You can get to 100 GbE by taking 10 lanes of 10 Gbps.
The next bump in actual channel speed is 25 Gbps, and 25 Gbps connectors are expected late this year or even into next year. When I say "expected," I mean they'll be available at least in the labs, and component companies will start playing with them. These are called "25/28 G" because they actually run at up to 28 Gbps. From an Ethernet perspective, when you get there, you can take those same four lanes you had before, but now you can get 100 GbE. It would be four times 25 Gbps. If you want 10 lanes, then you can, of course, get 250 Gbps. If you go to 16 lanes, then you can get 400 Gbps. There are a lot of ways to bundle that.
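All of the speeds above come from the same simple bundling arithmetic: link speed equals lanes times per-lane rate. A minimal sketch (not from the talk) of how today's 10 Gbps lanes and the coming 25 Gbps lanes combine:

```python
def link_gbps(lanes: int, lane_gbps: int) -> int:
    """Aggregate link speed from bundling identical serial lanes."""
    return lanes * lane_gbps

print(link_gbps(4, 10))   # 40  -> 40 GbE today: four 10 Gbps lanes (QSFP)
print(link_gbps(10, 10))  # 100 -> 100 GbE today: ten 10 Gbps lanes
print(link_gbps(4, 25))   # 100 -> 100 GbE with four 25 Gbps lanes
print(link_gbps(10, 25))  # 250 -> ten 25 Gbps lanes
print(link_gbps(16, 25))  # 400 -> sixteen 25 Gbps lanes
```

The practical takeaway is that moving the lane rate from 10 Gbps to 25 Gbps gets you 100 GbE with the same four-lane cabling model used for 40 GbE today.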
From an end-user perspective, when will you be able to buy these products? It could be the middle of next year, the end of next year or maybe even after that. So if you're planning your next data center or build-out, some of this is still more than a year away. You might have to start thinking about where 25 Gbps will play in Ethernet's future.