CHICAGO — The emergence of Ethernet alongside Fibre Channel (FC) in storage networking requires careful planning and new strategies for future implementations, according to presenters on the opening day of the Storage Decisions conference.
"Welcome to storage networking; it changed while we weren't looking," said Howard Marks, a consultant and chief scientist at DeepStorage.net, during a session called "Decision Time: Storage Networking Technology."
Marks detailed the major drivers of that change. One is the way server virtualization requires a network upgrade and shared storage, and another is the coming convergence of Fibre Channel and Ethernet networks into Fibre Channel over Ethernet (FCoE).
Network convergence: Change comes slowly
Martin said a big problem in today's non-converged world is that running separate Fibre Channel and Ethernet networks requires many discrete parts, especially once cabling and adapter cards are counted.
"What's the bang for your buck [with FCoE]?" he said. "You get speed but also fewer pieces of infrastructure."
The possible tradeoffs: a single-switch, single-adapter architecture can leave you with a single point of failure, and infrastructure convergence requires storage and networking teams to work together rather than as separate departments.
Martin cautioned that true convergence will come slowly, and said not to expect many FCoE storage targets until 2011 or later. Still, this could be the time to plan your converged network.
"Fibre Channel infrastructure changes slowly," he said. "When you build out a Fibre Channel SAN, do you expect to upgrade it in a year? This is a long-term thing. When you build out a data center, that's the time to think about this."
16 Gbps Fibre Channel, 100 Gigabit Ethernet and what about InfiniBand?
Martin said the roadmap includes several speed boosts beyond the current 8 Gbps Fibre Channel and 10 GbE. He pointed out that while disk drive vendors will support 6 Gbps SAS instead of future generations of Fibre Channel, 16 Gbps Fibre Channel is on its way as a network interface, with 32 Gbps Fibre Channel to follow. "When somebody says Fibre Channel is going away, ask them if they're talking about the interface side or the disk drive side," he said.
On the Ethernet side, he said 40 Gigabit and 100 Gigabit Ethernet will provide "another speed boost. It's all about how fast do you want to go, how much speed do you need?" FCoE will follow the Ethernet roadmap for speed increases, and Martin predicted 10 GbE and FCoE will be embedded in server motherboards down the road. For those who don't want to wait for speed, InfiniBand is already at 40 Gbps and on its way to 80 Gbps.
Martin said the increase in speed will also require changes in cabling from copper to optical. "Cable deployments change very slowly, so choose 10 GigE cabling wisely," he advised.
DeepStorage's Marks agreed that while FCoE is showing up in network racks today, it isn't quite mature yet for storage, and it represents a hot political issue. "FCoE is usable today, but you must have a top-of-rack switch on top of every rack," he said. "And as you converge with hardware, you have to converge your people, and that's probably the hardest part."
Marks said he doesn't expect the Fibre Channel roadmap to extend past 16 Gbps. "I don't expect 32 gig Fibre Channel to make it out the door," he said. "Forty gig Ethernet and FCoE will be used by that point."
Marks also looked at what he called "weird" storage network options, including direct-attached storage (DAS). He said that while DAS is making a comeback for applications such as Microsoft Exchange, it has limitations, such as the inability to run VMotion. He also explored "oddball" types of storage, such as Coraid's ATA over Ethernet (AoE), HyperSCSI and Zetera Corp.'s Z-SAN Storage over IP (SoIP), that use various types of Ethernet. "These are all designed to minimize CPU utilization," he said. "But they're minimizing utilization of a non-scarce resource, so who cares?"