
The University of Arizona consolidates networks with Fibre Channel over Ethernet (FCoE)

University reduces Ethernet cabling and saves FC switch ports by consolidating with Cisco Nexus FCoE switches, but end-to-end FCoE to storage arrays remains a long way off.

Although Fibre Channel over Ethernet (FCoE) implementations are in early days, The University of Arizona recently took the plunge with Cisco Systems Inc. Nexus FCoE switches and MDS Fibre Channel (FC) directors connected to Dell Inc. servers and EMC Corp. storage arrays.

Derek Masseth, senior director, infrastructure services at the university, estimates the institution saved more than $1 million in capital expenses, simplified cable management and saved power while upgrading to 10 Gigabit Ethernet (10 GbE). In addition, Masseth said the upgrade saved him from having to add FC switching and put the university in position for potential end-to-end FCoE to the storage array down the road, although he isn't sure yet if that would be worthwhile.

As part of an overhaul of all of its enterprise resource planning (ERP) systems that began in early 2008, the university installed 24 top-of-the-rack Nexus 5010 switches that support FCoE, and runs them through two Nexus 7010 core switches in the data center. Another Nexus 7010 switch sits in a disaster recovery site. The Nexus 5010 also connects to the school's storage-area network (SAN) through Cisco MDS 9509 FC directors. The university is using QLogic converged network adapters (CNAs) with the Nexus FCoE switches.
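The access-layer design described above -- FCoE from the server CNAs into a Nexus 5010, with native Fibre Channel uplinks back to the MDS directors -- is typically wired up in NX-OS along these lines. This is a minimal sketch, not the university's actual configuration; the VLAN, VSAN, and interface numbers are hypothetical:

```
! Enable FCoE on the Nexus 5010 (interface and VSAN numbers are illustrative)
feature fcoe

! Map a dedicated FCoE VLAN to a VSAN
vlan 100
  fcoe vsan 100

vsan database
  vsan 100

! Bind a virtual Fibre Channel interface to the 10 GbE port facing a server CNA
interface vfc10
  bind interface ethernet 1/10
  no shutdown

vsan database
  vsan 100 interface vfc10

! Native FC uplink toward the MDS 9509 director
interface fc2/1
  no shutdown
```

The key idea is that one 10 GbE cable from each server carries both LAN traffic and the FCoE-encapsulated storage traffic, which the Nexus 5010 then breaks back out onto a small number of FC uplinks to the directors.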

Masseth said the university conducted rigorous testing of Nexus 5020 switches last February, put Nexus 5010 and 7000 devices in production in June, and had them "racked, stacked and running data" in July.

The ERP project included 100 new Dell servers. "We thought, 'What would it take to support that?'" Masseth said. "We realized it might be a good time to take a look at next-generation infrastructure. That led us down the path to looking at Nexus and FCoE."

He said the university reduced capital expenses by 50% -- up to $1.2 million -- "right out of the gate" by moving to a top-of-the-rack switching architecture with Nexus 5010s, and "dramatically" simplified cable management and operational expenses. His staff projects a 30% reduction in power consumption with the new setup.

FCoE not end-to-end to storage yet

The University of Arizona has more than 300 TB of data on a variety of EMC Clariion storage arrays. Fibre Channel over Ethernet doesn't yet extend to the storage, but Masseth said consolidating his server connectivity helped him avoid having to add FC switches.

"With this project we would have exhausted capacity to our MDSs, but being able to aggregate Fibre Channel into the rack and run a couple of cables back into directors saved us from having to make a move there to access layer switching," he said. "We recovered ports by doing it this way."

End-to-end FCoE isn't expected to be mainstream before 2011. Masseth said he doesn't see much need for FCoE to his storage array now, but could in the future if he doesn't lose any services currently available to him with his MDS directors, such as virtual SANs (VSANs).

"I can see the potential of FCoE all the way to the array," he said. "I think it's on EMC's roadmap, and it's on Cisco's roadmap to get FCoE in Nexus 7000s. But it will really come down to breadth of feature set support. There are storage services we need, and they'll have to be available for us to attach FCoE to storage. We're a ways off from attaching Clariions into Nexus 7000s and decommissioning MDSs."
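VSANs, the MDS feature Masseth cites, partition a single physical Fibre Channel fabric into isolated logical fabrics, each with its own fabric services and zoning. On an MDS director they are defined roughly as follows -- a sketch only, with hypothetical names and port numbers:

```
! Define two isolated logical fabrics on one physical MDS switch
vsan database
  vsan 10 name PRODUCTION
  vsan 20 name DR-REPLICATION
  ! Assign physical FC ports to each VSAN (port numbers are illustrative)
  vsan 10 interface fc1/1
  vsan 20 interface fc1/2
```

For end-to-end FCoE to replace the MDS layer, this kind of fabric partitioning would need to carry over to the Ethernet side, which is why feature parity matters to Masseth before decommissioning the directors.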

Orchestrating storage and network management

One hurdle seen in a move to Fibre Channel over Ethernet is a struggle between the network and storage teams over control of the converged infrastructure. Masseth said both of those teams report to him, and he took pains to alleviate friction between them.

"That's an interesting part of the deployment," he said. "You have to go into this convergence idea with your eyes wide open. Ours is a relatively small shop, but that was still one of our most substantial hurdles even though both teams report to me, and I can sit them down in a room to iron out who's responsible for what."

Masseth said education played a key role in keeping the teams on the same page. He added that the process required some "orchestration" on his part.

"It's a pretty dramatic learning curve on both sides of the fence, although probably less dramatic than when we put voice on IP," he said. "It's a culture shift. Early on, it was difficult for either team to recognize the need to work together. I had to issue mandates like, 'Thou shalt go to the same training.' Once we orchestrated the right events, it didn't require much refereeing."

Standards still in progress

Another potential hurdle is that standards for FCoE and the enhanced data center Ethernet protocols required to run FCoE are immature and, in some cases, not finalized. Masseth said he kept a close eye on the standards developments, and did a lot of testing before putting Fibre Channel over Ethernet gear in production.

"I'd be lying if I said it didn't give me pause, but we were hammering on devices in the lab, getting good support from Cisco, and watching standards with bated breath," he said. "We realized we were taking a little bit of a gamble, but for the benefits we saw and the confidence we had in the standards track, it was easy to take a little risk up front. Really, based on our experience in the lab, we weren't taking any risk because we knew it would work."
