Network architecture considerations for FCoE

The University of Arizona's Fibre Channel over Ethernet (FCoE) deployment illustrates how an incremental top-of-rack strategy can prove advantageous. The Tucson-based school faced a major enterprise storage systems replacement project, with plans to add more than 100 servers and two storage arrays.

The IT organization estimated that sticking with its existing data center infrastructure, which consisted of Cisco Systems Inc. Catalyst Ethernet switches and Cisco MDS 9500 Fibre Channel (FC) directors, would carry a price tag of $1.2 million just to upgrade the cabling and networking.


By contrast, the university projected that shifting to 10 Gigabit Ethernet (10 GigE) at the access layer would cost $600,000. That figure covered the purchase of twinax cabling, QLogic Corp. converged network adapters (CNAs), and two dozen Cisco Nexus 5010 top-of-rack switches connected to a pair of new modular Nexus 7000 switches, which serve as a 10 GigE aggregation point.

"With that kind of savings, it's difficult not to go that way, especially in today's economy," said Derek Masseth, senior director of infrastructure services at the university. He admitted to being nervous about committing to such nascent technology, but the university got ample help from Cisco whenever it asked.

Masseth said the school was able to reduce its per-server cable count from 10 to two in some cases, consolidate its switch ports and lower its power consumption by 30%. Another benefit, he said, was greater scalability for its storage area networks (SANs) and Ethernet networks, thanks to the new Nexus architecture.
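To put the reported numbers side by side, here is a minimal back-of-the-envelope sketch in Python. The $1.2 million and $600,000 price tags, the 10-to-two cable reduction and the 30% power figure come from the article; the variable names and the structure of the comparison are purely illustrative assumptions, not the university's actual cost model.

```python
# Back-of-the-envelope comparison of the two options described in the article.
# Dollar figures, cable counts and the power reduction are the article's numbers;
# everything else (names, structure) is an illustrative assumption.

legacy_upgrade_cost = 1_200_000   # upgrade existing Catalyst/MDS cabling and networking
fcoe_migration_cost = 600_000     # twinax, QLogic CNAs, 24x Nexus 5010, 2x Nexus 7000

savings = legacy_upgrade_cost - fcoe_migration_cost
savings_pct = savings / legacy_upgrade_cost * 100

cables_per_server_before = 10     # separate Ethernet NIC and FC HBA links
cables_per_server_after = 2       # converged 10 GigE links per server (best case)
power_reduction_pct = 30          # reported drop in power consumption

print(f"Capital savings: ${savings:,} ({savings_pct:.0f}% less than the legacy upgrade)")
print(f"Cabling per server: {cables_per_server_before} -> {cables_per_server_after}")
print(f"Reported power reduction: {power_reduction_pct}%")
```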

With the old architecture, the university's servers connected through host bus adapters (HBAs) directly to its large MDS chassis and then to its EMC Corp. Clariion arrays. Ethernet traffic went from the servers' network interface cards (NICs) to the Catalyst switches and then out to the campus network.

In the new architecture, CNAs replace the NICs and HBAs and connect to the top-of-rack Nexus 5010s, which split the Ethernet and FC traffic. The 10 GigE traffic uplinks to the core Nexus 7000 switches, while the FC traffic continues to the existing MDS directors and on to the Clariion disk arrays at 4 Gbps.
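The before-and-after traffic paths can be summarized in a short sketch. The device names mirror those in the article, but the data structures and the describe() helper below are illustrative assumptions for walking the hops, not a network configuration or the university's actual topology.

```python
# Simplified model of the two traffic paths described above.
# Device names follow the article; the structures and helper are illustrative only.

OLD_PATHS = {
    "ethernet": ["server NIC", "Catalyst switch", "campus network"],
    "fibre_channel": ["server HBA", "MDS 9500 director", "Clariion array (4 Gbps FC)"],
}

# With FCoE, both traffic types leave the server on one converged 10 GigE link
# and are separated only at the top-of-rack Nexus 5010.
NEW_COMMON_HOPS = ["server CNA (FCoE over 10 GigE)", "Nexus 5010 top of rack"]
NEW_SPLIT_PATHS = {
    "ethernet": ["Nexus 7000 aggregation", "campus network"],
    "fibre_channel": ["MDS 9500 director", "Clariion array (4 Gbps FC)"],
}

def describe(common_hops, split_paths):
    """Print each traffic type's hop-by-hop path after any shared converged segment."""
    for traffic, tail in split_paths.items():
        print(f"{traffic:>14}: " + " -> ".join(common_hops + tail))

print("Old architecture:")
describe([], OLD_PATHS)
print("\nNew architecture:")
describe(NEW_COMMON_HOPS, NEW_SPLIT_PATHS)
```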

Masseth said he remains committed to Fibre Channel storage for mission-critical data, but to save money, he would consider eliminating the MDS switches in favor of the Nexus 7000 once the latter adds support for Fibre Channel over Ethernet -- assuming Cisco is able to equip the Nexus 7000 with a comparable feature set to the MDS.

"Then all we have to do is uplink into the 7Ks and let the 7K crack the Fibre Channel off the Ethernet," said Masseth. But, he admitted he's waiting with bated breath since "the MDSes have a pretty broad feature set and feature disparity is a primary concern."

 

This was first published in November 2009
