
10 Gigabit Ethernet technology, data center move offer storage options to law firm

Law firm ties upgrade to 10 Gigabit Ethernet technology with data center move, shift to NAS storage and purchase of the Cisco Unified Computing System (UCS)

In theory, upgrading to 10 Gigabit Ethernet technology might be a snap for an IT shop that already uses block-based iSCSI or file-based NAS storage with Gigabit Ethernet. But if the shift coincides with a data center move or the purchase of unfamiliar technology, the IT team might need to factor in more time and resources to accommodate the learning curve.

McKenna Long & Aldridge (MLA) LLP, an international law firm with 10 offices and 475 attorneys, synchronized its move to 10 GbE with a migration to a new data center in Reston, Va. MLA also purchased Cisco Systems Inc.'s Unified Computing System (Cisco UCS) and Nexus 5010 switches, and decided to change from NetApp Inc. iSCSI to NFS-based storage.

MLA spent months working with Cisco and NetApp to design its data center. But even with Cisco, NetApp and reseller CDW Corp. lending a hand, the law firm's IT staff still had to put in the effort to learn how to use the new equipment.

MLA kept its storage network topology intact with dual paths to everything, but nearly all of the component pieces changed with its move to 10 Gigabit Ethernet technology. The purchase order included new twinax copper and fiber optic cables, SFP+ transceivers, network adapters, blade servers, top-of-rack switches and storage.

"We're very familiar with Ethernet, so that was easy, physically connecting everything. On the NetApp side, that was fine," said Sam Robin, senior manager of enterprise systems at MLA. "But not being familiar with the Cisco UCS, there was a lot to learn. That was our biggest challenge."

Cisco's UCS includes a blade server chassis with either half-blade or full-blade servers, a 20-port or 40-port switch (or fabric interconnect), a fabric extender and a device manager. Nexus 5010 switches run NX-OS, an operating system different from the one on Cisco's other Ethernet switches, so MLA's IT staff had to adjust to changes in the command-line interface and the method for trunking ports together.
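The trunking adjustment the staff faced looks roughly like the following. This is an illustrative NX-OS sketch, not MLA's actual configuration; the VLAN numbers, port-channel ID and interface names are placeholders:

```
! Enable LACP, which NX-OS treats as a feature that must be switched on
feature lacp

! Define a port channel carrying tagged traffic for two example VLANs
interface port-channel 10
  switchport mode trunk
  switchport trunk allowed vlan 10,20

! Bundle two physical 10 GbE ports into that channel via LACP
interface ethernet 1/1-2
  switchport mode trunk
  channel-group 10 mode active
```

The "feature" model and some command spellings differ from IOS, which is the kind of command-line change the article describes.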

The law firm also experienced a bit of culture shock when using the blade servers for the first time in May. Each of its two half blades had only a single converged network adapter (CNA) and two Ethernet ports, and each of its two full blades had two CNAs and four Ethernet ports.

"We paused and said, 'Wow, is this going to be [enough]?'" Robin recalled. "If you look at VMware in the Gigabit environment, you throw four, six NICs at it and that way you can split off your VMotion traffic. You can split off your service console and your LAN traffic, the iSCSI traffic. This is all bundled into one pipe."

The single pipe worked just fine for the I/O MLA requires. The IT team had used capacity planning tools to gauge its network and disk I/O needs, so it knew the bandwidth would handle its systems with ease.

MLA's old data center in Ashburn, Va., used 4 Gbps Fibre Channel with EMC Corp. Clariion disk arrays, but the company had purchased two NetApp FAS2040 NAS boxes nearly two years ago for its Washington and Los Angeles offices.

"If we didn't already have the iSCSI experience with the other two NAS systems, we probably would have stuck with Fibre [Channel]," Robin said. "But because we did, we felt comfortable doing it."

MLA invested in a pair of NetApp FAS3140s with the data center move. The decision to shift from iSCSI to NFS storage represented an adjustment, as the law firm went from using VMDK files with its VMware Inc. virtual server environment to using mount points with NFS, Robin said. The company controls the NAS storage through its VMware ESX host.

"That's just a simple couple of clicks," Robin said. "Instead of selecting iSCSI, you select NFS and then you give it the IP address. Not too complicated."
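The "couple of clicks" Robin describes can also be done from the service console. As a hedged sketch (the NetApp filer address and export path below are placeholders, not MLA's), mounting an NFS datastore on an ESX host of that era looks like:

```shell
# Classic ESX 3.x/4.x: add an NFS export as a datastore named nfs_ds01
esxcfg-nas -a -o 192.0.2.10 -s /vol/vm_datastore nfs_ds01

# List configured NFS datastores to confirm the mount
esxcfg-nas -l
```

Compared with iSCSI, there is no LUN to present or VMFS volume to format; the host simply mounts the exported volume and stores the VMs' files on it.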

The reason MLA decided to switch from iSCSI to NFS centered on the VMDK files. NFS data stores can handle hundreds of virtual machines (VMs) per volume compared with 20 to 25 VMs on an iSCSI-based LUN, Robin said.

"This made a lot of sense to us because various NetApp operations like deduplication, snapshot and SnapMirror happen at the volume/LUN level," Robin explained. "This allows us to obtain optimal deduplication ratios and gives us more flexibility with NetApp's SnapMirror and snapshots features. Other benefits include ease of configuration, management and provisioning."

Was move to 10 GbE the right decision?

If the IT team could turn back the clock, the one change it would probably make is the cabling between the NAS storage and the Nexus 5010 switches. NetApp recommended that the law firm go with fiber optic cables to give MLA the option to move to Fibre Channel over Ethernet (FCoE) in the future, especially since its network adapters can handle FCoE, iSCSI or NAS.

But the decision to use fiber cables meant MLA also had to purchase eight SFP+ transceivers. The price of the fiber optic cable didn't bother Robin, but the cost of the SFP+ transceiver modules did, at $700 to $800 apiece.

"We probably wouldn't have done it if we knew how expensive the transceivers were," Robin said. "But this [overall price] quote was massive, with lots of parts in it, and we just missed how much those cost."

Twinax copper is limited to shorter distances than fiber optic cables, but since the NAS storage is in close proximity to the Nexus 5010 in the next rack, Robin said MLA could have gotten away with copper, especially since it likely won't use Fibre Channel over Ethernet.

"It would have to be some very, very unique application," he said.

Technically speaking, MLA probably could have forgone 10 GbE too because its existing systems ran fine on Gigabit Ethernet and 4 Gbps Fibre Channel.

"We had room there. We could have stuck with it," Robin said. "The idea was to future-proof the data center."

The faster 10 Gigabit Ethernet technology did have its advantages. Servers that once had 11 cables now have only a few and much improved airflow. Backups and restores that once took an hour or more are now completed in minutes. Some back-end applications are seeing huge performance gains, according to Robin.

"We're happy with the storage, the storage-area network, the UCS, all of it," Robin said. "We're very confident that this system can handle anything we throw at it."
